How to use open source tools to keep tabs on enterprise applications
Everyone should monitor their production systems to understand how they behave. Monitoring helps you understand your workloads and ensures you get notified when something fails, or is about to.
In Java EE applications, you can choose to monitor many metrics on your servers that will identify workloads and issues with applications. For example, you could monitor the Java heap, active threads, open sockets, CPU utilization, and memory usage.
If you have a Java EE application deployed to Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes, this article is for you.
Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes can help you quickly create Oracle WebLogic configurations on Oracle Cloud, for example, to allocate network resources, reuse existing virtual cloud networks or subnets, configure the load balancer, integrate with Oracle Identity Cloud Service, or configure Oracle Database.
In this article, I’ll show you how to use two open source tools—Grafana and Prometheus—to monitor an Oracle WebLogic domain deployed in Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes.
Note that this procedure uses several Helm charts so it can walk through the individual steps required to install and configure Prometheus and Grafana. For your own deployment, you may prefer to consolidate these steps into a single Helm chart for Prometheus or Grafana.
Prerequisites
Before you get started, you should have installed at least one of the Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes applications from Oracle Cloud Marketplace, which are offered in UCM and BYOL editions. (UCM refers to the Universal Credits model; BYOL stands for bring your own license.)
Deploy WebLogic Monitoring Exporter to your Oracle WebLogic domain
Here are the step-by-step instructions.
1. Open a terminal window and access the administration instance that is created with Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes. (Detailed instructions are in the product documentation.)
2. Go to the root Oracle Cloud Infrastructure File Storage Service folder, which is /u01/shared.
cd /u01/shared
3. Download the WebLogic Monitoring Exporter war file from GitHub into the wlsdeploy folder.
wget https://github.com/oracle/weblogic-monitoring-exporter/releases/download/v2.0.0/wls-exporter.war -P wlsdeploy/applications
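As a quick optional check, confirm that the war file landed where the following steps expect it:
ls -l wlsdeploy/applications/wls-exporter.war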
4. Include the sample exporter configuration file in the war file.
wget https://raw.githubusercontent.com/oracle/weblogic-monitoring-exporter/master/samples/kubernetes/end2end/dashboard/exporter-config.yaml -O config.yml
zip wlsdeploy/applications/wls-exporter.war -m config.yml
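Because zip -m moves config.yml into the archive (deleting the source file), you can optionally confirm the configuration is now packaged inside the war; this assumes the unzip utility is available on the administration instance.
# List the war's contents and look for the embedded exporter configuration
unzip -l wlsdeploy/applications/wls-exporter.war | grep config.yml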
5. Create a WebLogic Server Deploy Tooling archive containing the wls-exporter.war file.
zip -r weblogic-exporter-archive.zip wlsdeploy/
6. Create a WebLogic Server Deploy Tooling model to deploy the WebLogic Monitoring Exporter application to your domain.
ADMIN_SERVER_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_admin_server_name')
DOMAIN_CLUSTER_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_cluster_name')
cat > deploy-monitoring-exporter.yaml << EOF
appDeployments:
  Application:
    'wls-exporter':
      SourcePath: 'wlsdeploy/applications/wls-exporter.war'
      Target: '$DOMAIN_CLUSTER_NAME,$ADMIN_SERVER_NAME'
      ModuleType: war
      StagingMode: nostage
EOF
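Because the heredoc substitutes the two metadata variables when the file is created, it's worth displaying the generated model to confirm that Target contains your actual cluster and administration server names rather than empty values:
cat deploy-monitoring-exporter.yaml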
7. Deploy the WebLogic Monitoring Exporter application to your domain using the update-domain pipeline in Jenkins.
8. From the Jenkins dashboard, open the Pipeline update-domain screen and specify the parameters as follows (see Figure 1):
◉ For Archive_Source, select Shared File System.
◉ For Archive_File_Location, enter /u01/shared/weblogic-exporter-archive.zip.
◉ For Domain_Model_Source, select Shared File System.
◉ For Model_File_Location, enter /u01/shared/deploy-monitoring-exporter.yaml.
Figure 1. The Pipeline update-domain parameters screen
Then click the Build button. To verify that the deployment is working, first capture the cluster address by running the following commands:
INGRESS_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ingress_namespace')
SERVICE_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.service_name')
WLS_CLUSTER_URL=$(kubectl get svc "$SERVICE_NAME-external" -n $INGRESS_NS -ojsonpath="{.status.loadBalancer.ingress[0].ip}")
Then use curl to request the wls-exporter landing page. The output should look something like the following:
[opc@wlsoke-admin ~]$ curl -k https://$WLS_CLUSTER_URL/wls-exporter
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Weblogic Monitoring Exporter</title>
</head>
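Beyond the landing page, you can also spot-check that metrics are being produced. The exporter serves Prometheus-format metrics at /wls-exporter/metrics behind WebLogic basic authentication; the credentials below are placeholders for your administration console user.
# Fetch the first few exported metrics (replace myadminuser/myadminpwd with your own credentials)
curl -k -u myadminuser:myadminpwd https://$WLS_CLUSTER_URL/wls-exporter/metrics | head -n 20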
Create PersistentVolume and PersistentVolumeClaim for Grafana, Prometheus Server, and Prometheus Alertmanager
Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes creates a shared file system using Oracle Cloud Infrastructure File Storage Service, which is mounted across the different pods running in the Oracle Container Engine for Kubernetes cluster and the administration host. To store data on that shared file system, the next step is to create subpaths for Grafana and Prometheus to store data.
This procedure will create a Helm chart with PersistentVolume (PV) and PersistentVolumeClaim (PVC) for Grafana, Prometheus Server, and Prometheus Alertmanager. This step doesn’t use the Prometheus and Grafana charts for creating the PVC because those don’t yet support Oracle Cloud Infrastructure Container Engine for Kubernetes with Oracle Cloud Infrastructure File Storage Service.
1. Open a terminal window and access the administration instance that is created with Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes.
2. Create the monitoringpv folder with a templates subfolder. You'll place the Helm chart here.
mkdir -p monitoringpv/templates
3. Create the Chart.yaml file in the monitoringpv folder.
cat > monitoringpv/Chart.yaml << EOF
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for creating pv and pvc for Grafana, Prometheus and Alertmanager
name: monitoringpv
version: 0.1.0
EOF
4. Similarly, create the values.yaml file required for the chart using the administration instance metadata.
cat > monitoringpv/values.yaml << EOF
exportpath: $(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.fss_export_path')
classname: $(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.fss_chart_name')
serverip: $(kubectl get pv jenkins-oke-pv -o jsonpath='{.spec.nfs.server}')
EOF
5. Create the target folders on the shared file system.
mkdir /u01/shared/alertmanager
mkdir /u01/shared/prometheus
mkdir /u01/shared/grafana
6. Create template files for PV and PVC for Grafana, Prometheus Server, and Prometheus Alertmanager.
cat > monitoringpv/templates/grafanapv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-grafana
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  mountOptions:
    - nosuid
  nfs:
    path: {{ .Values.exportpath }}{{"/grafana"}}
    server: "{{ .Values.serverip }}"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "{{ .Values.classname }}"
  volumeMode: Filesystem
EOF
cat > monitoringpv/templates/grafanapvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-grafana
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: "{{ .Values.classname }}"
  volumeMode: Filesystem
  volumeName: pv-grafana
EOF
cat > monitoringpv/templates/prometheuspv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-prometheus
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  mountOptions:
    - nosuid
  nfs:
    path: {{ .Values.exportpath }}{{"/prometheus"}}
    server: "{{ .Values.serverip }}"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "{{ .Values.classname }}"
  volumeMode: Filesystem
EOF
cat > monitoringpv/templates/prometheuspvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-prometheus
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: "{{ .Values.classname }}"
  volumeMode: Filesystem
  volumeName: pv-prometheus
EOF
cat > monitoringpv/templates/alertmanagerpv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-alertmanager
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  mountOptions:
    - nosuid
  nfs:
    path: {{ .Values.exportpath }}{{"/alertmanager"}}
    server: "{{ .Values.serverip }}"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "{{ .Values.classname }}"
  volumeMode: Filesystem
EOF
cat > monitoringpv/templates/alertmanagerpvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-alertmanager
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: "{{ .Values.classname }}"
  volumeMode: Filesystem
  volumeName: pv-alertmanager
EOF
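Before installing, you can optionally render the chart locally to confirm that the NFS server IP, export path, and storage class from values.yaml are substituted where you expect; helm template expands the manifests without creating anything in the cluster.
# Render the chart locally; no cluster resources are created
helm template monitoringpv monitoringpv --namespace monitoring | head -n 40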
7. Install the monitoringpv Helm chart you created.
helm install monitoringpv monitoringpv --create-namespace --namespace monitoring --wait
8. Verify that the output looks something like the following:
[opc@wlsoke-admin ~]$ helm install monitoringpv monitoringpv --create-namespace --namespace monitoring --wait
NAME: monitoringpv
LAST DEPLOYED: Wed Apr 15 16:43:41 2021
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
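To confirm the volumes and claims are wired together, list them; each claim should report a Bound status against its matching volume.
kubectl get pv pv-grafana pv-prometheus pv-alertmanager
kubectl get pvc -n monitoring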
Install the Prometheus Helm chart
These instructions are a subset of those in the Prometheus Community Kubernetes Helm Charts GitHub project. Do these steps in the same terminal window where you accessed the administration instance created with Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes:
1. Add the required Helm repositories.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add kube-state-metrics https://kubernetes.github.io/kube-state-metrics
helm repo update
At this point, you can optionally inspect all of the chart's configurable options by displaying the Prometheus values.yaml file.
helm show values prometheus-community/prometheus
2. Download the sample Prometheus values file from the WebLogic Monitoring Exporter GitHub project into the prometheus directory.
wget https://raw.githubusercontent.com/oracle/weblogic-monitoring-exporter/master/samples/kubernetes/end2end/prometheus/values.yaml -P prometheus
3. To customize your Prometheus deployment with your own domain information, create a custom-values.yaml file to override some of the values from the prior step.
DOMAIN_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_domain_namespace')
DOMAIN_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_domain_uid')
DOMAIN_CLUSTER_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_cluster_name')
cat > prometheus/custom-values.yaml << EOF
alertmanager:
  prefixURL: '/alertmanager'
  baseURL: http://localhost:9093/alertmanager
nodeExporter:
  hostRootfs: false
server:
  prefixURL: '/prometheus'
  baseURL: "http://localhost:9090/prometheus"
extraScrapeConfigs: |
  - job_name: '$DOMAIN_NAME'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_label_weblogic_domainUID, __meta_kubernetes_pod_label_weblogic_clusterName]
        action: keep
        regex: $DOMAIN_NS;$DOMAIN_NAME;$DOMAIN_CLUSTER_NAME
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: \$1:\$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name
    basic_auth:
      username: --FIX ME--
      password: --FIX ME--
EOF
4. Open the custom-values.yaml file and update the username and password. Use the credentials you use to log in to the administrative console.
    basic_auth:
      username: myadminuser
      password: myadminpwd
5. Install the Prometheus chart.
helm install --wait prometheus prometheus-community/prometheus --namespace monitoring -f prometheus/values.yaml -f prometheus/custom-values.yaml
6. Verify that the output looks something like the following:
[opc@wlsoke-admin ~]$ helm install --wait prometheus prometheus-community/prometheus --namespace monitoring -f prometheus/values.yaml -f prometheus/custom-values.yaml
NAME: prometheus
LAST DEPLOYED: Wed Apr 15 22:35:15 2021
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
. . .
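You can also confirm the Prometheus pods came up; with this chart's defaults, expect the server, Alertmanager, kube-state-metrics, and node-exporter pods to reach Running status.
kubectl get pods -n monitoring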
7. Create an ingress file to expose Prometheus through the internal load balancer.
cat << EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: prometheus
  namespace: monitoring
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: prometheus-server
              servicePort: 80
            path: /prometheus
EOF
8. The Prometheus dashboard should now be available at the same IP address used to access the Oracle WebLogic Server Administration Console or the Jenkins console, but at the /prometheus path (see Figure 2).
Figure 2. The Prometheus dashboard
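To confirm that Prometheus is actually scraping the domain, check Status > Targets in the dashboard, or query the Prometheus HTTP API through the same ingress path. The sketch below assumes you substitute the same address you used to reach the dashboard; the jq filter is optional.
# Each WebLogic pod that is scraped successfully reports up == 1
curl -s http://<INTERNAL_LB_IP>/prometheus/api/v1/query --data-urlencode 'query=up' | jq '.data.result[].metric'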
Install the Grafana Helm chart
The instructions described here are a subset of those in the Grafana Community Kubernetes Helm Charts GitHub project. As before, do these steps within the same terminal window where you accessed the administration instance created with Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes.
1. Add the Grafana charts repository.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
2. Create a values.yaml file to customize the Grafana installation.
INGRESS_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ingress_namespace')
SERVICE_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.service_name')
INTERNAL_LB_IP=$(kubectl get svc "$SERVICE_NAME-internal" -n $INGRESS_NS -ojsonpath="{.status.loadBalancer.ingress[0].ip}")
mkdir grafana
cat > grafana/values.yaml << EOF
persistence:
  enabled: true
  existingClaim: pvc-grafana
admin:
  existingSecret: "grafana-secret"
  userKey: username
  passwordKey: password
grafana.ini:
  server:
    domain: "$INTERNAL_LB_IP"
    root_url: "%(protocol)s://%(domain)s:%(http_port)s/grafana/"
    serve_from_sub_path: true
EOF
3. Create a Kubernetes secret named grafana-secret containing the admin credentials for the Grafana server (substitute your own credentials, of course).
kubectl --namespace monitoring create secret generic grafana-secret --from-literal=username=yourusername --from-literal=password=yourpassword
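If you want to double-check what was stored, the secret's values can be decoded from base64; this is a quick way to catch typos before Grafana starts.
kubectl get secret grafana-secret -n monitoring -o jsonpath='{.data.username}' | base64 -d; echo
kubectl get secret grafana-secret -n monitoring -o jsonpath='{.data.password}' | base64 -d; echo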
4. Install the Grafana Helm chart.
helm install --wait grafana grafana/grafana --namespace monitoring -f grafana/values.yaml
5. Verify that the output looks something like the following:
[opc@wlsoke-admin ~]$ helm install --wait grafana grafana/grafana --namespace monitoring -f grafana/values.yaml
NAME: grafana
LAST DEPLOYED: Fri Apr 16 16:40:21 2021
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
. . .
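As with Prometheus, you can confirm the pod is healthy and that it picked up the pvc-grafana claim before exposing the dashboard.
kubectl get pods -n monitoring | grep grafana
kubectl describe pvc pvc-grafana -n monitoring | grep -i 'used by'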
6. Expose the Grafana dashboard using the ingress controller.
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: grafana
  namespace: monitoring
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: grafana
              servicePort: 80
            path: /grafana
EOF
7. The Grafana dashboard should now be available at the same IP address used to access the Oracle WebLogic Server Administration Console, the Jenkins console, and Prometheus, but at the /grafana path (see Figure 3). Log in with the credentials you configured in the grafana-secret secret.
Figure 3. The Grafana login screen
Create the Grafana data source
For this article, I’ll reuse the steps described in the WebLogic Monitoring Exporter sample. You can find the full documentation on how to create Grafana data sources in the Grafana documentation.
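As an alternative to the UI steps below, the Grafana Helm chart can also provision the data source declaratively. A minimal sketch, assuming the chart's default prometheus-server service name and the /prometheus prefix configured earlier, is to append a block like this to grafana/values.yaml and rerun helm upgrade --wait grafana grafana/grafana --namespace monitoring -f grafana/values.yaml:
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus-server.monitoring.svc.cluster.local/prometheus
        isDefault: true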
1. Once you log in to the Grafana dashboard (as shown in Figure 3), go to Configuration > Data Sources (see Figure 4) and click Add data source to go to the screen where you add the new data source (see Figure 5).
Figure 4. The Configuration menu with the Data Sources option
Figure 5. The screen where you add a new data source
2. Select Prometheus as the data source type (see Figure 6).
Figure 6. Choose Prometheus as the data source type.
3. Set the URL to http://<INTERNAL_LB_IP>/prometheus and click the Save & Test button (see Figure 7).
Important note: INTERNAL_LB_IP is the same IP address you use to access Grafana, Prometheus, Jenkins, and the Oracle WebLogic Server Administration Console. (See the product documentation for how to find that address.)
Figure 7. Set the URL for the data source; be sure to use your own IP address.
Import the Oracle WebLogic Server dashboard into Grafana
1. Log in to the Grafana dashboard. Navigate to Dashboards > Manage and click Import (see Figure 8).
Figure 8. The screen for importing a new dashboard
2. Open the sample Oracle WebLogic Server dashboard JSON file from the WebLogic Monitoring Exporter GitHub project (it sits alongside the exporter-config.yaml you downloaded earlier) in a browser. Copy its contents into the Import via panel json section of the dashboard screen and click Load (see Figure 9).
Figure 9. This is where you’ll paste the JSON code.
3. Click the Import button and verify that you can see the Oracle WebLogic Server dashboard in Grafana (see Figure 10). That's it! You're done!
Figure 10. The Oracle WebLogic Server dashboard running within Grafana