Install Prometheus into a namespace called `monitoring`, as in the command below. By default, Prometheus only discovers PodMonitors within its own namespace. This should be disabled by setting `prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues` to `false` so that Akri's custom PodMonitors can be discovered. Additionally, the Grafana service can be exposed to the host by making it a NodePort service. It may take a minute or so to deploy all the components.
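A minimal install command matching these settings might look like the following sketch. The release name `prometheus`, the `prometheus-community` chart repository, and the `monitoring` namespace are taken from the upgrade command shown later in this guide; the repository URL and the `--create-namespace` flag are assumptions for a fresh cluster:

```shell
# Add the chart repository that hosts kube-prometheus-stack (assumed repo URL).
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install into the monitoring namespace, allowing Prometheus to discover
# PodMonitors outside its own Helm release, and exposing Grafana as a NodePort.
helm install prometheus prometheus-community/kube-prometheus-stack \
  --set grafana.service.type=NodePort \
  --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \
  --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false \
  --namespace monitoring --create-namespace
```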
The Prometheus dashboard can also be exposed to the host by adding `--set prometheus.service.type=NodePort`. If intending to expose metrics from a Broker Pod via a ServiceMonitor, also set `prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues` to `false`.
The Akri Agent and Controller publish metrics to port 8080 at a `/metrics` endpoint. However, these cannot be accessed by Prometheus without creating PodMonitors, which are custom resources that tell Prometheus which Pods to monitor. These components can all be automatically created and deployed via Helm by setting `--set prometheus.enabled=true` when installing Akri.
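For illustration, a PodMonitor similar to those the Helm setting generates could also be applied by hand. This is a sketch only: the resource name, Pod label, and port name below are assumptions, not the exact values Akri's chart uses:

```shell
# Hypothetical hand-written PodMonitor; setting prometheus.enabled=true makes
# Akri's Helm chart create equivalent resources automatically.
kubectl apply -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: akri-agent-podmonitor        # hypothetical name
  labels:
    release: prometheus              # assumed label so Prometheus selects it
spec:
  selector:
    matchLabels:
      name: akri-agent               # assumed label on the Agent Pods
  podMetricsEndpoints:
    - port: metrics                  # assumed container port name for port 8080
      path: /metrics
EOF
```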
Replace `<Grafana Service port>` with the port number output in the previous step.
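The port can be retrieved with a command along these lines. The Service name `prometheus-grafana` is an assumption based on the kube-prometheus-stack defaults for a release named `prometheus`:

```shell
# Print the NodePort assigned to the Grafana Service (name assumed).
kubectl get service prometheus-grafana --namespace monitoring \
  -o jsonpath='{.spec.ports[0].nodePort}'
```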
Navigate to `http://localhost:50000/` and enter Grafana's default username, `admin`.
The Agent and Controller publish standard process metrics for CPU usage (`process_cpu_seconds_total`) and RAM usage (`process_resident_memory_bytes`), along with the following custom metrics, all of which are prefixed with `akri_`.
The Service selects Pods with the label `akri.sh/configuration: <Akri Configuration>`, since the Configuration name is added as a label to all the Broker Pods by the Akri Controller. Finally, deploy a ServiceMonitor that selects the previously mentioned Service; this tells Prometheus which Service(s) to discover.
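Concretely, the Service and ServiceMonitor pair might be sketched as follows. The resource names and the `release: prometheus` label are assumptions; the selector label comes from the Configuration name as described above, and port 8080 is where the brokers publish their metrics:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: akri-udev-video-metrics            # hypothetical name
  labels:
    app: akri-udev-video-metrics
spec:
  selector:
    akri.sh/configuration: akri-udev-video # label added by the Akri Controller
  ports:
    - name: metrics
      port: 8080                           # brokers expose metrics on port 8080
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: akri-udev-video-servicemonitor     # hypothetical name
  labels:
    release: prometheus                    # assumed label so Prometheus selects it
spec:
  selector:
    matchLabels:
      app: akri-udev-video-metrics
  endpoints:
    - port: metrics
      path: /metrics
EOF
```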
As an example, an `akri_frame_count` metric has been created in the sample udev-video-broker. Like the Agent and Controller, it publishes both the default process metrics and the custom `akri_frame_count` metric to port 8080 at a `/metrics` endpoint.
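One way to spot-check the endpoint is to port-forward a broker Pod and fetch its metrics directly; `<broker-pod>` below is a placeholder for one of the udev-video-broker Pod names:

```shell
# Forward the broker's metrics port to the host, then fetch the endpoint.
kubectl port-forward pod/<broker-pod> 8080:8080 &
curl http://localhost:8080/metrics | grep akri_frame_count
```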
Install Akri with the Configuration named `akri-udev-video`, by running:
Note: This instruction assumes you are using vanilla Kubernetes. Be sure to reference the user guide to determine whether the distribution you are using requires crictl path configuration.
Note: To expose the Agent and Controller's Prometheus metrics, add `--set prometheus.enabled=true` to your Akri installation command.
Note: If Prometheus is running in a different namespace than Akri and was not enabled to discover ServiceMonitors in other namespaces when installed, upgrade your Prometheus Helm installation to set `prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues` to `false`.

```shell
helm upgrade prometheus prometheus-community/kube-prometheus-stack \
  $AKRI_HELM_CRICTL_CONFIGURATION \
  --set grafana.service.type=NodePort \
  --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \
  --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false \
  --namespace monitoring
```
The metrics could also have been exposed by adding the metrics port to the Configuration-level Service in the udev Configuration.