What are mixins and how to add them to OpenShift (Rules and Dashboards)

Prometheus and Grafana have become the de facto monitoring stack for Kubernetes clusters, at least in the upstream communities. Once a cluster has the stack available, Kubernetes and other application components expose their metrics. There is then a need for curated alerts based on those metrics and for dashboards that give a high-level view of what is happening. Monitoring mixins are an effort to create those artifacts in a templated way.
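Mixins are usually written in jsonnet and rendered into Prometheus rules and Grafana dashboards. As a rough illustration (the names and expression below are made up), the rendered alerting rules end up as a PrometheusRule manifest like the following, which the Prometheus Operator on OpenShift can pick up:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-app-mixin-rules          # illustrative name
  namespace: my-app                 # illustrative namespace
spec:
  groups:
    - name: my-app.alerts
      rules:
        - alert: MyAppHighErrorRate # illustrative alert from a rendered mixin
          expr: |
            sum(rate(http_requests_total{job="my-app",code=~"5.."}[5m]))
              / sum(rate(http_requests_total{job="my-app"}[5m])) > 0.05
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "my-app is serving more than 5% errors"
```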

Testing KEDA with the Kafka autoscaler

One of the most appealing features of container orchestrators like Kubernetes is the ability to adapt resource consumption to the demand at any given moment. At some point, architects start thinking about scaling workloads in a smart way. Kubernetes provides two native mechanisms for autoscaling workloads, the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA); in my opinion, however, the latter is not a transparent scaler.
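KEDA builds on the HPA by driving it from external event sources such as Kafka consumer lag. A minimal sketch of a ScaledObject with the Kafka trigger, assuming an illustrative deployment, broker, topic and consumer group:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler
  namespace: my-app                  # illustrative namespace
spec:
  scaleTargetRef:
    name: my-consumer                # Deployment to scale (illustrative name)
  minReplicaCount: 0                 # KEDA can scale to zero, unlike the plain HPA
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: my-cluster-kafka-bootstrap:9092   # illustrative
        consumerGroup: my-consumer-group                     # illustrative
        topic: my-topic                                      # illustrative
        lagThreshold: "50"           # target consumer lag per replica
```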

Artemis monitoring in OpenShift

It is really simple to monitor the brokers deployed on OpenShift, show the metrics in Grafana, or configure alerts based on those metrics. We are using the Artemis version included in AMQ 7.9, deployed with the operator.

Configuring user workload monitoring

We can use a custom Prometheus Operator deployed on OpenShift, or just enable user workload monitoring following the simple steps in this document.
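As a reference, on OpenShift 4 enabling user workload monitoring boils down to a small ConfigMap in the openshift-monitoring namespace; a sketch is below, but check the documentation for the exact version you run:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Deploys a dedicated monitoring stack for user-defined projects
    enableUserWorkload: true
```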