One of the most appealing features of container orchestrators like Kubernetes is the ability to adapt resource consumption to the demand at any given moment. At some point, architects start thinking about scaling workloads in a smart way. Kubernetes provides two native mechanisms for autoscaling workloads: the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA). In my opinion, however, the latter is not a transparent scaler.
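As a minimal sketch of the HPA mechanism, the manifest below scales a hypothetical Deployment (the name "my-app" is an assumption for illustration) between 2 and 10 replicas based on average CPU utilization:

```yaml
# Minimal HPA sketch; "my-app" is a hypothetical Deployment name.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          # Add replicas when average CPU usage across pods exceeds 70%
          averageUtilization: 70
```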
Artemis monitoring in OpenShift
It is really simple to monitor the brokers deployed on OpenShift, show the metrics in Grafana, and configure alerts based on those metrics. We are using the Artemis version included in AMQ 7.9, deployed with the operator.
Configuring user workload monitoring
We can use a custom Prometheus operator deployed on OpenShift, or just enable user workload monitoring by following the simple steps in this document
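For reference, enabling user workload monitoring comes down to setting `enableUserWorkload: true` in the cluster monitoring ConfigMap, as documented by OpenShift:

```yaml
# Enables the built-in user workload monitoring stack on OpenShift.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
```

Once applied, Prometheus and Thanos Ruler instances for user workloads are deployed in the `openshift-user-workload-monitoring` namespace, and ServiceMonitor resources in user namespaces are picked up automatically.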
Benchmarking Apache Kafka, Kubernetes revisited (with Ceph)
It has been ages since I was introduced to Apache Kafka and read the LinkedIn post Benchmarking Apache Kafka: 2 Million Writes Per Second (On Three Cheap Machines). At the time, the post set out to show why the new messaging platform fitted the LinkedIn use cases with better performance than other traditional brokers. Now that Kubernetes workloads are in the picture and it is fairly simple to deploy a Kafka cluster using operators like Strimzi, I was tempted to repeat the scenarios from the original post in a Kubernetes cluster.
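To give an idea of how simple the Strimzi deployment is, the sketch below shows a minimal `Kafka` custom resource; the cluster name, storage sizes, and listener layout are assumptions for illustration, not the exact configuration used in the benchmark:

```yaml
# Minimal Strimzi Kafka cluster sketch (hypothetical sizing).
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

With the Strimzi operator installed, applying this resource is enough to get a three-broker cluster running, which makes repeating the original three-machine scenarios straightforward.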