GitHub Actions in OpenShift

OpenShift Container Platform includes Tekton (https://tekton.dev/) as its CI tooling. However, many companies build their applications with other stacks, such as Azure DevOps or GitLab. While those services can be consumed as purely managed offerings, particular requirements (security, regulatory, etc.) or cost considerations may push teams to run the build agents on-premises.

That is the case with GitHub Actions, which supports self-hosted runners, and those runners can be executed in Kubernetes/OpenShift.
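As a rough sketch of what this can look like (the post does not prescribe a particular controller), a community project such as actions-runner-controller lets you declare self-hosted runners as a Kubernetes resource; every name below is a placeholder:

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runners
  namespace: ci-runners
spec:
  replicas: 2
  template:
    spec:
      # Runners register themselves against this repository (placeholder).
      repository: my-org/my-repo
      labels:
        - openshift

Workflows in that repository can then target these runners with something like runs-on: [self-hosted, openshift].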

Java and DNS in OpenShift: How it works and Challenges

This post is born out of a real-world experience. While deploying a CoreDNS dashboard in Grafana to monitor OpenShift DNS servers, I discovered several quirks that not only impact performance but also explain some puzzling application behaviors driven by Java’s DNS implementation. Let’s dive in.

DNS Service Discovery in Kubernetes

To understand the basics of what DNS is and how it came to be, I recommend an interview with its inventor, Paul Mockapetris. This post focuses on how DNS is implemented in Kubernetes/OpenShift and how Java name resolution interacts with it.
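As a quick reminder of the mechanics (illustrative names only): every Service gets DNS records from the cluster DNS, and clients, including Java ones, resolve them through the search domains injected into each pod's resolv.conf.

# The cluster DNS (CoreDNS in OpenShift) publishes this Service as
# my-service.demo.svc.cluster.local, an A record pointing at its ClusterIP.
# A pod in the same namespace can resolve it simply as "my-service"
# thanks to the search list in its /etc/resolv.conf.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: demo
spec:
  selector:
    app: my-app
  ports:
    - port: 8080
      targetPort: 8080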

Using cert-manager with ipa-server and ACME with DNS challenge

This article shows how to use a private ipa-server to provide certificates to Kubernetes applications.

There is a very good post by Josep Font on how to configure Identity Management (ipa-server in RHEL). You can use the no-cost RHEL developer subscription, or CentOS Stream if you want to play with the latest ipa-server version.

Another really good post about cert-manager integration comes from two colleagues, Jose Angel de Bustos and Jorge Tudela. They configure the HTTP-01 challenge, using an Ingress Controller to expose the web endpoint the challenge needs. Although that is a very straightforward alternative, I prefer the DNS-01 challenge.
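For reference, a DNS-01 Issuer looks roughly like the sketch below. The rfc2136 solver (dynamic DNS updates with a TSIG key, which the ipa-server's BIND can be configured to accept) is my assumption here, as is every name and address, including the IPA ACME directory URL:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: acme-dns01
  namespace: demo
spec:
  acme:
    # Placeholder for the ACME directory exposed by the IPA CA.
    server: https://ipa-ca.example.test/acme/directory
    privateKeySecretRef:
      name: acme-dns01-account-key
    solvers:
      - dns01:
          rfc2136:
            nameserver: "192.0.2.10:53"        # the IPA DNS server (placeholder)
            tsigKeyName: acme-update-key
            tsigAlgorithm: HMACSHA256
            tsigSecretSecretRef:
              name: tsig-secret
              key: key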

Stateful Applications In Kubernetes (part 1): Credentials

After several years working in the container space, I still hear in a lot of sessions and meetings that Kubernetes is not meant for running stateful applications. Of course, stateless apps are a lot easier, less challenging, and more disposable than stateful ones. But in the end, almost every useful application depends on data. Kubernetes is a great choice for scaling apps up and down to match demand, but if a database/broker/other stateful app cannot scale in a similar way, it is easy to guess where the bottleneck and the limits will be.

JMX on Kubernetes

Recently, I had a situation where I needed to inspect the memory of a Java program running on Kubernetes. The usual procedure is to connect to the container, take a heap dump, copy that dump to a standalone laptop/host, and analyze it offline.

However, I wanted to explore other alternatives, so here I gather some additional techniques that can be used either by developers or by operations teams doing debugging or forensics.
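One of those techniques, sketched below with placeholder names and an intentionally insecure configuration for local debugging only, is to enable remote JMX on the JVM and tunnel to it with oc port-forward (or kubectl port-forward), then attach JConsole or VisualVM from the laptop:

apiVersion: v1
kind: Pod
metadata:
  name: java-app
spec:
  containers:
    - name: app
      image: quay.io/example/java-app:latest   # placeholder image
      env:
        # Picked up automatically by the JVM at startup.
        - name: JAVA_TOOL_OPTIONS
          value: >-
            -Dcom.sun.management.jmxremote
            -Dcom.sun.management.jmxremote.port=9010
            -Dcom.sun.management.jmxremote.rmi.port=9010
            -Dcom.sun.management.jmxremote.local.only=false
            -Dcom.sun.management.jmxremote.authenticate=false
            -Dcom.sun.management.jmxremote.ssl=false
            -Djava.rmi.server.hostname=127.0.0.1
      ports:
        - containerPort: 9010
          name: jmx

With "oc port-forward pod/java-app 9010:9010" running, JConsole or VisualVM can connect to localhost:9010 and trigger a heap dump remotely.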

What are mixins and how to add them to OpenShift (Rules and Dashboards)

Prometheus and Grafana have become the de facto stack for Kubernetes cluster monitoring, at least in the upstream communities. Once a cluster has the stack available, Kubernetes and other application components expose their metrics. What is then needed are curated alerts based on those metrics, plus dashboards that give a high-level view of what is happening.

There is an effort to create those artifacts in a templated way: mixins. This enables a distribution to bundle them, customizing them if needed while keeping the baseline reusable.
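In practice a mixin is a jsonnet library whose rendered output is a set of Prometheus recording/alerting rules plus Grafana dashboard JSON. As a purely illustrative example of the alerting side of that output (metric, threshold and names are made up), the rendered rules end up in the cluster as something like:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-mixin-alerts
  namespace: openshift-monitoring
spec:
  groups:
    - name: example.rules
      rules:
        - alert: HighErrorRate
          expr: sum(rate(http_requests_total{code=~"5.."}[5m])) > 1
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: Sustained rate of 5xx responses for the last 10 minutes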

Testing KEDA with the Kafka autoscaler

One of the most appealing features of container orchestrators like Kubernetes is the ability to adapt resource consumption to the demand at any given moment. At some point, architects start to think about scaling workloads in a smart way.

Kubernetes provides two native mechanisms for autoscaling workloads, the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA); however, the latter is, in my opinion, not a transparent scaler.
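KEDA builds on top of the HPA, feeding it external metrics such as Kafka consumer-group lag. A minimal ScaledObject for the Kafka scaler could look like the following sketch; the deployment, topic and broker names are assumptions:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler
  namespace: demo
spec:
  scaleTargetRef:
    name: kafka-consumer            # Deployment to scale (placeholder)
  minReplicaCount: 0                # KEDA can scale the workload to zero
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: my-cluster-kafka-bootstrap.kafka.svc:9092
        consumerGroup: demo-group
        topic: demo-topic
        lagThreshold: "50"          # scale out when lag per replica exceeds this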

Artemis monitoring in OpenShift

It is really simple to monitor brokers deployed on OpenShift, show their metrics in Grafana, and configure alerts based on those metrics. We are using the Artemis version included in AMQ 7.9, deployed with the operator.

Configuring user workload monitoring

We can use a custom Prometheus Operator deployed on OpenShift, or simply enable user workload monitoring by following the steps in this document.
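In short, that enablement boils down to one ConfigMap in the openshift-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true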

Exposing Prometheus metrics

When a set of brokers is deployed, the metrics plugin should be enabled in the deploymentPlan section. Be sure to use the right version of the broker Custom Resource (broker.amq.io/v2alpha5), as previous versions did not include this field.
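With that API version, enabling the plugin is a one-line addition to the deploymentPlan; the broker name and size below are illustrative:

apiVersion: broker.amq.io/v2alpha5
kind: ActiveMQArtemis
metadata:
  name: ex-aao
spec:
  deploymentPlan:
    size: 2
    # Exposes broker metrics in Prometheus format.
    enableMetricsPlugin: true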

Benchmarking Apache Kafka, Kubernetes revisited (with Ceph)

It has been ages since I was introduced to Apache Kafka and read the LinkedIn post Benchmarking Apache Kafka: 2 Million Writes Per Second (On Three Cheap Machines). At the time, that post set out to show why the new messaging platform fitted LinkedIn's use cases with better performance than traditional brokers.

Now that Kubernetes workloads are in the picture and it is fairly simple to deploy a Kafka cluster using operators like Strimzi, I was tempted to repeat the scenarios from the original post on a Kubernetes cluster. My intention is a little different from the original, now that we know there are cases where Kafka fits perfectly and others where alternatives shine. I want to double-check whether cloud-native deployments can be far more agile than traditional ones without significantly impacting performance.
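As a sketch of the kind of deployment I mean (the storage class name is an assumption for a Ceph RBD-backed class, and replica counts and sizes are arbitrary), Strimzi describes the whole cluster in a single Kafka resource:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: benchmark-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim
      size: 100Gi
      class: ocs-storagecluster-ceph-rbd   # assumed Ceph-backed StorageClass
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 20Gi
      class: ocs-storagecluster-ceph-rbd
  entityOperator:
    topicOperator: {}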

Monitoring Spring Boot embedded Infinispan in Kubernetes

In a previous article, we explained how to develop a Spring Boot microservice that uses Infinispan as its spring-cache implementation. We exposed the actuator Prometheus endpoint to check in our local environment that it was running correctly. In this post, we are going to scrape that actuator endpoint with Prometheus and create a Grafana dashboard to monitor performance.

Initial setup in Kubernetes

We assume that the microservice is already deployed in your Kubernetes namespace. From now on, our examples will use the namespace customers.
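Assuming a Prometheus instance (the cluster's user workload monitoring stack or your own operator-managed one) watches that namespace, a ServiceMonitor tells it to scrape the actuator endpoint. The label and port names below are assumptions about how the microservice's Service is defined:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: customers-service
  namespace: customers
spec:
  selector:
    matchLabels:
      app: customers-service        # assumed label on the microservice Service
  endpoints:
    - port: http                    # assumed name of the Service port
      path: /actuator/prometheus
      interval: 30s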