A journey of migration: Java applications on Kubernetes

Your journey to migrate Java applications from standard systems, such as physical or virtual machines, to a Kubernetes cluster is full of pitfalls. Here are some guidelines to follow to avoid the main dangers and make the trip as safe as possible.

"Migrating from standard systems to Kubernetes means running applications inside containers and deploying them on a cluster."

That is a wrong statement, as anyone who has ever worked with containers or Kubernetes clusters can confirm.
But there is some truth in it: the migration process is divided into two parts. First, the application must run in a container; then it can be deployed on Kubernetes.

Although Java was born as a portable language, migrating Java applications across these two environments requires extra work on the applications themselves.

Containerizing applications requires a paradigm shift with regard to storage, networking, logging, and monitoring.
The same is true for the next step, from containers to Kubernetes clusters: these two systems also take different approaches to managing everything that surrounds the application.

Focus points

It would be impossible to cover in a single blog post every activity required to containerize and deploy Java applications on Kubernetes.

But, based on our experience, if you focus your attention on a few topics, this process will be easier.

Most of the issues we usually find when migrating this kind of application can be resolved with the correct setup of these three topics:

  1. Build the correct container
  2. Configure the JVM properly
  3. Monitor your application

We will assume that you use Docker as a container runtime and a self-managed Kubernetes cluster, just to cover a general scenario in the following sections.

There is a useful hands-on tutorial on Katacoda that you can use as a reference for your work in modernizing Java applications.

Build the correct container

This is the main step in modernizing your Java application, and everything starts with a “FROM ...” statement. In fact, the first choice to make is the base image of the system where your application will be installed.

Although the correct choice will depend on your application, there are some general rules you can follow to choose the right base image:

  1. Use a secure and updated base image
    Thinking about shifting Docker security left, you should choose a Docker image based on its OS-level vulnerabilities and the status of its updates.
    SIGHUP provides secured images with its Container Certified Images service. It consists of a Docker registry with many images analyzed and maintained by SIGHUP to work with the latest and most secure systems.
  2. Small images are better
    A good Docker image ready for production must contain the minimum number of tools and libraries necessary to run the application correctly.
    Following this approach, it’s better to choose an image with an “-alpine” tag if possible, because these are the smallest.
    Also, JRE images are preferred over JDK ones, to exclude the components that are only useful at development time.
  3. Build the application outside the final container
    As a consequence of the previous point, you should avoid building the application inside the container that will become the final image, to keep it as clean as possible of the components used during the build by tools like Maven or Gradle.
    You can use a multi-stage build to both compile and containerize your application.

Here's an example of a Dockerfile that follows these best practices.
You can compile the application and build the image simply by running:

docker build -t demo:0.0.1-SNAPSHOT .

FROM reg.sighup.io/r/library/java/maven:3.6.3-openjdk-11 as builder

# Prepare directories
RUN mkdir /app
WORKDIR /app

# Copy the application
COPY pom.xml /app/pom.xml
COPY src /app/src

# Compile Java application
RUN mvn clean package

FROM reg.sighup.io/r/library/java/openjdk:11-jre

# Copy compiled app into /app directory
COPY --from=builder /app/target/demo-0.0.1-SNAPSHOT.jar /app/demo-0.0.1-SNAPSHOT.jar

# The app listens on port 8080
EXPOSE 8080

# Start the application, picking up any JVM options from JAVA_OPTS
# (adjust this if your base image already defines an entrypoint)
CMD java $JAVA_OPTS -jar /app/demo-0.0.1-SNAPSHOT.jar
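
Assuming the image starts the application on its own, as in the Dockerfile above, you can then test it locally before deploying it:

docker run --rm -p 8080:8080 demo:0.0.1-SNAPSHOT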

Configure the JVM properly

This is a well-known topic for every Java developer. The performance of Java applications is related to the quality of the code and the configuration of the JVM where it runs.

The JVM can be configured in different ways: with options passed to the start command, with environment variables, and so on.
Some options are general and not strictly related to the application, while others must be tuned specifically for it.

Here are some general options to evaluate when you configure the JVM:

  • Choose the Garbage Collector
    There are different GC algorithms to choose from, and the right one can improve JVM performance depending on the workload to be handled.
    The most common and stable GC is G1 (-XX:+UseG1GC).
  • Enable logging details
    Add these options to the JVM to have a better idea of what happens with the Garbage Collector: -XX:+PrintGCDetails -XX:+PrintGCTimeStamps for Java 8 or -Xlog:gc* for Java 11.
  • Save a dump file in case of OOM errors
    It’s smart to save the heap dump to disk by using the -XX:+HeapDumpOnOutOfMemoryError option. In this way, if the JVM runs out of memory, the heap dump will be saved in a file named java_pid<pid>.hprof in the working directory of the JVM.
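
Put together, these generic options might look like this in a startup command (a sketch for Java 11; the jar name is illustrative):

java -XX:+UseG1GC -Xlog:gc* -XX:+HeapDumpOnOutOfMemoryError -jar demo.jar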

Then there are the options specific to the application, used to tune the heap size of the JVM.
You can choose a fixed heap size with the -Xmx option (generally better for legacy applications) or let the heap size adapt to the total amount of memory available in your container:

  • -XX:InitialRAMPercentage: is used to compute the initial heap size of your Java application
  • -XX:MinRAMPercentage: despite the name, this is used to determine the maximum heap size if the total memory available in the container is less than 250MB
  • -XX:MaxRAMPercentage: is used to determine the maximum heap size if the total memory available in the container is more than 250MB

The options above are valid starting from Java 8u191; in previous versions, memory management works slightly differently.
If you want to learn more about these options, check this article. You can also verify the values the JVM computes from them, as shown below.
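
To check which heap sizes the JVM actually derives from these percentages, you can print its final flag values. Run this inside the container, so the JVM sees the container's memory limits:

java -XX:InitialRAMPercentage=50.0 -XX:MaxRAMPercentage=80.0 -XX:+PrintFlagsFinal -version | grep -E "InitialHeapSize|MaxHeapSize"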

Using JVMs in Kubernetes

JVM tuning should be done according to the resources allocated to the containers in your Pods.

The overall resources available to the application are those specified in the manifest as requests and limits.

  • Requests: the minimum amount of CPU/memory that your application is guaranteed to get from the cluster. A Pod is scheduled on a node only if the requested resources are available.
  • Limits: the maximum amount of CPU/memory that your application can get from the cluster. Once these limits are reached, the Pod will run into problems, which differ between CPU and memory.

What happens if the JVM crosses the limits set on containers?

  • CPU reaches the limit -> the application will suffer from CPU throttling, so it may behave unexpectedly, for example throwing timeout errors or losing data.
    Readiness and liveness probes may also fail while the CPU throttles, so your Pod could become Not Ready or be restarted.
    Note: when the JVM starts, there is a warm-up phase that requires more CPU than the application needs at runtime. A good approach is to start with a CPU limit of twice the amount needed in the steady state, and then tune it from there.
    For example, if you see that your application requires 1000m of CPU, you can start by setting the limit at 2000m and monitor the warm-up phase with dedicated tools (see the last section).
  • Memory reaches the limit -> this can happen either in the JVM or on the node where the Pod is running.
    If the JVM runs out of memory and you have set the option suggested above, the heap will be dumped to disk so you can debug the issue. The Pod will be restarted, with the OOM error recorded in its events list.
    If the node runs out of free memory, your applications will perform poorly and the node itself could crash.

Let’s suppose that you have built the application with the Dockerfile above and you know that it needs, at runtime, 1000m CPU and 300MB of memory to work properly. This could be a starting point for the Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: demo:0.0.1-SNAPSHOT
    env:
    - name: JAVA_OPTS
      value: "-XX:+UseG1GC -Xlog:gc* -XX:+HeapDumpOnOutOfMemoryError -XX:InitialRAMPercentage=50.0 -XX:MaxRAMPercentage=80.0"
    resources:
      requests:
        memory: "300Mi"
        cpu: "1000m"
      limits:
        memory: "500Mi"
        cpu: "2000m"

With this manifest, a Pod named “demo” will be deployed on Kubernetes with one container based on the image previously built (demo:0.0.1-SNAPSHOT).
This Pod has an environment variable called JAVA_OPTS, which contains the configuration for the JVM that will execute the application (you could use the alternative JAVA_TOOL_OPTIONS, depending on the base image).
After a few generic options, there are two options for heap size: InitialRAMPercentage, set as 50% of the total memory, and MaxRAMPercentage, set as 80% of total memory.

What is considered the total memory in this case?
From the JVM's point of view, the total memory available in the container is the limit set in the manifest.
With the configuration above, the JVM will start with a heap size of 250MB (50% of the limit) and it can grow up to 400MB (80% of the limit).

Monitor your application

Now your application can be deployed in a cluster and, with the power of Kubernetes, nothing can break it, right? Wrong!

We need to act as if something will definitely break the application, so we need to monitor it.
When it comes to implementing an observability stack in a Kubernetes cluster, Prometheus and Grafana are the de facto standard.

  • Prometheus is a time-series database: it stores the metrics coming from different sources and provides a powerful query system.
  • Grafana is an observability platform with advanced dashboards to better analyze and visualize the data stored in Prometheus (and from other sources).

As you probably already know, the best way to monitor Java applications is to use the JMX framework. There are tons of tools able to read this stream of data coming from the JVM.

It's just a matter of shipping the JMX data to Prometheus and creating a dedicated Grafana dashboard.
To do this, we can use the Prometheus JMX Exporter, a Java agent deployed along with your application to expose JMX metrics to Prometheus. It consists of a .jar file that runs an HTTP server exposing the metrics read from a JMX target (defined in a configuration file):

java -javaagent:./jmx_prometheus_javaagent-0.15.0.jar=8080:config.yaml -jar yourJar.jar
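
The configuration file tells the exporter how to map MBeans to Prometheus metrics. A minimal catch-all configuration like the following is enough to get started (the exporter supports many more options for renaming and filtering metrics):

# config.yaml: export every MBean attribute with default naming
rules:
- pattern: ".*"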

There are two main ways to deploy this agent in Kubernetes:

  1. By using an Init container: creating a Docker image that saves the Agent .jar file in a shared volume so the main container can use it.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: <image-name>
    env:
    - name: JAVA_OPTIONS
      value: -javaagent:/shared/java-agent/rook.jar=30000:/shared/java-agent/config.yaml
    volumeMounts:
    - mountPath: /shared/java-agent
      name: java-agent
  initContainers:
  - name: init-agent
    image: <init-agent-image-name>
    volumeMounts:
    - mountPath: /shared/java-agent
      name: java-agent
  volumes:
  - name: java-agent
    emptyDir: {}

  2. By including the Java Agent in the Docker image: adding the agent .jar file and its configuration directly in the initial Dockerfile of your application.
     SIGHUP Java Certified Images already come with the Java agent configured out of the box.

FROM reg.sighup.io/r/library/java/openjdk:11-jre

COPY --from=builder /app/target/demo-0.0.1-SNAPSHOT.jar /app/demo-0.0.1-SNAPSHOT.jar

COPY jmx_prometheus_javaagent.jar /opt/javaagent/jmx_prometheus_javaagent.jar
COPY jmx_exporter.yaml /javaagent/jmx_exporter.yaml

ENV JAVA_OPTIONS="${JAVA_OPTIONS} -javaagent:/opt/javaagent/jmx_prometheus_javaagent.jar=30000:/javaagent/jmx_exporter.yaml"

EXPOSE 8080 30000
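
In both cases, once the container is running you can check that the agent is exposing metrics on port 30000 (for example, after a kubectl port-forward to the Pod):

curl http://localhost:30000/metrics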


Either way, you still need to instruct Prometheus on how to scrape the JMX metrics coming from the Java agent. If you installed Prometheus using the Prometheus Operator, you can simply create a ServiceMonitor object to scrape those metrics, as in the sketch below.
You can check an example on the Katacoda tutorial.
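
Here is a minimal sketch of such a ServiceMonitor, assuming a Service labeled app: demo exposes the agent's port 30000 under the name "metrics" (both the label and the port name are assumptions; adapt them to your setup, and note that the ServiceMonitor may also need labels matching your Prometheus serviceMonitorSelector):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: demo
spec:
  selector:
    matchLabels:
      app: demo    # assumption: label of the Service exposing the agent
  endpoints:
  - port: metrics  # assumption: name of the Service port mapping 30000
    interval: 30s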

Finally, you can create a dashboard in Grafana or pick one of the thousands available on its website.
Here's an example of a standard dashboard showing JMX metrics coming from the Prometheus exporter.

Conclusions

Now you should have a better understanding of the dangers that await you on your journey migrating Java applications from legacy systems to modern ones.

You will surely encounter other threats (it wouldn't be possible to prevent every possible incident), but with the guidelines described in this post you should be able to resolve them quickly and get back on track.

Have a good journey!

If you want to learn more: