I recently joined a project doing real-time machine learning inference. The project was set to run on Kubernetes 1.12 on AWS, while development and training were done on premises, some of it using Docker.

When the application was deployed to production we saw poor performance and started to investigate. It was weird: on the developers’ machines and in Docker everything ran faster.

After debugging it for a while I understood that the JVM doesn’t see all the available cores inside the pod. While searching for solutions I found suggestions that running Java 8 with some extra JAVA_OPTS, alongside the rest of the options, would solve the issue. So I added it to the deployment.yaml file and deployed again.

env:
  - name: JAVA_OPTS
    value: "-XX:+UnlockExperimentalVMOptions"
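
For completeness, -XX:+UnlockExperimentalVMOptions on its own only unlocks experimental flags and changes nothing by itself. As far as I know, on JDK 8u131+ it is usually paired with -XX:+UseCGroupMemoryLimitForHeap for cgroup-aware heap sizing, and from 8u191 the detected CPU count can be pinned explicitly with -XX:ActiveProcessorCount. A sketch of what that env entry might look like (the count of 4 is only an illustrative value):

env:
  - name: JAVA_OPTS
    # Unlock experimental flags, size the heap from the cgroup memory limit
    # (JDK 8u131+) and override the detected processor count (JDK 8u191+).
    # The value 4 is only an example.
    value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:ActiveProcessorCount=4"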

But we still had the same poor performance inside the pod.

The debugging started

I took the leanest possible way to debug a JVM application: a small snippet that prints the number of cores available inside the container.

class processors {
    public static void main(String[] args) {
        // Ask the JVM how many processors it thinks it can use
        int processors = Runtime.getRuntime().availableProcessors();
        System.out.println("CPU cores: " + processors);
    }
}
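
To sanity-check the snippet outside Docker you can compile and run it directly; the class isn’t public, so the file can stay named main.java:

javac main.java
java processors
# prints something like: CPU cores: 8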

And wrote a simple Dockerfile that lets me build it.

ARG tag
FROM openjdk:${tag}

COPY main.java /mnt
WORKDIR /mnt
RUN javac main.java
RUN java -version
CMD [ "java", "processors"]

Added a build script

#!/bin/bash

REPO='jvm-num-processors'

docker build --build-arg tag=$1 -t omerha/$REPO:openjdk$1 .
docker push omerha/$REPO:openjdk$1

And I was ready to start:

./build.sh 8 # Java 8
./build.sh 9
./build.sh 10
./build.sh 11
./build.sh 12 # Java 12

First I ran it on my local machine to see how it behaves in Docker (this example uses Java 8):

➜  jvm-num-processors git:(master) docker run omerha/jvm-num-processors:openjdk8
CPU cores: 2

All the containers behaved the same and printed 2 available cores, which was the correct answer (Docker for Mac assigns 2 CPUs to the daemon by default), and by default Docker gives all available resources to the running container. So what was the problem in production?
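
To confirm that Docker only constrains the JVM when a CPU limit is set explicitly, you can cap the container with the --cpus flag; on a container-aware JDK (for example the openjdk11 image built above) the reported count should drop accordingly. The output below is just an illustration:

docker run omerha/jvm-num-processors:openjdk11
# CPU cores: 2
docker run --cpus=1 omerha/jvm-num-processors:openjdk11
# CPU cores: 1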

I deployed this container to my Kubernetes cluster to understand what happens with three resource options:

  1. Resources by units
  2. Resources by millicores
  3. No resources requests or limits

I created this snippet for the Kubernetes deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jvm
  labels:
    app: jvm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jvm
  template:
    metadata:
      labels:
        app: jvm
    spec:
      containers:
      - name: jvm
        image: omerha/jvm-num-processors:openjdk8
        # resources:
        #   requests:
        #     cpu: "1"
        #     memory: "4096Mi"
        #   limits:
        #     cpu: "1"
        #     memory: "4096Mi"
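
To check the result I applied the manifest and read the pod’s log; the kubectl commands below are just one way of verifying it:

kubectl apply -f deployment.yaml
kubectl logs deploy/jvm
# CPU cores: ...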

When running with units or millicores I saw the correct number of CPUs, no matter which Java version I tested. But when I commented out the resource requests and limits (which felt natural during development, because we couldn’t yet predict the resources needed) I saw different behavior than when running it with Docker.

The CPU count was 1, unlike what Docker does on our developers’ workstations.
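
My understanding of why this happens (on cgroup v1, which is what this cluster used): a container with no CPU request gets the kernel minimum of 2 cpu.shares, and a container-aware JVM rounds 2/1024 shares up to a single processor. You can check the value the kubelet actually set from inside the pod (the pod name below is a placeholder):

kubectl exec <pod-name> -- cat /sys/fs/cgroup/cpu/cpu.shares
# 2  -> the JVM treats this as 1 available processor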

Conclusion

While Docker creates containers with no limits at all, Kubernetes will run a pod with no memory limit but with the JVM detecting only 1 CPU core.

So if your application is core hungry, make sure to set resources.requests.cpu to the number of cores you actually need.
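
For example, this is roughly what the resources block from the deployment above looks like uncommented; the numbers are illustrative and should match what your workload actually needs:

        resources:
          requests:
            cpu: "4"
            memory: "4096Mi"
          limits:
            cpu: "4"
            memory: "4096Mi"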

I don’t feel that this is a huge discovery, but it was day-to-day work that made us wonder why a pod behaves differently from a Docker container when running with no limits at all.

Hope you enjoyed this article.