I have a Node.js application that logs its own memory usage from inside the process:
rss: 161509376,
heapTotal: 97697792,
heapUsed: 88706896,
external: 733609
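Those fields are what process.memoryUsage() returns (all values in bytes); the logging is essentially a sketch like this (the interval and output format are assumptions on my part, not the exact code):

// Periodically log the Node.js process memory statistics.
// process.memoryUsage() returns rss, heapTotal, heapUsed and external, all in bytes.
setInterval(() => {
  console.log(process.memoryUsage());
}, 30 * 1000); // the 30 s interval is just an example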
And the kubectl top pod command, which reports how much memory the pod is using:
NAME                   CPU(cores)   MEMORY(bytes)
api-596d754fc6-s7xvc   2m           144Mi
As you can see, the Node app is using only about 93 MB of memory (the heapTotal above), while Kubernetes reports that the pod consumes 144 MiB.
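Just to make the units comparable, here is the plain arithmetic on the numbers above (1 MiB = 1024 * 1024 bytes):

// Convert the logged byte counts to MiB for an apples-to-apples comparison.
const toMiB = (bytes) => (bytes / 1024 / 1024).toFixed(1);
console.log(toMiB(161509376)); // rss       -> ~154.0 MiB
console.log(toMiB(97697792));  // heapTotal -> ~93.2 MiB
console.log(toMiB(88706896));  // heapUsed  -> ~84.6 MiB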
We are using Alpine as the base image for the Node.js app. I checked the raw Alpine image with all dependencies installed but without the actual application running, and it consumed about 4-8 MB of memory. The deployment has resource limits set:
...
resources:
  limits:
    memory: 400Mi
    cpu: 2
  requests:
    memory: 90Mi
    cpu: 100m
So the requested memory is lower than the figure Kubernetes shows me. I expected to see something closer to the actual memory consumption, say 100 MB.
How can I work out where this additional memory comes from? Why do these numbers differ?
All tests were run against a single pod (the service has a single pod, so there is no mistake here).
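For what it's worth, one thing I could log next to process.memoryUsage() is the container's own cgroup accounting, read from inside the pod. This is only a sketch, and the cgroup v1 paths are an assumption about how the nodes are set up:

const fs = require('fs');

// Memory the kernel accounts to this container's cgroup (cgroup v1 paths assumed).
const usage = fs.readFileSync('/sys/fs/cgroup/memory/memory.usage_in_bytes', 'utf8').trim();
const stat = fs.readFileSync('/sys/fs/cgroup/memory/memory.stat', 'utf8');

console.log('cgroup usage_in_bytes:', usage);
console.log('process.memoryUsage():', process.memoryUsage());
console.log(stat); // breaks the total down into rss, cache, mapped_file, ...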
Update 1. The Docker image is built like so:
FROM node:8-alpine
ENV NODE_ENV development
ENV PORT XXXX
RUN echo https://repository.fit.cvut.cz/mirrors/alpine/v3.8/main > /etc/apk/repositories; \
    echo https://repository.fit.cvut.cz/mirrors/alpine/v3.8/community >> /etc/apk/repositories
RUN apk update && \
    apk upgrade && \
    apk --no-cache add git make gcc g++ python
RUN apk --no-cache add vips-dev fftw-dev build-base \
    --repository https://repository.fit.cvut.cz/mirrors/alpine/edge/testing/ \
    --repository https://repository.fit.cvut.cz/mirrors/alpine/edge/main
WORKDIR /app
COPY ./dist /app
RUN npm install --only=production --unsafe-perm
RUN apk del make gcc g++ python build-base && \
    rm /var/cache/apk/*
EXPOSE XXXX
CMD node index.js