Description
I have a StatefulSet with a PersistentVolume mounted at /var/lib/postgresql. Debian-based Postgres images work as expected: the database is initialized and the /var/lib/postgresql/data folder is stored in the PersistentVolume.
$ grep image statefulset.yaml | grep -v \#
image: postgres:15.0
$ kubectl apply -k .
namespace/test created
statefulset.apps/postgres created
$ kubectl exec -it -n test statefulset/postgres -- bash
bash-5.1# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 126G 34G 87G 28% /
tmpfs 64M 0 64M 0% /dev
/dev/mapper/rl-root 126G 34G 87G 28% /etc/hosts
shm 64M 1.1M 63M 2% /dev/shm
/dev/longhorn/pvc-7eab52f7-918e-4791-a924-74072998cd26 7.8G 28K 7.8G 1% /var/lib/postgresql
...
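A quicker way to check which filesystem actually backs the data directory is to ask df for that path directly (same namespace and StatefulSet as above):

$ kubectl exec -n test statefulset/postgres -- df -h /var/lib/postgresql/data

With the Debian-based image this reports the /dev/longhorn device from the listing above; with the Alpine-based image below it reports the node's root filesystem.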
With Alpine-based images the initialization process creates a separate mount for the /var/lib/postgresql/data folder on the OS disk. This leads to data loss when the pod is recycled, because the data is not in the PersistentVolume; a quick way to demonstrate the loss is shown after the mount listing below.
$ grep image statefulset.yaml | grep -v \#
image: postgres:15.0-alpine
$ kubectl apply -k .
namespace/test created
statefulset.apps/postgres created
$ kubectl exec -it -n test statefulset/postgres -- bash
bash-5.1# df -h
Filesystem Size Used Available Use% Mounted on
overlay 125.4G 33.0G 86.7G 28% /
tmpfs 64.0M 0 64.0M 0% /dev
/dev/mapper/rl-root 125.4G 33.0G 86.7G 28% /etc/hosts
/dev/mapper/rl-root 125.4G 33.0G 86.7G 28% /dev/termination-log
/dev/mapper/rl-root 125.4G 33.0G 86.7G 28% /etc/hostname
/dev/mapper/rl-root 125.4G 33.0G 86.7G 28% /etc/resolv.conf
shm 64.0M 1.0M 63.0M 2% /dev/shm
/dev/longhorn/pvc-7a73320e-4498-403a-a5ec-33d3c4f8ae76
7.8G 28.0K 7.8G 0% /var/lib/postgresql
/dev/mapper/rl-root 125.4G 33.0G 86.7G 28% /var/lib/postgresql/data
...
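To demonstrate the data loss itself, something like the following can be used (a sketch; it assumes the single replica is named postgres-0 and uses the credentials from the manifest below):

$ kubectl exec -n test statefulset/postgres -- psql -U postgres -d test -c "CREATE TABLE canary (id int);"
$ kubectl delete pod -n test postgres-0
$ kubectl exec -n test statefulset/postgres -- psql -U postgres -d test -c "\dt"

With the Debian-based image the canary table survives the pod being recreated; with the Alpine-based image the database is re-initialized and the table is gone.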
I am not sure if this is a configuration problem in the StatefulSet itself. At the very least it is surprising behaviour which is hard to detect at first; it took me a couple of rounds of losing data before suspecting something was wrong. It is not Longhorn-related either, I have seen the same problem with AWS-based storage.
To reproduce, check out the following repository.
https://github.com/tuupola/postgres-alpine/
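Something like this should reproduce it from scratch (assuming the kustomization is at the repository root, as the kubectl apply -k . above implies):

$ git clone https://github.com/tuupola/postgres-alpine.git
$ cd postgres-alpine
$ kubectl apply -k .
$ kubectl exec -it -n test statefulset/postgres -- df -h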
For documentation purposes, this is the StatefulSet config.
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  replicas: 1
  serviceName: postgres
  selector:
    matchLabels:
      app.kubernetes.io/name: postgres
      app.kubernetes.io/component: database
      app.kubernetes.io/instance: test-postgres
  template:
    metadata:
      name: postgres
      # These are not needed but are considered good practice
      labels:
        app.kubernetes.io/name: postgres
        app.kubernetes.io/component: database
        app.kubernetes.io/instance: test-postgres
        app.kubernetes.io/version: "15.0"
    spec:
      containers:
        - name: postgres
          #image: postgres:16.0
          #image: postgres:16.0-alpine
          #image: postgres:15.0
          image: postgres:15.0-alpine
          #image: postgres:14.0
          #image: postgres:14.0-alpine
          env:
            # These are created automagically
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: example
            - name: POSTGRES_DB
              value: test
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 1024Mi
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 8Gi
        storageClassName: longhorn
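For reference, the variant I am going to try next is sketched below. It is not a verified fix; the PGDATA value is my own choice, and only the changed parts of the container spec are shown. The idea is to mount the claim directly at the data path and point PGDATA at a subdirectory, so that initdb does not trip over things like lost+found at the mount root:

          # untested sketch: mount the claim at the data path itself
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data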