On a dual-stack, IPv6-primary Kubernetes cluster (one where the IPv6 cluster/service subnets are specified first to kubelet), healthchecks for eda-api fail, so the pod is never listed as a service backend and is eventually killed.
Our case is RKE2 with this configuration for cluster/service cidrs:
```
cluster-cidr: "fd10:ceff:1067::/56,172.20.0.0/16"
service-cidr: "fd12:ceff:1067::/112,172.22.0.0/16"
```

I believe this could be fixed by changing 0.0.0.0 to [::] in this template for the gunicorn and daphne listeners.
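To illustrate why binding to `[::]` should cover both families: on Linux, an IPv6 wildcard socket with `IPV6_V6ONLY` disabled also accepts IPv4 connections (they arrive as IPv4-mapped addresses). This sketch demonstrates the behavior the gunicorn/daphne listeners would rely on; it is a standalone example, not eda-server code:

```python
import socket

# Bind a dual-stack listener on "::" (what --bind '[::]:port' would do);
# with IPV6_V6ONLY off it accepts IPv4 traffic as well.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))
srv.listen(1)
port = srv.getsockname()[1]

# A plain IPv4 client can reach the same port; the server sees it
# as an IPv4-mapped IPv6 address (::ffff:127.0.0.1).
cli = socket.create_connection(("127.0.0.1", port))
conn, addr = srv.accept()
print(addr[0])
```

This is also why a v4-only healthcheck keeps working after the change on most Kubernetes nodes, since `net.ipv6.bindv6only` defaults to off on Linux.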
This would also increase v6 support for EDA overall.
However, the template may need some logic to select v4 or v6 — for example if a cluster is v4-only or v6-only, or dual-stack with v4 primary, etc.
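That selection logic could be as simple as inspecting the pod's own IP families. The sketch below assumes the pod IPs are made available in an environment variable (e.g. via the downward API's `status.podIPs`); the variable name and mechanism are hypothetical, not something eda-server does today:

```python
import ipaddress


def choose_bind_host(pod_ips):
    """Pick a listener bind address from a comma-separated pod IP list.

    Any IPv6 address present -> bind the dual-stack wildcard "::"
    (covers v6-only and dual-stack clusters, either family primary).
    Otherwise -> keep the current IPv4 default "0.0.0.0".
    """
    ips = [ipaddress.ip_address(p.strip())
           for p in (pod_ips or "").split(",") if p.strip()]
    if any(ip.version == 6 for ip in ips):
        return "::"
    return "0.0.0.0"  # v4-only cluster, or no information available


print(choose_bind_host("fd10:ceff:1067::5,172.20.0.5"))  # -> ::
print(choose_bind_host("172.20.0.5"))                    # -> 0.0.0.0
```

Defaulting to `0.0.0.0` when no IPs are known keeps today's behavior for existing v4-only deployments.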
The failing healthchecks can be observed attempting to connect to the pod's IPv6 address.
