eda-api deployment healthchecks fail on v6-primary cluster #244

@thehonker

Description

On a dual-stack v6-primary Kubernetes cluster (one where the IPv6 cluster/service subnets are specified first to the kubelet), healthchecks for eda-api fail, so the pod is never listed as a service backend and is eventually killed.

Our case is RKE2 with this configuration for cluster/service cidrs:

cluster-cidr: "fd10:ceff:1067::/56,172.20.0.0/16"
service-cidr: "fd12:ceff:1067::/112,172.22.0.0/16"

I believe this could be fixed by changing 0.0.0.0 to [::] in this template for the gunicorn and daphne listeners:

https://github.com/ansible/eda-server-operator/blob/main/roles/eda/templates/eda-api.deployment.yaml.j2

This would also increase v6 support for EDA overall.
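As background on why binding to [::] is usually enough on its own: on Linux, a socket bound to [::] with IPV6_V6ONLY disabled also accepts IPv4 connections, which show up as v4-mapped addresses. A minimal Python sketch of that behavior (standard library only, assumes a Linux host with the default net.ipv6.bindv6only=0):

```python
import socket

# Bind a dual-stack listener on [::] (the IPv6 wildcard).
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
# Explicitly clear IPV6_V6ONLY so IPv4 clients are accepted too
# (this is the Linux default, but gunicorn/daphne rely on the OS setting).
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))  # port 0 -> ephemeral port
srv.listen(1)
port = srv.getsockname()[1]

# Connect over plain IPv4 loopback to the [::] listener.
cli = socket.create_connection(("127.0.0.1", port))
conn, addr = srv.accept()

# On Linux the IPv4 peer appears as a v4-mapped IPv6 address,
# e.g. "::ffff:127.0.0.1".
peer_is_mapped = addr[0].startswith("::ffff:")

cli.close()
conn.close()
srv.close()
```

So on a typical Linux node, a single [::] listener would serve probes arriving over either family, which is why the one-line template change plausibly covers the dual-stack case.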

However, the template may need some logic to select v4 or v6: for example, when a cluster is v4-only or v6-only, or when it is dual-stack v4-primary.
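One hedged sketch of what that selection could look like in the Jinja2 template. Note that `bind_ipv6` and the port value are hypothetical here, not existing eda-server-operator variables; they stand in for whatever family-selection mechanism the operator would actually add:

```yaml
# Hypothetical sketch only: "bind_ipv6" is NOT a current
# eda-server-operator variable, and the port is illustrative.
- name: eda-api
  command:
    - gunicorn
    - --bind
    - "{{ '[::]' if bind_ipv6 | default(false) else '0.0.0.0' }}:8000"
```

The operator could derive such a flag from the cluster's service CIDR ordering, or simply expose it as a spec field and let the admin choose.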

The failing healthchecks can be seen attempting the pod's IPv6 address.
