
Incorrect cluster name used in BuildUpstreamClusterState #2041

@sehorne

Description

Summary

The BuildUpstreamClusterState function in controller/external.go uses the wrong variable for cluster identification, causing AWS API calls to fail when importing or updating EKS clusters whose Rancher resource name differs from the actual AWS cluster name. Discovered while investigating Rancher issue 53778.

Bug Details

Affected Function: BuildUpstreamClusterState in controller/external.go

Issue: The function's name parameter (the EKSClusterConfig resource name) was used instead of clusterState.Cluster.Name (the actual AWS cluster name) in several critical locations:

  1. Line 56: upstreamSpec.DisplayName assignment
  2. Line 104: CheckEBSAddon AWS API call
  3. Lines 171, 174, 177: Error messages (these should use name for operator context)
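A minimal sketch of the mix-up, using simplified stand-ins for the operator's real types (Cluster, ClusterState, toString, and buildDisplayName here are illustrative, not the actual code in controller/external.go):

```go
package main

import "fmt"

// Cluster mimics the AWS SDK's EKS cluster type, where Name is a *string.
type Cluster struct {
	Name *string
}

type ClusterState struct {
	Cluster Cluster
}

// toString mirrors aws.ToString: it safely dereferences a *string.
func toString(s *string) string {
	if s == nil {
		return ""
	}
	return *s
}

// buildDisplayName shows the fixed behavior: the display name comes from
// the upstream AWS cluster, not from the Rancher resource name.
// The buggy version returned `name` (the EKSClusterConfig resource name).
func buildDisplayName(name string, state ClusterState) string {
	return toString(state.Cluster.Name)
}

func main() {
	awsName := "production-cluster"
	state := ClusterState{Cluster: Cluster{Name: &awsName}}
	// "rancher-prod-123" is the Rancher resource name; it must not leak
	// into AWS API calls or DisplayName.
	fmt.Println(buildDisplayName("rancher-prod-123", state)) // production-cluster
}
```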

Impact

When importing an EKS cluster where the Rancher EKSClusterConfig resource name differs from the AWS cluster name:

  • CheckEBSAddon fails because it receives the wrong cluster name
  • DisplayName is set incorrectly
  • Cluster import operations fail

This is a common scenario in import workflows, where operators use different naming conventions in Rancher than in AWS.

Root Cause

Variable naming confusion between:

  • name parameter: The Kubernetes EKSClusterConfig resource name (Rancher context)
  • clusterState.Cluster.Name: The actual AWS EKS cluster name (AWS context)

Fix

Changed the code to:

  • Use aws.ToString(clusterState.Cluster.Name) for AWS API calls and DisplayName
  • Use name parameter in error messages for operator debugging context

Files Changed:

  • controller/external.go - Bug fix
  • controller/external_test.go - New test coverage

Test Coverage

Added comprehensive tests in controller/external_test.go that verify:

  1. AWS API calls receive the correct AWS cluster name
  2. Error messages contain the resource name for operator context
  3. DisplayName is set to the AWS cluster name
  4. Behavior is correct when names match (baseline case)

These tests would have caught this bug.
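A condensed sketch of the kind of table-driven check such tests perform, using hypothetical stand-ins rather than the actual external_test.go code (displayNameFor approximates the fixed DisplayName logic under test):

```go
package main

import "fmt"

type Cluster struct{ Name *string }
type ClusterState struct{ Cluster Cluster }

func toString(s *string) string {
	if s == nil {
		return ""
	}
	return *s
}

// displayNameFor is a stand-in for the fixed DisplayName logic under test.
func displayNameFor(state ClusterState) string {
	return toString(state.Cluster.Name)
}

func main() {
	cases := []struct {
		desc         string
		resourceName string // Rancher EKSClusterConfig name
		awsName      string // upstream AWS EKS cluster name
	}{
		{"names differ", "rancher-prod-123", "production-cluster"},
		{"names match (baseline)", "production-cluster", "production-cluster"},
	}
	for _, c := range cases {
		state := ClusterState{Cluster: Cluster{Name: &c.awsName}}
		got := displayNameFor(state)
		fmt.Printf("%s: DisplayName=%q ok=%v\n", c.desc, got, got == c.awsName)
	}
}
```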

Reproduction Steps

  1. Create an EKS cluster in AWS named "production-cluster"
  2. Import it into Rancher with a different name like "rancher-prod-123"
  3. The import fails when checking for EBS CSI driver addon
  4. Error: cluster "rancher-prod-123" not found in AWS (should be "production-cluster")

Expected Behavior

  • AWS API calls should use the actual AWS cluster name from clusterState.Cluster.Name
  • Error messages should reference the Rancher resource name for debugging
  • Import should succeed regardless of naming differences

Actual Behavior (Before Fix)

  • AWS API calls received the Rancher resource name
  • API calls failed with "cluster not found" errors
  • Import operations failed

Environment

  • Component: eks-operator
  • Affected versions: all versions prior to this fix
  • Scenario: Cluster import with different Rancher/AWS names

Related Issues

This bug was discovered while investigating cluster import failures in rancher/rancher.

Pull Request

Branch: bug-53778-fix
Commit: Correct usage of 'name' when calling CheckEBSAddon, needs clusterState.Cluster.Name
