
Conversation

@teknaS47
Member

Description

Adding localization tests for other locales to the ocp-helm-nightly job

Which issue(s) does this PR fix

PR acceptance criteria

Please make sure that the following steps are complete:

  • GitHub Actions are completed and successful
  • Unit Tests are updated and passing
  • E2E Tests are updated and passing
  • Documentation is updated if necessary (requirement for new features)
  • Add a screenshot if the change is UX/UI related

How to test changes / Special notes to the reviewer

Signed-off-by: Sanket Saikia <sanketsaikia13@gmail.com>
@rhdh-qodo-merge

rhdh-qodo-merge bot commented Jan 19, 2026

PR Reviewer Guide 🔍

(Review updated until commit 39c67f4)

Here are some key observations to aid the review process:

🎫 Ticket compliance analysis 🔶

RHIDP-11153 - Partially compliant

Compliant requirements:

  • Translation e2e tests are executed as part of the existing nightly CI workflow.

Non-compliant requirements:

Requires further human verification:

  • Verify the nightly OCP Helm job actually runs the showcase-localization-fr Playwright project successfully and publishes the expected artifacts/results.
  • Confirm the OSD-GCP skip condition matches the intended job naming patterns in all relevant CI environments.
⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🔒 No security concerns identified
⚡ Recommended focus areas for review

CI Regression

The signature/parameter semantics for run_tests() and check_and_test() changed (new url positional, added artifacts_dir, moved defaults). Any existing callers not updated to the new positional ordering may now pass max_attempts/wait_seconds into the wrong slots, or omit the URL entirely, which could break deployments/tests or artifact publishing in other job scripts not shown in this diff.
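The positional hazard can be seen with a stub. This is a hypothetical, self-contained sketch (the stub only echoes its bindings; names mirror the real function but none of the CI plumbing is included):

```shell
# Stub of the NEW signature: url moved into slot 4, artifacts_dir into slot 5.
run_tests() {
  local release_name=$1 namespace=$2 playwright_project=$3
  local url=$4
  local artifacts_dir="${5:-${namespace}}"
  echo "url=${url} artifacts_dir=${artifacts_dir}"
}

# A stale caller written against the old signature, still passing
# max_attempts/wait_seconds in slots 4-5:
run_tests "rhdh" "showcase" "showcase" 30 30
# prints: url=30 artifacts_dir=30  -- the URL slot silently gets a retry count
```

Any such caller would target an invalid BASE_URL and publish artifacts under a numeric directory, which is why auditing all call sites outside this diff matters.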

run_tests() {
  local release_name=$1
  local namespace=$2
  local playwright_project=$3
  local url=$4
  local artifacts_dir="${5:-${namespace}}"

  CURRENT_DEPLOYMENT=$((CURRENT_DEPLOYMENT + 1))
  save_status_deployment_namespace $CURRENT_DEPLOYMENT "$namespace"
  save_status_failed_to_deploy $CURRENT_DEPLOYMENT false

  BASE_URL="${url}"
  export BASE_URL
  log::info "BASE_URL: ${BASE_URL}"
  log::info "Running Playwright project '${playwright_project}' against namespace '${namespace}'"

  cd "${DIR}/../../e2e-tests"
  local e2e_tests_dir
  e2e_tests_dir=$(pwd)

  yarn install --immutable > /tmp/yarn.install.log.txt 2>&1
  INSTALL_STATUS=$?
  if [ $INSTALL_STATUS -ne 0 ]; then
    log::error "=== YARN INSTALL FAILED ==="
    cat /tmp/yarn.install.log.txt
    exit $INSTALL_STATUS
  else
    log::success "Yarn install completed successfully."
  fi

  yarn playwright install chromium

  Xvfb :99 &
  export DISPLAY=:99

  (
    set -e
    log::info "Using PR container image: ${TAG_NAME}"
    # Run Playwright directly with --project flag instead of using yarn script aliases
    yarn playwright test --project="${playwright_project}"
  ) 2>&1 | tee "/tmp/${LOGFILE}"

  local RESULT=${PIPESTATUS[0]}

  pkill Xvfb || true

  mkdir -p "${ARTIFACT_DIR}/${artifacts_dir}/test-results"
  mkdir -p "${ARTIFACT_DIR}/${artifacts_dir}/attachments/screenshots"
  cp -a "${e2e_tests_dir}/test-results/"* "${ARTIFACT_DIR}/${artifacts_dir}/test-results" || true
  cp -a "${e2e_tests_dir}/${JUNIT_RESULTS}" "${ARTIFACT_DIR}/${artifacts_dir}/${JUNIT_RESULTS}" || true
  if [[ "${CI}" == "true" ]]; then
    cp "${ARTIFACT_DIR}/${artifacts_dir}/${JUNIT_RESULTS}" "${SHARED_DIR}/junit-results-${artifacts_dir}.xml" || true
  fi

  cp -a "${e2e_tests_dir}/screenshots/"* "${ARTIFACT_DIR}/${artifacts_dir}/attachments/screenshots/" || true
  ansi2html < "/tmp/${LOGFILE}" > "/tmp/${LOGFILE}.html"
  cp -a "/tmp/${LOGFILE}.html" "${ARTIFACT_DIR}/${artifacts_dir}" || true
  cp -a "${e2e_tests_dir}/playwright-report/"* "${ARTIFACT_DIR}/${artifacts_dir}" || true

  echo "Playwright project '${playwright_project}' in namespace '${namespace}' RESULT: ${RESULT}"
  if [ "${RESULT}" -ne 0 ]; then
    save_overall_result 1
    save_status_test_failed $CURRENT_DEPLOYMENT true
  else
    save_status_test_failed $CURRENT_DEPLOYMENT false
  fi
  if [ -f "${e2e_tests_dir}/${JUNIT_RESULTS}" ]; then
    failed_tests=$(grep -oP 'failures="\K[0-9]+' "${e2e_tests_dir}/${JUNIT_RESULTS}" | head -n 1)
    echo "Number of failed tests: ${failed_tests}"
    save_status_number_of_test_failed $CURRENT_DEPLOYMENT "${failed_tests}"
  else
    echo "JUnit results file not found: ${e2e_tests_dir}/${JUNIT_RESULTS}"
    local failed_tests="some"
    echo "Number of failed tests unknown, saving as $failed_tests."
    save_status_number_of_test_failed $CURRENT_DEPLOYMENT "${failed_tests}"
  fi
}
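One subtlety in the excerpt above: the Playwright run is piped through `tee`, so plain `$?` would report tee's exit status, not the test run's. A standalone sketch of why `${PIPESTATUS[0]}` is used:

```shell
# $? after a pipeline is the status of the LAST command (tee), which is
# almost always 0; PIPESTATUS preserves every stage's status.
false | tee /dev/null
status=("${PIPESTATUS[@]}")   # snapshot immediately; the next command clobbers it
echo "first=${status[0]} last=${status[1]}"
# prints: first=1 last=0
```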

check_backstage_running() {
  local release_name=$1
  local namespace=$2
  local url=$3
  local max_attempts=${4:-30}
  local wait_seconds=${5:-30}

  if [ -z "${url}" ]; then
    log::error "Error: URL is not set. Please provide a valid URL."
    return 1
  fi

  log::info "Checking if Backstage is up and running at ${url}"

  for ((i = 1; i <= max_attempts; i++)); do
    # Check HTTP status
    local http_status
    http_status=$(curl --insecure -I -s -o /dev/null -w "%{http_code}" "${url}")

    if [ "${http_status}" -eq 200 ]; then
      log::success "✅ Backstage is up and running!"
      return 0
    else
      log::warn "Attempt ${i} of ${max_attempts}: Backstage not yet available (HTTP Status: ${http_status})"
      oc get pods -n "${namespace}"
      sleep "${wait_seconds}"
    fi
  done

  log::error "❌ Failed to reach Backstage at ${url} after ${max_attempts} attempts."
  oc get events -n "${namespace}" --sort-by='.lastTimestamp' | tail -10
  mkdir -p "${ARTIFACT_DIR}/${namespace}"
  cp -a "/tmp/${LOGFILE}" "${ARTIFACT_DIR}/${namespace}/" || true
  save_all_pod_logs "${namespace}"
  return 1
}

install_olm() {
  if operator-sdk olm status > /dev/null 2>&1; then
    log::warn "OLM is already installed."
  else
    log::info "OLM is not installed. Installing..."
    operator-sdk olm install
  fi
}

uninstall_olm() {
  if operator-sdk olm status > /dev/null 2>&1; then
    log::info "OLM is installed. Uninstalling..."
    operator-sdk olm uninstall
  else
    log::info "OLM is not installed. Nothing to uninstall."
  fi
}

# Installs the Red Hat OpenShift Pipelines operator if not already installed
# Use waitfor_pipelines_operator to wait for the operator to be ready
install_pipelines_operator() {
  DISPLAY_NAME="Red Hat OpenShift Pipelines"
  # Check if operator is already installed
  if oc get csv -n "openshift-operators" | grep -q "${DISPLAY_NAME}"; then
    log::warn "Red Hat OpenShift Pipelines operator is already installed."
  else
    log::info "Red Hat OpenShift Pipelines operator is not installed. Installing..."
    # Install the operator and wait for deployment
    install_subscription openshift-pipelines-operator openshift-operators latest openshift-pipelines-operator-rh redhat-operators openshift-marketplace
  fi

  # Wait for Tekton Pipeline CRD to be registered before proceeding
  log::info "Waiting for Tekton Pipeline CRD to be registered..."
  timeout 120 bash -c '
    until oc get crd pipelines.tekton.dev &>/dev/null; do
      echo "Waiting for pipelines.tekton.dev CRD..." # log::* helpers are not defined in the bash -c child shell
      sleep 5
    done
  ' || {
    log::error "Error: Timed out waiting for Tekton Pipeline CRD to be registered."
    return 1
  }
  log::success "Tekton Pipeline CRD is available."
}

waitfor_pipelines_operator() {
  wait_for_deployment "openshift-operators" "pipelines"
  wait_for_endpoint "tekton-pipelines-webhook" "openshift-pipelines"
}

# Installs the Tekton Pipelines if not already installed (alternative of OpenShift Pipelines for Kubernetes clusters)
# Use waitfor_tekton_pipelines to wait for the operator to be ready
install_tekton_pipelines() {
  DISPLAY_NAME="tekton-pipelines-webhook"
  if oc get pods -n "tekton-pipelines" | grep -q "${DISPLAY_NAME}"; then
    log::info "Tekton Pipelines are already installed."
  else
    log::info "Tekton Pipelines is not installed. Installing..."
    kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
  fi
}

waitfor_tekton_pipelines() {
  DISPLAY_NAME="tekton-pipelines-webhook"
  wait_for_deployment "tekton-pipelines" "${DISPLAY_NAME}"
  wait_for_endpoint "tekton-pipelines-webhook" "tekton-pipelines"

  # Wait for Tekton Pipeline CRD to be registered before proceeding
  log::info "Waiting for Tekton Pipeline CRD to be registered..."
  timeout 120 bash -c '
    until kubectl get crd pipelines.tekton.dev &>/dev/null; do
      echo "Waiting for pipelines.tekton.dev CRD..." # log::* helpers are not defined in the bash -c child shell
      sleep 5
    done
  ' || {
    log::error "Error: Timed out waiting for Tekton Pipeline CRD to be registered."
    return 1
  }
  log::success "Tekton Pipeline CRD is available."
}

delete_tekton_pipelines() {
  log::info "Checking for Tekton Pipelines installation..."
  # Check if tekton-pipelines namespace exists
  if kubectl get namespace tekton-pipelines &> /dev/null; then
    log::info "Found Tekton Pipelines installation. Attempting to delete..."
    # Delete the resources and ignore errors
    kubectl delete -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml --ignore-not-found=true 2> /dev/null || true
    # Wait for namespace deletion (with timeout)
    log::info "Waiting for Tekton Pipelines namespace to be deleted..."
    timeout 30 bash -c '
        while kubectl get namespace tekton-pipelines &> /dev/null; do
            echo "Waiting for tekton-pipelines namespace deletion..."
            sleep 5
        done
        echo "Tekton Pipelines deleted successfully." # log::* helpers are not defined in the bash -c child shell
        ' || log::warn "Warning: Timed out waiting for namespace deletion, continuing..."
  else
    log::info "Tekton Pipelines is not installed. Nothing to delete."
  fi
}

cluster_setup_ocp_helm() {
  # first install all operators to run the installation in parallel
  install_pipelines_operator
  install_crunchy_postgres_ocp_operator

  # Skip orchestrator infra installation on OSD-GCP due to infrastructure limitations
  if [[ ! "${JOB_NAME}" =~ osd-gcp ]]; then
    install_orchestrator_infra_chart
  else
    echo "Skipping orchestrator-infra installation on OSD-GCP environment"
  fi

  # then wait for the right status one by one
  waitfor_pipelines_operator
  waitfor_crunchy_postgres_ocp_operator
}

cluster_setup_ocp_operator() {
  # first install all operators to run the installation in parallel
  install_pipelines_operator
  install_crunchy_postgres_ocp_operator
  install_serverless_ocp_operator
  install_serverless_logic_ocp_operator

  # then wait for the right status one by one
  waitfor_pipelines_operator
  waitfor_crunchy_postgres_ocp_operator
  waitfor_serverless_ocp_operator
  waitfor_serverless_logic_ocp_operator
}

cluster_setup_k8s_operator() {
  # first install all operators to run the installation in parallel
  install_olm
  install_tekton_pipelines
  # install_crunchy_postgres_k8s_operator # Works with K8s but disabled in values file

  # then wait for the right status one by one
  waitfor_tekton_pipelines
  # waitfor_crunchy_postgres_k8s_operator
}

cluster_setup_k8s_helm() {
  # first install all operators to run the installation in parallel
  # install_olm
  install_tekton_pipelines
  # install_crunchy_postgres_k8s_operator # Works with K8s but disabled in values file

  # then wait for the right status one by one
  waitfor_tekton_pipelines
  # waitfor_crunchy_postgres_k8s_operator
}

install_orchestrator_infra_chart() {
  ORCH_INFRA_NS="orchestrator-infra"
  configure_namespace ${ORCH_INFRA_NS}

  log::info "Deploying orchestrator-infra chart"
  cd "${DIR}"
  helm upgrade -i orch-infra -n "${ORCH_INFRA_NS}" \
    "oci://quay.io/rhdh/orchestrator-infra-chart" --version "${CHART_VERSION}" \
    --wait --timeout=5m \
    --set serverlessLogicOperator.subscription.spec.installPlanApproval=Automatic \
    --set serverlessOperator.subscription.spec.installPlanApproval=Automatic

  until [ "$(oc get pods -n openshift-serverless --no-headers 2> /dev/null | wc -l)" -gt 0 ]; do
    sleep 5
  done

  until [ "$(oc get pods -n openshift-serverless-logic --no-headers 2> /dev/null | wc -l)" -gt 0 ]; do
    sleep 5
  done

  oc wait pod --all --for=condition=Ready --namespace=openshift-serverless --timeout=5m
  oc wait pod --all --for=condition=Ready --namespace=openshift-serverless-logic --timeout=5m

  oc get crd | grep "sonataflow" || echo "Sonataflow CRDs not found"
  oc get crd | grep "knative" || echo "Serverless CRDs not found"
}

# Helper function to get common helm set parameters
get_image_helm_set_params() {
  local params=""

  # Add image repository
  params+="--set upstream.backstage.image.repository=${QUAY_REPO} "

  # Add image tag
  params+="--set upstream.backstage.image.tag=${TAG_NAME} "

  echo "${params}"
}
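Since the helper returns the `--set` flags as one whitespace-joined string, call sites must expand it unquoted (hence the `shellcheck disable=SC2046` annotations below). An array-based variant, shown here only as a sketch with placeholder values, would sidestep word-splitting if a value ever contained spaces:

```shell
# Sketch: build helm flags in a bash array instead of a string.
# QUAY_REPO / TAG_NAME values are placeholders, not real CI values.
QUAY_REPO="quay.io/rhdh/rhdh-hub-rhel9"
TAG_NAME="pr-4020"
image_params=(
  --set "upstream.backstage.image.repository=${QUAY_REPO}"
  --set "upstream.backstage.image.tag=${TAG_NAME}"
)
# Call sites would then pass the flags safely quoted:
#   helm upgrade -i ... "${image_params[@]}"
echo "${#image_params[@]} args"
# prints: 4 args
```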

# Helper function to perform helm install/upgrade
perform_helm_install() {
  local release_name=$1
  local namespace=$2
  local value_file=$3

  # shellcheck disable=SC2046
  helm upgrade -i "${release_name}" -n "${namespace}" \
    "${HELM_CHART_URL}" --version "${CHART_VERSION}" \
    -f "${DIR}/value_files/${value_file}" \
    --set global.clusterRouterBase="${K8S_CLUSTER_ROUTER_BASE}" \
    $(get_image_helm_set_params)
}

base_deployment() {
  configure_namespace ${NAME_SPACE}

  deploy_redis_cache "${NAME_SPACE}"

  cd "${DIR}"
  local rhdh_base_url="https://${RELEASE_NAME}-developer-hub-${NAME_SPACE}.${K8S_CLUSTER_ROUTER_BASE}"
  apply_yaml_files "${DIR}" "${NAME_SPACE}" "${rhdh_base_url}"
  log::info "Deploying image from repository: ${QUAY_REPO}, TAG_NAME: ${TAG_NAME}, in NAME_SPACE: ${NAME_SPACE}"
  perform_helm_install "${RELEASE_NAME}" "${NAME_SPACE}" "${HELM_CHART_VALUE_FILE_NAME}"

  deploy_orchestrator_workflows "${NAME_SPACE}"
}

rbac_deployment() {
  configure_namespace "${NAME_SPACE_POSTGRES_DB}"
  configure_namespace "${NAME_SPACE_RBAC}"
  configure_external_postgres_db "${NAME_SPACE_RBAC}"

  # Initiate rbac instance deployment.
  local rbac_rhdh_base_url="https://${RELEASE_NAME_RBAC}-developer-hub-${NAME_SPACE_RBAC}.${K8S_CLUSTER_ROUTER_BASE}"
  apply_yaml_files "${DIR}" "${NAME_SPACE_RBAC}" "${rbac_rhdh_base_url}"
  log::info "Deploying image from repository: ${QUAY_REPO}, TAG_NAME: ${TAG_NAME}, in NAME_SPACE: ${RELEASE_NAME_RBAC}"
  perform_helm_install "${RELEASE_NAME_RBAC}" "${NAME_SPACE_RBAC}" "${HELM_CHART_RBAC_VALUE_FILE_NAME}"

  # NOTE: This is a workaround to allow the sonataflow platform to connect to the external postgres db using ssl.
  # Wait for the sonataflow database creation job to complete with robust error handling
  if ! wait_for_job_completion "${NAME_SPACE_RBAC}" "${RELEASE_NAME_RBAC}-create-sonataflow-database" 10 10; then
    echo "❌ Failed to create sonataflow database. Aborting RBAC deployment."
    return 1
  fi
  oc -n "${NAME_SPACE_RBAC}" patch sfp sonataflow-platform --type=merge \
    -p '{"spec":{"services":{"jobService":{"podTemplate":{"container":{"env":[{"name":"QUARKUS_DATASOURCE_REACTIVE_URL","value":"postgresql://postgress-external-db-primary.postgress-external-db.svc.cluster.local:5432/sonataflow?search_path=jobs-service&sslmode=require&ssl=true&trustAll=true"},{"name":"QUARKUS_DATASOURCE_REACTIVE_SSL_MODE","value":"require"},{"name":"QUARKUS_DATASOURCE_REACTIVE_TRUST_ALL","value":"true"}]}}}}}}'
  oc rollout restart deployment/sonataflow-platform-jobs-service -n "${NAME_SPACE_RBAC}"

  # initiate orchestrator workflows deployment
  deploy_orchestrator_workflows "${NAME_SPACE_RBAC}"
}

initiate_deployments() {
  cd "${DIR}"
  base_deployment
  rbac_deployment
}

# OSD-GCP specific deployment functions that merge diff files and skip orchestrator workflows
base_deployment_osd_gcp() {
  configure_namespace ${NAME_SPACE}

  deploy_redis_cache "${NAME_SPACE}"

  cd "${DIR}"
  local rhdh_base_url="https://${RELEASE_NAME}-developer-hub-${NAME_SPACE}.${K8S_CLUSTER_ROUTER_BASE}"
  apply_yaml_files "${DIR}" "${NAME_SPACE}" "${rhdh_base_url}"

  # Merge base values with OSD-GCP diff file
  yq_merge_value_files "merge" "${DIR}/value_files/${HELM_CHART_VALUE_FILE_NAME}" "${DIR}/value_files/${HELM_CHART_OSD_GCP_DIFF_VALUE_FILE_NAME}" "/tmp/merged-values_showcase_OSD-GCP.yaml"
  mkdir -p "${ARTIFACT_DIR}/${NAME_SPACE}"
  cp -a "/tmp/merged-values_showcase_OSD-GCP.yaml" "${ARTIFACT_DIR}/${NAME_SPACE}/" # Save the final value-file into the artifacts directory.

  log::info "Deploying image from repository: ${QUAY_REPO}, TAG_NAME: ${TAG_NAME}, in NAME_SPACE: ${NAME_SPACE}"

  # shellcheck disable=SC2046
  helm upgrade -i "${RELEASE_NAME}" -n "${NAME_SPACE}" \
    "${HELM_CHART_URL}" --version "${CHART_VERSION}" \
    -f "/tmp/merged-values_showcase_OSD-GCP.yaml" \
    --set global.clusterRouterBase="${K8S_CLUSTER_ROUTER_BASE}" \
    $(get_image_helm_set_params)

  # Skip orchestrator workflows deployment for OSD-GCP
  log::warn "Skipping orchestrator workflows deployment on OSD-GCP environment"
}

rbac_deployment_osd_gcp() {
  configure_namespace "${NAME_SPACE_POSTGRES_DB}"
  configure_namespace "${NAME_SPACE_RBAC}"
  configure_external_postgres_db "${NAME_SPACE_RBAC}"

  # Initiate rbac instance deployment.
  local rbac_rhdh_base_url="https://${RELEASE_NAME_RBAC}-developer-hub-${NAME_SPACE_RBAC}.${K8S_CLUSTER_ROUTER_BASE}"
  apply_yaml_files "${DIR}" "${NAME_SPACE_RBAC}" "${rbac_rhdh_base_url}"

  # Merge RBAC values with OSD-GCP diff file
  yq_merge_value_files "merge" "${DIR}/value_files/${HELM_CHART_RBAC_VALUE_FILE_NAME}" "${DIR}/value_files/${HELM_CHART_RBAC_OSD_GCP_DIFF_VALUE_FILE_NAME}" "/tmp/merged-values_showcase-rbac_OSD-GCP.yaml"
  mkdir -p "${ARTIFACT_DIR}/${NAME_SPACE_RBAC}"
  cp -a "/tmp/merged-values_showcase-rbac_OSD-GCP.yaml" "${ARTIFACT_DIR}/${NAME_SPACE_RBAC}/" # Save the final value-file into the artifacts directory.

  log::info "Deploying image from repository: ${QUAY_REPO}, TAG_NAME: ${TAG_NAME}, in NAME_SPACE: ${RELEASE_NAME_RBAC}"

  # shellcheck disable=SC2046
  helm upgrade -i "${RELEASE_NAME_RBAC}" -n "${NAME_SPACE_RBAC}" \
    "${HELM_CHART_URL}" --version "${CHART_VERSION}" \
    -f "/tmp/merged-values_showcase-rbac_OSD-GCP.yaml" \
    --set global.clusterRouterBase="${K8S_CLUSTER_ROUTER_BASE}" \
    $(get_image_helm_set_params)

  # Skip orchestrator workflows deployment for OSD-GCP
  log::warn "Skipping orchestrator workflows deployment on OSD-GCP RBAC environment"
}

initiate_deployments_osd_gcp() {
  cd "${DIR}"
  base_deployment_osd_gcp
  rbac_deployment_osd_gcp
}

# install base RHDH deployment before upgrade
initiate_upgrade_base_deployments() {
  local release_name=$1
  local namespace=$2
  local url=$3
  local max_attempts=${4:-30} # Default to 30 if not set
  local wait_seconds=${5:-30}

  log::info "Initiating base RHDH deployment before upgrade"

  CURRENT_DEPLOYMENT=$((CURRENT_DEPLOYMENT + 1))
  save_status_deployment_namespace $CURRENT_DEPLOYMENT "$namespace"

  configure_namespace "${namespace}"

  deploy_redis_cache "${namespace}"

  cd "${DIR}"

  apply_yaml_files "${DIR}" "${namespace}" "${url}"
  log::info "Deploying image from base repository: ${QUAY_REPO_BASE}, TAG_NAME_BASE: ${TAG_NAME_BASE}, in NAME_SPACE: ${namespace}"

  # Get dynamic value file path based on previous release version
  local previous_release_value_file
  previous_release_value_file=$(get_previous_release_value_file "showcase")
  echo "Using dynamic value file: ${previous_release_value_file}"

  helm upgrade -i "${release_name}" -n "${namespace}" \
    "${HELM_CHART_URL}" --version "${CHART_VERSION_BASE}" \
    -f "${previous_release_value_file}" \
    --set global.clusterRouterBase="${K8S_CLUSTER_ROUTER_BASE}" \
    --set upstream.backstage.image.repository="${QUAY_REPO_BASE}" \
    --set upstream.backstage.image.tag="${TAG_NAME_BASE}"
}

initiate_upgrade_deployments() {
  local release_name=$1
  local namespace=$2
  local url=$3
  local max_attempts=${4:-30} # Default to 30 if not set
  local wait_seconds=${5:-30}
  local wait_upgrade="10m"

  log::info "Initiating upgrade deployment"
  cd "${DIR}"

  yq_merge_value_files "merge" "${DIR}/value_files/${HELM_CHART_VALUE_FILE_NAME}" "${DIR}/value_files/diff-values_showcase_upgrade.yaml" "/tmp/merged_value_file.yaml"
  log::info "Deploying image from repository: ${QUAY_REPO}, TAG_NAME: ${TAG_NAME}, in NAME_SPACE: ${NAME_SPACE}"

  helm upgrade -i "${RELEASE_NAME}" -n "${NAME_SPACE}" \
    "${HELM_CHART_URL}" --version "${CHART_VERSION}" \
    -f "/tmp/merged_value_file.yaml" \
    --set global.clusterRouterBase="${K8S_CLUSTER_ROUTER_BASE}" \
    --set upstream.backstage.image.repository="${QUAY_REPO}" \
    --set upstream.backstage.image.tag="${TAG_NAME}" \
    --wait --timeout=${wait_upgrade}

  oc get pods -n "${namespace}"
  save_all_pod_logs "${namespace}"
}

initiate_runtime_deployment() {
  local release_name=$1
  local namespace=$2
  configure_namespace "${namespace}"
  uninstall_helmchart "${namespace}" "${release_name}"

  oc apply -f "$DIR/resources/postgres-db/dynamic-plugins-root-PVC.yaml" -n "${namespace}"

  # shellcheck disable=SC2046
  helm upgrade -i "${release_name}" -n "${namespace}" \
    "${HELM_CHART_URL}" --version "${CHART_VERSION}" \
    -f "$DIR/resources/postgres-db/values-showcase-postgres.yaml" \
    --set global.clusterRouterBase="${K8S_CLUSTER_ROUTER_BASE}" \
    $(get_image_helm_set_params)
}

initiate_sanity_plugin_checks_deployment() {
  local release_name=$1
  local name_space_sanity_plugins_check=$2
  local sanity_plugins_url=$3

  configure_namespace "${name_space_sanity_plugins_check}"
  uninstall_helmchart "${name_space_sanity_plugins_check}" "${release_name}"
  deploy_redis_cache "${name_space_sanity_plugins_check}"
  apply_yaml_files "${DIR}" "${name_space_sanity_plugins_check}" "${sanity_plugins_url}"
  yq_merge_value_files "overwrite" "${DIR}/value_files/${HELM_CHART_VALUE_FILE_NAME}" "${DIR}/value_files/${HELM_CHART_SANITY_PLUGINS_DIFF_VALUE_FILE_NAME}" "/tmp/${HELM_CHART_SANITY_PLUGINS_MERGED_VALUE_FILE_NAME}"
  mkdir -p "${ARTIFACT_DIR}/${name_space_sanity_plugins_check}"
  cp -a "/tmp/${HELM_CHART_SANITY_PLUGINS_MERGED_VALUE_FILE_NAME}" "${ARTIFACT_DIR}/${name_space_sanity_plugins_check}/" || true # Save the final value-file into the artifacts directory.
  # shellcheck disable=SC2046
  helm upgrade -i "${release_name}" -n "${name_space_sanity_plugins_check}" \
    "${HELM_CHART_URL}" --version "${CHART_VERSION}" \
    -f "/tmp/${HELM_CHART_SANITY_PLUGINS_MERGED_VALUE_FILE_NAME}" \
    --set global.clusterRouterBase="${K8S_CLUSTER_ROUTER_BASE}" \
    $(get_image_helm_set_params) \
    --set orchestrator.enabled=true
}

check_and_test() {
  local release_name=$1
  local namespace=$2
  local playwright_project=$3
  local url=$4
  local max_attempts=${5:-30}              # Default to 30 if not set
  local wait_seconds=${6:-30}              # Default to 30 if not set
  local artifacts_dir="${7:-${namespace}}" # Default to namespace if not set

  if check_backstage_running "${release_name}" "${namespace}" "${url}" "${max_attempts}" "${wait_seconds}"; then
    echo "Display pods for verification..."
    oc get pods -n "${namespace}"
    run_tests "${release_name}" "${namespace}" "${playwright_project}" "${url}" "${artifacts_dir}"
  else
    : # remainder of the else branch is truncated in this review excerpt
  fi
}

Brittle Locator

A raw CSS locator (#search-bar-text-field) was introduced (with lint suppression). This may be less stable across UI changes and may bypass Playwright’s recommended semantic locator strategy, increasing flakiness in CI (especially in localized runs where accessible names/placeholders may vary).

  );
  await searchBar.click();
  await searchBar.fill("test query term");
  expect(await uiHelper.isBtnVisibleByTitle("Clear")).toBeTruthy();
  const dropdownList = page.getByRole("listbox");
  await expect(dropdownList).toBeVisible();
  await searchBar.press("Enter");
  await uiHelper.verifyHeading(t["rhdh"][lang]["app.search.title"]);
  // eslint-disable-next-line playwright/no-raw-locators
  const searchResultPageInput = page.locator("#search-bar-text-field");
  await expect(searchResultPageInput).toHaveValue("test query term");
});
📚 Focus areas based on broader codebase context

Missing LOCALE

run_localization_tests() triggers the localization Playwright project but does not set LOCALE=fr (or otherwise ensure the locale is applied). Validate that the Playwright project itself injects LOCALE; otherwise the job may silently run in the default en locale and not actually test translations. (Ref 2, Ref 3)

run_localization_tests() {
  local url="https://${RELEASE_NAME}-developer-hub-${NAME_SPACE}.${K8S_CLUSTER_ROUTER_BASE}"
  log::section "Running localization tests"
  # French (fr) - uses custom artifacts_dir to avoid overwriting main showcase test artifacts
  check_and_test "${RELEASE_NAME}" "${NAME_SPACE}" "${PW_PROJECT_SHOWCASE_LOCALIZATION_FR}" "${url}" "" "" "${PW_PROJECT_SHOWCASE_LOCALIZATION_FR}"
}

Reference reasoning: The existing i18n E2E documentation specifies that locale selection for tests is controlled via the LOCALE environment variable (defaulting to en when unset). To reliably execute French localization coverage in CI, the job should set LOCALE=fr (or an equivalent mechanism) when running the localization test project.
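If the Playwright project does not pin the locale itself, the job could scope it per invocation. A minimal sketch with stubbed helpers (`check_and_test` here is a stand-in that only echoes what it receives, not the real CI function):

```shell
# Stub standing in for the real CI helper:
check_and_test() { echo "project=$3 LOCALE=${LOCALE:-en}"; }

run_localization_tests() {
  local url="https://rhdh.example.test"   # placeholder URL
  # Prefix assignment scopes LOCALE to this one call, so other suites
  # keep the default (en) locale.
  LOCALE=fr check_and_test "rhdh" "showcase" "showcase-localization-fr" "${url}"
}

run_localization_tests
# prints: project=showcase-localization-fr LOCALE=fr
```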

📄 References
  1. redhat-developer/rhdh/docs/e2e-tests/CI.md [142-160]
  2. redhat-developer/rhdh/docs/e2e-tests/e2e-i18n-guide.md [149-161]
  3. redhat-developer/rhdh/docs/e2e-tests/e2e-i18n-guide.md [206-215]
  4. redhat-developer/rhdh/docs/e2e-tests/CI.md [84-87]
  5. redhat-developer/rhdh/docs/e2e-tests/CI.md [181-195]
  6. redhat-developer/rhdh/docs/e2e-tests/CI.md [88-93]
  7. redhat-developer/rhdh/docs/e2e-tests/e2e-i18n-guide.md [1-6]
  8. redhat-developer/rhdh/docs/e2e-tests/e2e-i18n-guide.md [189-205]

@rhdh-qodo-merge

PR Type

Enhancement, Tests


Description

  • Add localization tests for French language to OCP nightly jobs

  • Implement run_localization_tests() function with conditional execution

  • Skip localization tests for OSD-GCP environments

  • Update documentation across multiple memory and rules files

  • Fix terminology from "marketplace" to "extensions" in documentation


File Walkthrough

Relevant files
Enhancement
ocp-nightly.sh
Add localization test execution to nightly pipeline           

.ibm/pipelines/jobs/ocp-nightly.sh

  • Add conditional check to skip localization tests for OSD-GCP jobs
  • Implement new run_localization_tests() function
  • Function runs French localization tests against standard deployment
  • Uses existing check_and_test helper with localization project
+12/-0   
Documentation
add_extension_metadata.md
Fix directory naming in extension metadata docs                   

.claude/memories/add_extension_metadata.md

  • Fix documentation: change "marketplace directory" to "extensions
    directory"
+1/-1     
ci-e2e-testing.md
Document French localization tests in CI/E2E guide             

.claude/memories/ci-e2e-testing.md

  • Add showcase-localization-fr to list of Playwright test projects
  • Document new Localization Tests section with French language support
  • Add yarn command for running French localization tests
  • Update config map documentation to include localization project
+13/-0   
add_extension_metadata.mdc
Update extension metadata rules documentation                       

.cursor/rules/add_extension_metadata.mdc

  • Fix directory reference from "marketplace" to "extensions"
  • Update git commit message to reference "extensions" instead of
    "marketplace"
+2/-2     
ci-e2e-testing.mdc
Add localization tests to cursor rules documentation         

.cursor/rules/ci-e2e-testing.mdc

  • Add showcase-localization-fr to Playwright projects list
  • Document Localization Tests section with French language details
  • Add yarn command for French localization tests
  • Update config map references for localization project
+13/-0   
add_extension_metadata.md
Sync extension metadata rules documentation                           

.rulesync/rules/add_extension_metadata.md

  • Fix directory naming from "marketplace" to "extensions"
  • Update git commands to use correct extensions directory paths
  • Update commit message to reference "extensions"
+4/-4     
ci-e2e-testing.md
Sync CI/E2E testing rules documentation                                   

.rulesync/rules/ci-e2e-testing.md

  • Add showcase-localization-fr to test projects list
  • Document Localization Tests section with French language support
  • Add yarn command for running French localization tests
  • Update config map documentation for localization project
+13/-0   
CI.md
Document localization testing in CI documentation               

docs/e2e-tests/CI.md

  • Add new Localization Tests section documenting French language testing
  • Document supported languages and future expansion plans
  • Explain when tests run and skip conditions for OSD-GCP
  • Reference implementation location in nightly pipeline script
+12/-0   

@rhdh-qodo-merge

rhdh-qodo-merge bot commented Jan 19, 2026

PR Code Suggestions ✨

Explore these optional code suggestions:

Category | Suggestion | Impact
High-level
Improve scalability for adding more locales

The script ocp-nightly.sh hardcodes the French localization test. It should be
refactored to iterate over a list of locales to make it more scalable for adding
new languages in the future.

Examples:

.ibm/pipelines/jobs/ocp-nightly.sh [63-68]
run_localization_tests() {
  local url="https://${RELEASE_NAME}-developer-hub-${NAME_SPACE}.${K8S_CLUSTER_ROUTER_BASE}"
  log::section "Running localization tests"
  # French (fr)
  check_and_test "${RELEASE_NAME}" "${NAME_SPACE}" "${PW_PROJECT_SHOWCASE_LOCALIZATION_FR}" "${url}"
}

Solution Walkthrough:

Before:

# .ibm/pipelines/jobs/ocp-nightly.sh

run_localization_tests() {
  local url="..."
  log::section "Running localization tests"

  # French (fr)
  check_and_test "${RELEASE_NAME}" "${NAME_SPACE}" "${PW_PROJECT_SHOWCASE_LOCALIZATION_FR}" "${url}"
}

After:

# .ibm/pipelines/jobs/ocp-nightly.sh

run_localization_tests() {
  local url="..."
  log::section "Running localization tests"

  declare -A LOCALES_TO_TEST=(
    ["fr"]="${PW_PROJECT_SHOWCASE_LOCALIZATION_FR}"
    # Future locales can be added here
  )

  for locale in "${!LOCALES_TO_TEST[@]}"; do
    log::info "Running localization tests for ${locale}"
    check_and_test "${RELEASE_NAME}" "${NAME_SPACE}" "${LOCALES_TO_TEST[$locale]}" "${url}"
  done
}
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies a design limitation where the French locale test is hardcoded, and proposes a scalable, loop-based alternative that would significantly ease the addition of future languages, which is mentioned as a possibility in the PR's documentation.

Medium
General
Use wildcard matching for skip check

Replace the negated regex match (`!~`) with a negated glob pattern (`!= *osd-gcp*`) when checking the
JOB_NAME variable.

.ibm/pipelines/jobs/ocp-nightly.sh [37-39]

-if [[ ! "${JOB_NAME}" =~ osd-gcp ]]; then
+if [[ "${JOB_NAME}" != *osd-gcp* ]]; then
   run_localization_tests
 fi
Suggestion importance[1-10]: 4


Why: The suggestion correctly proposes using a shell wildcard pattern, which is more idiomatic and readable for this simple substring check, though the existing regex is also correct.

Low
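Both forms match the same job names; the difference is purely idiomatic. A quick check (the JOB_NAME value is a made-up example):

```shell
JOB_NAME="periodic-ci-redhat-developer-rhdh-main-e2e-osd-gcp-helm-nightly"

# Regex match: the right-hand side is an unanchored ERE, so any substring hit counts.
[[ "${JOB_NAME}" =~ osd-gcp ]] && echo "regex: match"

# Glob match: *osd-gcp* must cover the whole string to succeed.
[[ "${JOB_NAME}" == *osd-gcp* ]] && echo "glob: match"
# prints both lines for this JOB_NAME
```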

@teknaS47
Member Author

/test e2e-ocp-helm-nightly

@github-actions
Contributor

The image is available at:

/test e2e-ocp-helm

@teknaS47
Member Author

/test e2e-ocp-helm-nightly

@github-actions
Contributor

The image is available at:

/test e2e-ocp-helm

@invincibleJai
Member

@teknaS47
Member Author

/test e2e-ocp-helm-nightly

@github-actions
Contributor

The image is available at:

/test e2e-ocp-helm

@teknaS47
Member Author

/test e2e-ocp-helm

Member

@zdrapela zdrapela left a comment


The logic to run the tests looks good. We need to have CI fixed to verify everything works smoothly. I've seen failed tests in https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/redhat-developer_rhdh/4020/pull-ci-redhat-developer-rhdh-main-e2e-ocp-helm-nightly/2013461772190617600/artifacts/e2e-ocp-helm-nightly/redhat-developer-rhdh-ocp-helm-nightly/artifacts/showcase/index.html, but I'm not sure if it's caused by your test.

Anyway, I found some issues that are not a problem of this PR, but more of the code base. Please see my inline comments and let me know if you are up for fixing them or if I should rather take a look. I hope I explained myself well, but let me know if something is not clear 😉

@github-actions
Contributor

🚫 Image Push Skipped.

The container image push was skipped because the build was skipped (either due to [skip-build] tag or no relevant changes with existing image)

@teknaS47
Member Author

/test e2e-ocp-helm-nightly

@rhdh-qodo-merge

ⓘ Your monthly quota for Qodo has expired. Upgrade your plan
ⓘ Paying users. Check that your Qodo account is linked with this Git user account

@zdrapela
Member

/lgtm

@openshift-ci openshift-ci bot added the lgtm label Jan 28, 2026
@openshift-ci

openshift-ci bot commented Jan 28, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: zdrapela
Once this PR has been reviewed and has the lgtm label, please assign nickboldt for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@github-actions
Contributor

@openshift-ci openshift-ci bot removed the lgtm label Jan 28, 2026
@openshift-ci

openshift-ci bot commented Jan 28, 2026

New changes are detected. LGTM label has been removed.

@ciiay
Member

ciiay commented Jan 28, 2026

Hi @teknaS47, some e2e locale test changes are coming in an rhdh PR; please hold off on merging until that one gets in. Thank you 🤝

@github-actions
Contributor

@github-actions
Contributor

@zdrapela
Member

/hold until #4051 is merged

@sonarqubecloud

@teknaS47
Member Author

/unhold

@teknaS47
Member Author

/test e2e-ocp-helm-nightly

@rhdh-qodo-merge

ⓘ Your monthly quota for Qodo has expired. Upgrade your plan
ⓘ Paying users. Check that your Qodo account is linked with this Git user account


@github-actions
Contributor

@openshift-ci

openshift-ci bot commented Jan 30, 2026

@teknaS47: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/e2e-ocp-helm-nightly | f2a9b0b | link | false | /test e2e-ocp-helm-nightly

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
