Conversation
Signed-off-by: Jooho Lee <jlee@redhat.com>
Walkthrough

Shell and test scripts updated: the loopy launcher invocation path is adjusted and a utils.sh comment added; the KServe role now clones and runs tests from a role-local repo; the operator install selects a single pod deterministically; many test fixtures and paths are relocated; several YAML/config/test tweaks (quoting, descriptions, version bump, namespace flags, a skip, and an OperatorGroup rename). No public API signature changes.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant RoleScript as KServe role main.sh
    participant FS as Filesystem
    participant Git as Git
    participant Py as pytest
    RoleScript->>FS: check $ROLE_DIR/opendatahub-tests
    alt not present
        RoleScript->>Git: git clone opendatahub-tests.git -> $ROLE_DIR/opendatahub-tests
        Git-->>RoleScript: clone complete
    end
    RoleScript->>FS: cd $ROLE_DIR/opendatahub-tests/${PYTEST_PATH}
    alt PYTEST_MARKER set
        RoleScript->>Py: pytest -m "$PYTEST_MARKER"
    else
        RoleScript->>Py: pytest --pdb
    end
```

```mermaid
sequenceDiagram
    participant Script as operator/install/main.sh
    participant Kubectl as kubectl
    participant Wait as wait_for_pod_name_ready
    Script->>Kubectl: get pods | grep <prefix> | head -1 | awk '{print $1}'
    Kubectl-->>Script: pod_name (single) or empty
    alt pod_name found
        Script->>Wait: wait_for_pod_name_ready(pod_name)
    else
        Script->>Script: loop until timeout
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 5
🔭 Outside diff range comments (7)
src/commons/scripts/utils.sh (5)
423-429: Harden ROSA login: handle missing OCM_TOKEN and quote variables

The interactive fallback is currently only a comment, and an unquoted token can break on special characters.
Apply this diff to make login robust and deterministic:
```diff
-    # 4. can rosa login?
-    rosa login --token=$OCM_TOKEN
-    # rosa login --use-auth-code
+    # 4. can rosa login?
+    if [[ -n "$OCM_TOKEN" ]]; then
+      rosa login --token="$OCM_TOKEN"
+    else
+      warn "OCM_TOKEN is not set; falling back to auth-code login"
+      rosa login --use-auth-code
+    fi
```
366-373: Bug: check_if_result uses $? instead of the provided rc

The function ignores its argument and checks the exit code of the previous command instead.
Fix to use the rc parameter:
```diff
 function check_if_result() {
   rc=$1
-  if [[ $? != 0 ]]; then
+  if [[ $rc != 0 ]]; then
     die "FAIL"
   else
     success "PASS"
   fi
 }
```
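A self-contained sketch of the corrected helper illustrates the bug; the `die`/`success` loggers are stubs standing in for the real utils.sh helpers (the real `die` likely exits, which is omitted here so both branches can be shown):

```shell
#!/usr/bin/env bash
# Stub loggers (assumed names from utils.sh).
die()     { echo "FAIL: $1"; }
success() { echo "PASS: $1"; }

# Corrected version: test the rc argument. Checking $? instead would test
# whatever command ran last, not the code the caller passed in.
check_if_result() {
  local rc=$1
  if [[ $rc != 0 ]]; then
    die "rc=$rc"
  else
    success "rc=$rc"
  fi
}

some_command() { return 3; }
some_command || rc=$?     # capture the real exit code before anything else runs
check_if_result "${rc:-0}"
```

With the original `$?` check, the `[[ ]]` test would always see the exit status of the command immediately preceding it, so a caller passing a non-zero rc could still be reported as PASS.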
389-391: Undefined function 'light_info'

light_info is not defined anywhere; this will cause a runtime error.
Replace with an existing logger:
```diff
-    light_info " Please log in to your cluster as a cluster-admin"
+    info "Please log in to your cluster as a cluster-admin"
```
279-303: oc::wait::object::availability executes the command twice and ignores iterations

The command currently runs twice per loop (a side-effect risk), the timeout check uses a hard-coded 100 instead of iterations, and the warning message has a stray dollar sign.
Refactor to single execution and correct iteration handling:
```diff
 function oc::wait::object::availability() {
   local cmd=$1        # Command whose output we require
   local interval=$2   # How many seconds to sleep between tries
   local iterations=$3 # How many times we attempt to run the command
   ii=0

   info "[START] Wait for \"${cmd}\" "
-  while [ $ii -le $iterations ]; do
-
-    token=$($cmd &>/dev/null) && returncode=$? || returncode=$?
-    echo "$cmd " | sh
-
-    if [ $returncode -eq 0 ]; then
+  while [ $ii -le $iterations ]; do
+    # Execute once per loop
+    bash -c "$cmd" &>/dev/null
+    returncode=$?
+    if [ $returncode -eq 0 ]; then
       break
     fi
     ((ii = ii + 1))
-    if [ $ii -eq 100 ]; then
-      warn "${cmd} did not return a value$"
+    if [ $ii -eq $iterations ]; then
+      warn "${cmd} did not return a value"
       exit 1
     fi
     sleep $interval
   done
   success "[END] \"${cmd}\" is successfully done"
 }
```
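The retry pattern in isolation, as a hedged sketch with the `info`/`warn` loggers replaced by plain echo:

```shell
#!/usr/bin/env bash
# Generic retry loop: run a command up to $2 times, sleeping $3 seconds
# between attempts. The command executes exactly once per iteration.
retry() {
  local cmd=$1 iterations=$2 interval=$3 i
  for ((i = 1; i <= iterations; i++)); do
    if bash -c "$cmd" &>/dev/null; then
      echo "ok after $i attempt(s)"
      return 0
    fi
    sleep "$interval"
  done
  echo "gave up after $iterations attempts" >&2
  return 1
}

retry "true" 3 0            # succeeds on the first attempt
retry "false" 2 0 || true   # exhausts attempts and reports failure on stderr
```

Running the command via `bash -c` once and capturing `$?` immediately is what removes the double execution; the original both command-substituted `$cmd` and piped it into `sh` on every iteration.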
235-237: Undefined variable in wait_for_just_created_pod_ready

The timeout path uses $pod_name instead of $created_pod_name, causing "Pods not found" confusion.
```diff
-    oc get pods $pod_name -n $namespace
+    oc get pods "$created_pod_name" -n "$namespace"
```

src/roles/operator/install/main.sh (2)
160-165: Timeout path sets success and misreports wait time

On timeout it sets result="0" (success), and the minutes formula is off.
```diff
-    error "Timed out after $((30 * wait_counter / 60)) minutes waiting for pod with prefix: $OPERATOR_POD_PREFIX"
-    result="0"
+    error "Timed out after $((wait_counter * 10 / 60)) minutes waiting for pod with prefix: $OPERATOR_POD_PREFIX"
+    result="1"
```
179-179: Report uses $? instead of the tracked result

This may log an unrelated status code. Use $result for consistency.
```diff
-echo "${index_role_name}::${OPERATOR_NAME}::$?" >>${REPORT_FILE}
+echo "${index_role_name}::${OPERATOR_NAME}::${result}" >>"${REPORT_FILE}"
```
🧹 Nitpick comments (6)
src/commons/scripts/utils.sh (3)
110-113: Spurious "-n" in error message

error -n is printed literally; printf doesn't honor the -n flag. Use the logger correctly.
```diff
-    error -n "No pods found with selector $pod_selector in $pod_namespace. Pods may not be up yet."
+    error "No pods found with selector $pod_selector in $pod_namespace. Pods may not be up yet."
```
95-103: Quote selector/namespace and capture errors safely

Unquoted -l $pod_selector and -n $pod_namespace can break if they contain special characters or spaces. Quote the expansions.
```diff
-  pod_status=$(oc get pods -l $pod_selector -n $pod_namespace -o jsonpath="$JSONPATH")
+  pod_status=$(oc get pods -l "$pod_selector" -n "$pod_namespace" -o jsonpath="$JSONPATH")
```
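Why the quoting matters — a minimal sketch with a hypothetical selector value showing how an unquoted expansion word-splits into multiple arguments:

```shell
#!/usr/bin/env bash
# A label selector containing spaces; set-based selectors like this are
# legal in Kubernetes and break unquoted shell expansion.
pod_selector='app in (a, b)'

count_args() { echo $#; }

count_args -l $pod_selector    # unquoted: splits into 4 words -> 5 args
count_args -l "$pod_selector"  # quoted: one selector argument  -> 2 args
```

With the unquoted form, `oc` would receive `in`, `(a,`, and `b)` as separate positional arguments and fail (or silently match the wrong thing).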
330-348: Message mentions .github but code searches for .git

A minor mismatch between user guidance and behavior; it could mislead during troubleshooting.
Consider updating the message to “.git” or searching for “.github” if that’s intended.
loopy (1)
7-7: LGTM; consider python3 and exec for better propagation

The change is benign. Optionally, prefer python3 and exec to propagate the exit code and replace the shell.
```diff
-PYTHONPATH=.:src python cmd/loopy.py "$@"
+PYTHONPATH=.:src exec "${PYTHON:-python3}" cmd/loopy.py "$@"
```

src/roles/operator/install/main.sh (1)
154-154: Make pod selection more robust and deterministic

Good move adding head -1. Improve further by removing headers and extracting the name before limiting to the first match.
```diff
-    pod_name=$(oc get pod -n ${OPERATOR_NAMESPACE} | grep ${OPERATOR_POD_PREFIX} |head -1 | awk '{print $1}')
+    pod_name=$(oc get pod -n "${OPERATOR_NAMESPACE}" --no-headers | awk '{print $1}' | grep -m1 "${OPERATOR_POD_PREFIX}")
```

src/roles/kserve/tests/e2e-test/main.sh (1)
126-133: Remove dead commented code to reduce noise

The legacy /tmp path block is now fully superseded. Delete it to keep the script focused.
```diff
-#if [[ ${PYTEST_MARKER} != "" ]]; then
-#  echo "uv run pytest --setup-show --tc=distribution:upstream /tmp/opendatahub-tests/${PYTEST_PATH} -m \"${PYTEST_MARKER}\" -s"
-#  uv run pytest --setup-show --tc=distribution:upstream /tmp/opendatahub-tests/${PYTEST_PATH} -m "${PYTEST_MARKER}" -s
-#else
-#  echo "uv run pytest --setup-show --tc=distribution:upstream /tmp/opendatahub-tests/${PYTEST_PATH} -s"
-#  uv run pytest --setup-show --tc=distribution:upstream /tmp/opendatahub-tests/${PYTEST_PATH} -s
-#fi
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (7)
- loopy (1 hunks)
- src/commons/scripts/utils.sh (1 hunks)
- src/roles/clusters/openshift/crc/crc-install/config.yaml (2 hunks)
- src/roles/kserve/tests/create-devflag/config.yaml (2 hunks)
- src/roles/kserve/tests/e2e-test/main.sh (2 hunks)
- src/roles/operator/install/main.sh (1 hunks)
- tests/custom-context.json (3 hunks)
🧰 Additional context used
🪛 Shellcheck (0.10.0)
src/roles/kserve/tests/e2e-test/main.sh
[warning] 35-35: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
🔇 Additional comments (4)
src/roles/clusters/openshift/crc/crc-install/config.yaml (1)
34-34: Verify consumers tolerate string-typed defaults

Defaults for numeric/boolean-like fields are now quoted strings. If downstream scripts parse/compare them as strings (common for env vars), that's fine; if they expect numeric/boolean types via yq/jq, this may require explicit casting.
Please confirm:
- Any arithmetic using MEMORY/CPU/DISK_SIZE uses shell arithmetic (which will accept unquoted digits) or converts via yq/jq as numbers.
- ENABLE_MONITORING/ADD_CLUSTER_ADMIN_TO_DEVELOPER/RESET_CRC_SETUP are interpreted by is_positive or similar string-based evaluators.
If needed, I can scan the repo for consumers and adjust accordingly.
Also applies to: 38-38, 42-42, 50-50, 59-59, 63-63
src/roles/kserve/tests/create-devflag/config.yaml (3)
57-57: LGTM: clearer USER_NAME description

Improves clarity without affecting behavior.
73-73: LGTM: clarified TARGET_USER_NAME

The description accurately reflects the remote source case.
76-76: LGTM: clarified TARGET_BRANCH_NAME

No functional changes; improved guidance.
```bash
if [[ ! -d "opendatahub-tests" ]]; then
  git clone git@github.com:opendatahub-io/opendatahub-tests.git
fi
cd opendatahub-tests
#cd /tmp/opendatahub-tests
```
🛠️ Refactor suggestion
Clone/update flow: use HTTPS URL, shallow clone, handle failures, and keep repo fresh
- SSH URL requires keys and often fails in CI; prefer HTTPS read-only.
- If the directory already exists, you should fetch/checkout to a known ref for determinism.
- Add safe fallbacks for cd and clone errors (ShellCheck SC2164 also flagged the cd).
Apply:
```diff
-if [[ ! -d "opendatahub-tests" ]]; then
-  git clone git@github.com:opendatahub-io/opendatahub-tests.git
-fi
-cd opendatahub-tests
+if [[ ! -d "opendatahub-tests" ]]; then
+  git clone --depth=1 https://github.com/opendatahub-io/opendatahub-tests.git || {
+    echo "Error: failed to clone opendatahub-tests"; exit 1; }
+fi
+cd opendatahub-tests || { echo "Error: cannot cd into opendatahub-tests"; exit 1; }
+# Ensure the repo is up-to-date and on a deterministic ref (defaults to main if PYTEST_GIT_REF unset)
+git fetch --all --prune || { echo "Error: git fetch failed"; exit 1; }
+git checkout "${PYTEST_GIT_REF:-origin/HEAD}" 2>/dev/null || git checkout "${PYTEST_GIT_REF:-main}" || {
+  echo "Error: failed to checkout ${PYTEST_GIT_REF:-main}"; exit 1; }
+git pull --ff-only || { echo "Error: git pull failed"; exit 1; }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 if [[ ! -d "opendatahub-tests" ]]; then
-  git clone git@github.com:opendatahub-io/opendatahub-tests.git
+  git clone --depth=1 https://github.com/opendatahub-io/opendatahub-tests.git || {
+    echo "Error: failed to clone opendatahub-tests"; exit 1; }
 fi
-cd opendatahub-tests
+cd opendatahub-tests || { echo "Error: cannot cd into opendatahub-tests"; exit 1; }
+# Ensure the repo is up-to-date and on a deterministic ref (defaults to main if PYTEST_GIT_REF unset)
+git fetch --all --prune || { echo "Error: git fetch failed"; exit 1; }
+git checkout "${PYTEST_GIT_REF:-origin/HEAD}" 2>/dev/null || git checkout "${PYTEST_GIT_REF:-main}" || {
+  echo "Error: failed to checkout ${PYTEST_GIT_REF:-main}"; exit 1; }
+git pull --ff-only || { echo "Error: git pull failed"; exit 1; }
 #cd /tmp/opendatahub-tests
```
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 35-35: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
```diff
   echo "uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -m \"${PYTEST_MARKER}\" -s"
   uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -m "${PYTEST_MARKER}" -s
 else
-  echo "uv run pytest --setup-show --tc=distribution:upstream /tmp/opendatahub-tests/${PYTEST_PATH} -s"
-  uv run pytest --setup-show --tc=distribution:upstream /tmp/opendatahub-tests/${PYTEST_PATH} -s
+  echo "uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -s"
+  uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -s --pdb
 fi
```
🛠️ Refactor suggestion
Quote path arguments (handle spaces) and avoid redundant absolute paths after cd
- You cd into opendatahub-tests already; you can pass a relative path (${PYTEST_PATH}) and quote it.
- Without quoting, ROLE_DIR or PYTEST_PATH containing spaces will break pytest invocation.
```diff
-  echo "uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -m \"${PYTEST_MARKER}\" -s"
-  uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -m "${PYTEST_MARKER}" -s
+  echo "uv run pytest --setup-show --tc=distribution:upstream \"${PYTEST_PATH}\" -m \"${PYTEST_MARKER}\" -s"
+  uv run pytest --setup-show --tc=distribution:upstream "${PYTEST_PATH}" -m "${PYTEST_MARKER}" -s
@@
-  echo "uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -s"
-  uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -s --pdb
+  echo "uv run pytest --setup-show --tc=distribution:upstream \"${PYTEST_PATH}\" -s"
+  uv run pytest --setup-show --tc=distribution:upstream "${PYTEST_PATH}" -s --pdb
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-  echo "uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -m \"${PYTEST_MARKER}\" -s"
-  uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -m "${PYTEST_MARKER}" -s
+  echo "uv run pytest --setup-show --tc=distribution:upstream \"${PYTEST_PATH}\" -m \"${PYTEST_MARKER}\" -s"
+  uv run pytest --setup-show --tc=distribution:upstream "${PYTEST_PATH}" -m "${PYTEST_MARKER}" -s
 else
-  echo "uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -s"
-  uv run pytest --setup-show --tc=distribution:upstream $ROLE_DIR/opendatahub-tests/${PYTEST_PATH} -s --pdb
+  echo "uv run pytest --setup-show --tc=distribution:upstream \"${PYTEST_PATH}\" -s"
+  uv run pytest --setup-show --tc=distribution:upstream "${PYTEST_PATH}" -s --pdb
 fi
```
🤖 Prompt for AI Agents
In src/roles/kserve/tests/e2e-test/main.sh around lines 119 to 124, the pytest
invocations use unquoted path variables and an absolute ROLE_DIR path even
though the script has already cd'd into opendatahub-tests; update both echo and
run lines to quote the path arguments (\"${PYTEST_PATH}\") to handle spaces and
drop the redundant \"$ROLE_DIR/opendatahub-tests/\" prefix so you pass the
relative quoted path instead; ensure both the branch with -m and the branch with
--pdb are updated consistently.
```diff
     "role": "/home/jooho/temp/test-debug/loopy/src/schema/role.yaml",
     "unit": "/home/jooho/temp/test-debug/loopy/src/schema/unit.yaml",
     "playbook": "/home/jooho/temp/test-debug/loopy/src/schema/playbook.yaml"
   },
   "default_roles_dirs": [
-    "/tmp/a/loopy/src/roles",
-    "/tmp/a/loopy/unofficial_components/roles"
+    "/home/jooho/temp/test-debug/loopy/src/roles",
+    "/home/jooho/temp/test-debug/loopy/unofficial_components/roles"
   ],
   "default_units_dirs": [
-    "/tmp/a/loopy/unofficial_components/units"
+    "/home/jooho/temp/test-debug/loopy/unofficial_components/units"
   ],
   "default_playbooks_dirs": [
-    "/tmp/a/loopy/unofficial_components/playbooks"
+    "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks"
   ],
```
🛠️ Refactor suggestion
Hard-coded absolute paths to a local machine break portability; parameterize with an env-driven root
Most paths now point to /home/jooho/temp/test-debug/loopy, which will fail on CI and for other contributors. Use a single LOOPY_ROOT and derive all paths from it (or make them relative to repo root). If your loader supports env expansion, prefer ${LOOPY_ROOT}/...; otherwise, pre-process this file with envsubst.
Illustrative changes (apply similarly to all occurrences):
```diff
-  "role": "/home/jooho/temp/test-debug/loopy/src/schema/role.yaml",
-  "unit": "/home/jooho/temp/test-debug/loopy/src/schema/unit.yaml",
-  "playbook": "/home/jooho/temp/test-debug/loopy/src/schema/playbook.yaml"
+  "role": "${LOOPY_ROOT}/src/schema/role.yaml",
+  "unit": "${LOOPY_ROOT}/src/schema/unit.yaml",
+  "playbook": "${LOOPY_ROOT}/src/schema/playbook.yaml"
@@
-  "/home/jooho/temp/test-debug/loopy/src/roles",
-  "/home/jooho/temp/test-debug/loopy/unofficial_components/roles"
+  "${LOOPY_ROOT}/src/roles",
+  "${LOOPY_ROOT}/unofficial_components/roles"
@@
-  "/home/jooho/temp/test-debug/loopy/unofficial_components/units"
+  "${LOOPY_ROOT}/unofficial_components/units"
@@
-  "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks"
+  "${LOOPY_ROOT}/unofficial_components/playbooks"
@@
-  "/home/jooho/temp/test-debug/loopy/tests/test-data/roles"
+  "${LOOPY_ROOT}/tests/test-data/roles"
@@
-  "/home/jooho/temp/test-debug/loopy/tests/test-data/units"
+  "${LOOPY_ROOT}/tests/test-data/units"
@@
-  "/home/jooho/temp/test-debug/loopy/tests/test-data/playbooks"
+  "${LOOPY_ROOT}/tests/test-data/playbooks"
@@
-  "loopy_root_path": "/home/jooho/temp/test-debug/loopy",
+  "loopy_root_path": "${LOOPY_ROOT}",
@@
-  "commands_dir": "/home/jooho/temp/test-debug/loopy/src/cli/commands",
-  "cli_dir": "/home/jooho/temp/test-debug/loopy/src/cli",
-  "logics_dir": "/home/jooho/temp/test-debug/loopy/src/cli/logics",
-  "py_utils_dir": "/home/jooho/temp/test-debug/loopy/commons/python",
+  "commands_dir": "${LOOPY_ROOT}/src/cli/commands",
+  "cli_dir": "${LOOPY_ROOT}/src/cli",
+  "logics_dir": "${LOOPY_ROOT}/src/cli/logics",
+  "py_utils_dir": "${LOOPY_ROOT}/commons/python",
@@
-  "loopy_root_path": "/home/jooho/temp/test-debug/loopy",
+  "loopy_root_path": "${LOOPY_ROOT}",
@@
-  "commands_dir": "/home/jooho/temp/test-debug/loopy/src/cli/commands",
-  "cli_dir": "/home/jooho/temp/test-debug/loopy/src/cli",
-  "logics_dir": "/home/jooho/temp/test-debug/loopy/src/cli/logics",
-  "py_utils_dir": "/home/jooho/temp/test-debug/loopy/commons/python"
+  "commands_dir": "${LOOPY_ROOT}/src/cli/commands",
+  "cli_dir": "${LOOPY_ROOT}/src/cli",
+  "logics_dir": "${LOOPY_ROOT}/src/cli/logics",
+  "py_utils_dir": "${LOOPY_ROOT}/commons/python"
```

If env interpolation isn't supported by your consumer:
- Option A: keep this file as a template (tests/custom-context.json.tmpl) and generate tests/custom-context.json via envsubst in your test harness.
- Option B: switch to repo-relative paths and compute absolute paths in code from a single root.
Also applies to: 144-151, 198-208, 210-324, 327-451, 455-561, 565-571
🤖 Prompt for AI Agents
In tests/custom-context.json around lines 97 to 110 (and similarly at 144-151,
198-208, 210-324, 327-451, 455-561, 565-571), the file contains hard-coded
absolute paths like /home/jooho/temp/test-debug/loopy which break portability;
change these entries to derive from a single LOOPY_ROOT (e.g. use
${LOOPY_ROOT}/path/to/… if the loader supports env expansion) or convert to
repo-relative paths and compute absolute paths at runtime, and if env
interpolation is unavailable convert this file into a template
(tests/custom-context.json.tmpl) and generate tests/custom-context.json during
test setup with envsubst (or similar) so all path entries are produced from the
single root variable instead of fixed absolute paths.
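Option A can be sketched as follows; the template filename and the `LOOPY_ROOT` variable are illustrative, and `sed` is used here for substitution (envsubst from GNU gettext does the same job where available):

```shell
#!/usr/bin/env bash
# Render a concrete context file from a template whose paths use a single
# ${LOOPY_ROOT} placeholder.
export LOOPY_ROOT="${LOOPY_ROOT:-$PWD}"

cat > custom-context.json.tmpl <<'EOF'
{
  "role": "${LOOPY_ROOT}/src/schema/role.yaml",
  "loopy_root_path": "${LOOPY_ROOT}"
}
EOF

# Substitute only the LOOPY_ROOT placeholder, leaving other $-strings intact.
sed "s|\${LOOPY_ROOT}|${LOOPY_ROOT}|g" custom-context.json.tmpl > custom-context.json
cat custom-context.json
```

Running this in the test harness before the suite starts gives every contributor and CI runner a context file rooted at their own checkout.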
```json
"loopy_result_dir": "/tmp/ms_cli/20250814_1325",
"output_dir": "/tmp/ms_cli/20250814_1325/output",
"artifacts_dir": "/tmp/ms_cli/20250814_1325/artifacts",
"report_file": "/tmp/ms_cli/20250814_1325/report",
"role_list": [
```
💡 Verification agent
❓ Verification inconclusive
Avoid hard-coded time-stamped output directories; compute at runtime
Hard-coding loopy_result_dir/output_dir/artifacts_dir/report_file to a specific run-id (20250814_1325) is brittle and will cause collisions and confusion over time. These should be computed by the runner at startup (e.g., loopy_result_dir = "${OUTPUT_ROOT}/$(date +%Y%m%d_%H%M%S)").
If your tooling already computes these, remove them from this static file to avoid conflicting sources of truth.
Avoid hard-coded, time-stamped output directories
Short: tests/custom-context.json hard-codes a run-id (20250814_1325) which will collide across runs — compute these paths at runtime or remove them from the static fixture.
- Location:
- tests/custom-context.json — lines ~204–208
Current snippet:

```json
"loopy_result_dir": "/tmp/ms_cli/20250814_1325",
"output_dir": "/tmp/ms_cli/20250814_1325/output",
"artifacts_dir": "/tmp/ms_cli/20250814_1325/artifacts",
"report_file": "/tmp/ms_cli/20250814_1325/report",
```

Suggested replacement (use a runtime-generated run id / placeholder):

```json
"loopy_result_dir": "${OUTPUT_ROOT}/${LOOPY_RUN_ID}",
"output_dir": "${OUTPUT_ROOT}/${LOOPY_RUN_ID}/output",
"artifacts_dir": "${OUTPUT_ROOT}/${LOOPY_RUN_ID}/artifacts",
"report_file": "${OUTPUT_ROOT}/${LOOPY_RUN_ID}/report",
```
Compute LOOPY_RUN_ID at startup (e.g., $(date +%Y%m%d_%H%M%S) or a UUID) in the runner/test setup — or remove these keys from the static test fixture if the runner already provides them.
🤖 Prompt for AI Agents
In tests/custom-context.json around lines 204 to 208 the fixture contains
hard-coded, time-stamped paths (20250814_1325) which will collide across runs;
replace those static paths with placeholders or remove them from the static
fixture and compute them at runtime — e.g., change the keys to reference
environment/runtime variables like OUTPUT_ROOT and LOOPY_RUN_ID (or remove keys
if runner supplies them) and update the test/runner startup to generate
LOOPY_RUN_ID (timestamp or UUID) and expand those placeholders before use.
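A minimal startup sketch for computing the run id and directory layout (the `OUTPUT_ROOT` and `LOOPY_RUN_ID` names are the illustrative ones from the review, not existing variables in the repo):

```shell
#!/usr/bin/env bash
# Derive a unique run id at startup instead of baking a timestamp into the
# fixture. Appending the PID guards against two runs starting in the same second.
OUTPUT_ROOT="${OUTPUT_ROOT:-/tmp/ms_cli}"
LOOPY_RUN_ID="$(date +%Y%m%d_%H%M%S)_$$"

loopy_result_dir="${OUTPUT_ROOT}/${LOOPY_RUN_ID}"
mkdir -p "${loopy_result_dir}/output" "${loopy_result_dir}/artifacts"
report_file="${loopy_result_dir}/report"
: > "${report_file}"   # create an empty report file

echo "results in ${loopy_result_dir}"
```

The runner then exports these values (or writes them into the generated context file) so no static fixture ever carries a concrete timestamp.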
```diff
     {
       "name": "loopy-test-report-playbook",
       "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/loopy-test-report-playbook"
     },
     {
-      "name": "rhoai-stable-kserve-raw-simple-sanity-test-on-existing-cluster",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/rhoai/stable/kserve-raw-simple-sanity-test-on-existing-cluster"
+      "name": "odh-fast-install-kserve-raw-on-existing-cluster",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/odh/fast/install-kserve-raw-on-existing-cluster"
     },
     {
-      "name": "rhoai-stable-install-kserve-serverless-on-existing-cluster",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/rhoai/stable/install-kserve-serverless-on-existing-cluster"
+      "name": "odh-fast-install-kserve-serverless-on-existing-cluster",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/odh/fast/install-kserve-serverless-on-existing-cluster"
     },
     {
-      "name": "rhoai-stable-install-kserve-raw-on-existing-cluster",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/rhoai/stable/install-kserve-raw-on-existing-cluster"
+      "name": "odh-fast-kserve-raw-simple-sanity-test-caikit-on-existing-cluster",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/odh/fast/kserve-raw-simple-sanity-test-caikit-on-existing-cluster"
     },
     {
-      "name": "private-endpoint-poc",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/private-endpoint-poc"
+      "name": "odh-fast-kserve-raw-simple-sanity-test-on-existing-cluster",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/odh/fast/kserve-raw-simple-sanity-test-on-existing-cluster"
     },
     {
       "name": "odh-fast-kserve-serverless-simple-sanity-test-on-existing-cluster",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/odh/fast/kserve-serverless-simple-sanity-test-on-existing-cluster"
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/odh/fast/kserve-serverless-simple-sanity-test-on-existing-cluster"
     },
     {
-      "name": "odh-fast-kserve-raw-simple-sanity-test-on-existing-cluster",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/odh/fast/kserve-raw-simple-sanity-test-on-existing-cluster"
+      "name": "kserve-execute-e2e-test",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/odh/kserve-e2e-test"
     },
     {
-      "name": "odh-fast-kserve-raw-simple-sanity-test-caikit-on-existing-cluster",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/odh/fast/kserve-raw-simple-sanity-test-caikit-on-existing-cluster"
+      "name": "private-endpoint-poc",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/private-endpoint-poc"
     },
     {
-      "name": "odh-fast-install-kserve-serverless-on-existing-cluster",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/odh/fast/install-kserve-serverless-on-existing-cluster"
+      "name": "rhoai-stable-install-kserve-raw-on-existing-cluster",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/rhoai/stable/install-kserve-raw-on-existing-cluster"
     },
     {
-      "name": "odh-fast-install-kserve-raw-on-existing-cluster",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/odh/fast/install-kserve-raw-on-existing-cluster"
+      "name": "rhoai-stable-install-kserve-serverless-on-existing-cluster",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/rhoai/stable/install-kserve-serverless-on-existing-cluster"
     },
     {
-      "name": "loopy-test-report-playbook",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/loopy-test-report-playbook"
+      "name": "rhoai-stable-kserve-raw-simple-sanity-test-on-existing-cluster",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/rhoai/stable/kserve-raw-simple-sanity-test-on-existing-cluster"
     },
     {
-      "name": "llm-d-on-kind",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/llm-d-on-kind"
+      "name": "rhoai-stable-kserve-serverless-simple-sanity-test-on-existing-cluster",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/rhoai/stable/kserve-serverless-simple-sanity-test-on-existing-cluster"
     },
     {
-      "name": "kserve-full-sanity-test-on-new-fips-cluster",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/kserve-full-sanity-test-on-new-fips-cluster"
+      "name": "setup-rhoai-for-global-cert",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/setup-rhoai-for-global-cert"
     },
     {
-      "name": "demo-rhoai-for-global-cert",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/demo-rhoai-for-global-cert"
+      "name": "template",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/template"
     },
     {
-      "name": "cleanup-all-rhoai-kserve",
-      "path": "/tmp/a/loopy/unofficial_components/playbooks/cleanup-all-rhoai-kserve"
+      "name": "vllm-multi-node-odh-poc",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/vllm-multi-node-odh-poc"
     },
     {
-      "name": "loopy-unit-tests-on-cluster-install-kserve-raw-on-existing-cluster",
-      "path": "/tmp/a/loopy/tests/test-data/playbooks/loopy/unit-tests/on-cluster/install-kserve-raw-on-existing-cluster"
+      "name": "llm-d-on-kind",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/llm-d-on-kind"
     },
     {
-      "name": "loopy-unit-tests-non-cluster-units",
-      "path": "/tmp/a/loopy/tests/test-data/playbooks/loopy/unit-tests/non-cluster-units"
+      "name": "setup-dev-env-kind-upstream-kserve",
+      "path": "/home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/setup-dev-env-kind-upstream-kserve"
     },
     {
-      "name": "loopy-unit-tests-non-cluster-unit-role",
-      "path": "/tmp/a/loopy/tests/test-data/playbooks/loopy/unit-tests/non-cluster-unit-role"
+      "name": "loopy-test-report-playbook",
+      "path": "/home/jooho/temp/test-debug/loopy/tests/test-data/playbooks/loopy-test-report-playbook"
     },
```
💡 Verification agent
🧩 Analysis chain
Duplicate playbook name detected: "loopy-test-report-playbook"
This name appears twice with different paths. If your loader keys by name, this will cause collisions or last-one-wins bugs.
Run this script to detect duplicates across role_list, unit_list, and playbook_list:
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
FILE="tests/custom-context.json"

echo "Duplicate names in role_list:"
jq -r '.config.role_list[].name' "$FILE" | sort | uniq -d || true
echo
echo "Duplicate names in unit_list:"
jq -r '.config.unit_list[].name' "$FILE" | sort | uniq -d || true
echo
echo "Duplicate names in playbook_list:"
jq -r '.config.playbook_list[].name' "$FILE" | sort | uniq -d || true
echo
echo "Count of absolute user-specific paths (/home/):"
rg -n "/home/" "$FILE" || true
```

Length of output: 11969
Duplicate playbook name detected — action required
Confirmed: tests/custom-context.json contains two entries named "loopy-test-report-playbook"; a loader that keys by name will collide (last-one-wins).
Locations:
- tests/custom-context.json — line 468: path = /home/jooho/temp/test-debug/loopy/unofficial_components/playbooks/loopy-test-report-playbook
- tests/custom-context.json — line 536: path = /home/jooho/temp/test-debug/loopy/tests/test-data/playbooks/loopy-test-report-playbook
Fix: rename one of the playbooks to a unique name or make the loader use a truly unique key (e.g., full path or explicit id).
🤖 Prompt for AI Agents
In tests/custom-context.json around lines 467 to 537 there are two entries with
the same "name": "loopy-test-report-playbook" (one at ~line 468 and one at ~line
536) which will collide when keyed by name; either rename one of the "name"
fields to a unique value (e.g., "loopy-test-report-playbook-tests" or similar)
so names are unique, or update the loader to key entries by a truly unique
identifier such as the full "path" or an explicit "id" field instead of "name".
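If the loader is Python, keying entries by the unique path rather than the colliding name can be sketched as follows (illustrative only; the real loader's structure is not shown in the fixture):

```python
def index_playbooks(config: dict) -> dict:
    """Index playbook entries by their full path, which stays unique
    even when two playbooks share the same display name."""
    by_path: dict = {}
    for entry in config.get("playbook_list", []):
        key = entry["path"]
        if key in by_path:
            raise ValueError(f"duplicate playbook path: {key}")
        by_path[key] = entry
    return by_path


# The colliding name from the review; keying by name would silently
# keep only the last entry, while keying by path preserves both.
config = {
    "playbook_list": [
        {"name": "loopy-test-report-playbook", "path": "/a/playbooks/report"},
        {"name": "loopy-test-report-playbook", "path": "/b/playbooks/report"},
    ]
}
index = index_playbooks(config)
print(len(index))
```

Alternatively, renaming one fixture entry keeps a name-keyed loader unchanged; either fix removes the last-one-wins hazard.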
Signed-off-by: Jooho Lee <jlee@redhat.com>
Actionable comments posted: 0
🔭 Outside diff range comments (3)
tests/e2e/commons/cluster-tests/test_py_utils_e2e.py (2)
52-55: Add -n to all delete commands to avoid resource leaks and flakiness

The create paths now explicitly target the default namespace, but the cleanup delete paths do not. If the current kube-context doesn't default to "default", cleanup might fail or target the wrong namespace. Same issue exists for the oc delete in test_retry_with_oc.
Apply the following diff to scope deletes to the intended namespace:

```diff
-            ["kubectl", "delete", "pod", pod_name, "--force", "--grace-period=0"],
+            ["kubectl", "delete", "pod", pod_name, "-n", namespace, "--force", "--grace-period=0"],
```

Repeat the change for all the other cleanup calls in this file:

```diff
-    subprocess.run(
-        ["kubectl", "delete", "pod", pod_name, "--force", "--grace-period=0"],
-        check=True,
-    )
+    subprocess.run(
+        ["kubectl", "delete", "pod", pod_name, "-n", namespace, "--force", "--grace-period=0"],
+        check=True,
+    )
```

And for oc delete in test_retry_with_oc:

```diff
-    subprocess.run(
-        ["oc", "delete", "pod", pod_name, "--force", "--grace-period=0"],
-        check=True,
-        capture_output=True,
-    )
+    subprocess.run(
+        ["oc", "delete", "pod", pod_name, "-n", namespace, "--force", "--grace-period=0"],
+        check=True,
+        capture_output=True,
+    )
```

Also applies to: 89-91, 125-127, 161-163, 205-207, 301-304
28-41: Make pod creation semantics explicit: add --restart=Never (and also scope oc run to the namespace)

kubectl/oc run semantics differ by version; without --restart=Never, you may create a Deployment instead of a Pod, breaking pod-name-based waits/deletes. Also, oc run currently doesn't set a namespace explicitly.
Apply:

```diff
     subprocess.run(
         [
             "kubectl",
             "run",
             pod_name,
             "-n",
             namespace,
+            "--restart=Never",
             "--image=registry.access.redhat.com/rhel7/rhel-tools",
             "--",
             "tail",
             "-f",
             "/dev/null",
         ],
         check=True,
     )
```

Do the same for the other kubectl run occurrences in this file.
For oc run in test_retry_with_oc, also add --restart=Never and -n:

```diff
     subprocess.run(
         [
             "oc",
             "run",
             pod_name,
+            "-n",
+            namespace,
+            "--restart=Never",
             "--image=registry.access.redhat.com/rhel7/rhel-tools",
             "--",
             "tail",
             "-f",
             "/dev/null",
         ],
         check=True,
         capture_output=True,
     )
```

Also applies to: 66-80, 102-116, 138-152, 182-196, 269-281
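The intent of both diffs can be captured in one small helper — a sketch only (the helper name is invented for illustration; the actual tests call subprocess.run with inline argv lists):

```python
def kubectl_run_pod_cmd(tool, pod_name, namespace, image):
    """Build a kubectl/oc 'run' argv that always creates a bare Pod
    in an explicit namespace, independent of the current kube-context."""
    return [
        tool, "run", pod_name,
        "-n", namespace,            # never rely on the context namespace
        "--restart=Never",          # a single Pod, not a Deployment, on every version
        f"--image={image}",
        "--", "tail", "-f", "/dev/null",
    ]

cmd = kubectl_run_pod_cmd("oc", "utils-test", "default",
                          "registry.access.redhat.com/rhel7/rhel-tools")
print(cmd[:5])  # ['oc', 'run', 'utils-test', '-n', 'default']
```

Centralizing the argv construction also makes it impossible for one test to forget either flag.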
tests/e2e/roles/conftest.py (1)
31-41: Fix inverted/confusing ApiException handling (409 path) to avoid masking real errors

Two places invert the 409 test and then return, which suppresses non-409 errors and may short-circuit OperatorGroup creation after a benign race.
Apply:

```diff
     except client.exceptions.ApiException as create_e:
-        if create_e.status != 409:  # Ignore 409, raise other errors
-            print(f"Namespace {namespace} already exists (409)")
-            return
+        if create_e.status == 409:  # Already exists due to race
+            print(f"Namespace {namespace} already exists (409)")
+        else:
+            raise
```

And:

```diff
     except client.exceptions.ApiException as create_e:
-        if create_e.status != 409:  # Ignore 409, raise other errors
-            print(
-                f"OperatorGroup {operatorgroup_name} already exists in namespace {namespace} (409)"
-            )
+        if create_e.status == 409:
+            print(
+                f"OperatorGroup {operatorgroup_name} already exists in namespace {namespace} (409)"
+            )
+        else:
+            raise
```

Also applies to: 72-77
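The corrected control flow, shown with a stand-in exception class (the real code uses the kubernetes client's ApiException; this sketch only demonstrates the 409-vs-everything-else branching):

```python
class ApiException(Exception):
    """Stand-in for kubernetes client.exceptions.ApiException."""
    def __init__(self, status):
        super().__init__(status)
        self.status = status

def create_with_409_tolerance(create_fn, what):
    """Swallow 409 (already exists, a benign race); re-raise everything else."""
    try:
        create_fn()
    except ApiException as e:
        if e.status == 409:
            print(f"{what} already exists (409)")
        else:
            raise

def raise_409(): raise ApiException(409)
def raise_403(): raise ApiException(403)

# Benign race: the 409 is logged and swallowed.
create_with_409_tolerance(raise_409, "Namespace openshift-operators")

# A real error (here 403) still surfaces to the caller.
try:
    create_with_409_tolerance(raise_403, "Namespace openshift-operators")
except ApiException as e:
    print(f"re-raised {e.status}")  # re-raised 403
```

The inverted version in the PR would print the "already exists" message for 403/500 and silently continue, which is exactly the masking this comment flags.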
🧹 Nitpick comments (3)
tests/e2e/commons/cluster-tests/test_py_utils_e2e.py (1)
52-55: Optional: ignore NotFound on cleanup to reduce test flakiness

Deleting a pod that's already gone will fail these tests because check=True treats NotFound as an error. Consider ignoring not found.
For example:
```diff
-            ["kubectl", "delete", "pod", pod_name, "-n", namespace, "--force", "--grace-period=0"],
+            ["kubectl", "delete", "pod", pod_name, "-n", namespace, "--force", "--grace-period=0", "--ignore-not-found=true"],
```

Also applies to: 89-91, 125-127, 161-163, 205-207
unofficial_components/roles/poc-llm-d-install-with-manifests/config.yaml (1)
15-23: Typos and minor grammar fixes in role description and input_env docs

Small cleanups improve readability and reduce ambiguity.

```diff
-    This role is used to install Prefill/Decode type deployment ofLLM-D using llm-d-simulator.
+    This role is used to install a Prefill/Decode type deployment of LLM-D using the llm-d-simulator.
-    It will create all required resource by manfiests and verify the deployment by sending test request to the LLM-D simulator and wait for the request to be processed.
+    It will create all required resources via manifests and verify the deployment by sending a test request to the LLM-D simulator and waiting for it to be processed.
-    pre-requirements:
+    Prerequisites:
-    - Cluster should have GPUs If you don't want to use llm-d-simulator.
+    - The cluster should have GPUs if you don't want to use llm-d-simulator.
-      Set this, if you want to use specific GIE implementaiton (ex "istio")
+      Set this if you want to use a specific GIE implementation (e.g., "istio")
-      Set this, if you want to use specific ISTIO image tag for helm install(ex "1.26")
+      Set this if you want to use a specific ISTIO image tag for Helm install (e.g., "1.27" or "1.27-alpha.<sha>")
```

Also applies to: 53-55, 58-59
tests/e2e/roles/conftest.py (1)
85-99: Optional: avoid hard-coding metadata.namespace in the YAML to keep the helper generic

The operatorgroup_yaml embeds namespace: openshift-operators. If you ever reuse ensure_namespace_and_operatorgroup with a different namespace, this may drift. Better to inject the namespace just before creation.
For example:
```python
operatorgroup = yaml.safe_load(operatorgroup_yaml)
operatorgroup.setdefault("metadata", {})["namespace"] = namespace
custom_api.create_namespaced_custom_object(
    group="operators.coreos.com",
    version="v1",
    namespace=namespace,
    plural="operatorgroups",
    body=operatorgroup,
)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (4)
- tests/e2e/commons/cluster-tests/test_py_utils_e2e.py (5 hunks)
- tests/e2e/roles/cluster-tests/nfs-provisioner-deploy/test_e2e_nfs_provisioner.py (1 hunks)
- tests/e2e/roles/conftest.py (1 hunks)
- unofficial_components/roles/poc-llm-d-install-with-manifests/config.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/e2e/roles/cluster-tests/nfs-provisioner-deploy/test_e2e_nfs_provisioner.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: kind-cluster-lifecycle-test (3.13)
- GitHub Check: cluster-test (3.12)
- GitHub Check: non-cluster-test (3.12)
🔇 Additional comments (3)
tests/e2e/commons/cluster-tests/test_py_utils_e2e.py (1)
32-33: Good hardening: explicitly scoping kubectl run to the namespace

Adding -n to kubectl run makes the tests deterministic regardless of the current kube-context.
Also applies to: 71-73, 107-109, 143-145, 187-189
unofficial_components/roles/poc-llm-d-install-with-manifests/config.yaml (1)
56-60: Default ISTIO_HUB_VERSION bump looks fine; update example and description to stay consistent

You updated the default to 1.27-alpha..., but:
- The example under “To run it” still shows 1.26-alpha...
- The description still uses 1.26 as the example.
Please align both to avoid confusion.
Example diffs:
Update the example:
```diff
-  -p ISTIO_HUB_VERSION=1.26-alpha.9befed2f1439d883120f8de70fd70d84ca0ebc3d \
+  -p ISTIO_HUB_VERSION=1.27-alpha.0551127f00634403cddd4634567e65a8ecc499a7 \
```

Refine the description:

```diff
-      Set this, if you want to use specific ISTIO image tag for helm install(ex "1.26")
+      Set this if you want to use a specific ISTIO image tag for Helm install (e.g., "1.27" or "1.27-alpha.<sha>")
```

Also verify the tag exists in gcr.io/istio-testing for all images you consume (pilot, proxyv2, etc.) to prevent pipeline breaks.
tests/e2e/roles/conftest.py (1)
88-96: Rename to global-operators aligns with default OpenShift naming

global-operators is the standard OperatorGroup name in openshift-operators. This reduces conflicts with pre-existing clusters.
Signed-off-by: Jooho Lee <jlee@redhat.com>
Actionable comments posted: 1
🔭 Outside diff range comments (1)
src/dev/scripts/debug-pod.sh (1)
31-38: Cleanup ignores NAMESPACE

When a namespace is provided, kubectl delete/get in cleanup should also target that namespace; otherwise cleanup may miss the pod or loop incorrectly.
Apply this diff to honor the namespace during cleanup:

```diff
 cleanup() {
   local pod_name=$1
   echo "Cleaning up pod ${pod_name}..."
-  kubectl delete pod ${pod_name} 2>/dev/null
+  local ns_flags=()
+  if [ -n "${NAMESPACE}" ]; then ns_flags=(-n "${NAMESPACE}"); fi
+  kubectl delete pod "${pod_name}" "${ns_flags[@]}" 2>/dev/null
   # Wait for pod to be fully deleted
-  while kubectl get pod ${pod_name} 2>/dev/null; do
-    sleep 1
-  done
+  while kubectl get pod "${pod_name}" "${ns_flags[@]}" >/dev/null 2>&1; do
+    sleep 1
+  done
 }
```
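The conditional-flag pattern in that diff generalizes beyond bash — a Python analogue of the `ns_flags` array, shown here purely for illustration:

```python
def ns_flags(namespace):
    """Return kubectl namespace flags only when a namespace was supplied,
    mirroring the bash ns_flags=() / (-n "$NAMESPACE") pattern."""
    return ["-n", namespace] if namespace else []

def delete_pod_cmd(pod_name, namespace=""):
    """Build a kubectl delete argv that honors an optional namespace."""
    return ["kubectl", "delete", "pod", pod_name, *ns_flags(namespace)]

print(delete_pod_cmd("debug-pod"))         # ['kubectl', 'delete', 'pod', 'debug-pod']
print(delete_pod_cmd("debug-pod", "dev"))  # ['kubectl', 'delete', 'pod', 'debug-pod', '-n', 'dev']
```

Building the flag list once and splatting it into every command keeps create, wait, and cleanup consistently scoped.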
🧹 Nitpick comments (2)
src/dev/scripts/debug-pod.sh (1)
9-9: Clarify the NAMESPACE default in docs

Consider clarifying that when empty, kubectl defaults to the current context namespace (commonly "default"), rather than implying a literal empty namespace.
tests/e2e/dev/cluster-tests/debug-pod/test_debug_pod.py (1)
95-95: Ensure cleanup runs even on assertion failures

Consider wrapping the test body in try/finally to guarantee pod deletion even if assertions fail, preventing resource leakage in CI.
Example pattern:
```python
try:
    # test body: run loopy, waits, checks, etc.
    ...
finally:
    subprocess.run(
        ["kubectl", "delete", "pod", pod_name, "-n", namespace, "--force", "--grace-period=0"],
        capture_output=True,
        text=True,
    )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (2)
- src/dev/scripts/debug-pod.sh (4 hunks)
- tests/e2e/dev/cluster-tests/debug-pod/test_debug_pod.py (5 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: non-cluster-test (3.12)
- GitHub Check: cluster-test (3.12)
- GitHub Check: kind-cluster-lifecycle-test (3.13)
🔇 Additional comments (6)
src/dev/scripts/debug-pod.sh (1)
49-53: Namespace support in kubectl run looks good

Appending -n when NAMESPACE is provided, and leaving it off otherwise, matches expected behavior. The main invocation remains compatible.
Also applies to: 72-72
tests/e2e/dev/cluster-tests/debug-pod/test_debug_pod.py (5)
4-4: LGTM: uuid import

Importing uuid to generate unique pod names is appropriate and avoids name collisions in parallel runs.

50-52: LGTM: dynamic pod name and explicit namespace

Using a short UUID segment and setting namespace explicitly improves test isolation and determinism.

60-63: LGTM: passing NAME and NAMESPACE to the runner

Supplying both parameters via -p aligns with the script's new support and keeps the test self-contained.

73-73: LGTM: readiness wait includes namespace

Updating wait_for_pod_name_ready to pass namespace prevents false negatives caused by checking the wrong namespace.

81-84: Verify jsonpath quoting in kubectl get

The -o argument immediately following this block uses "jsonpath='{...}'". When using subprocess.run with an argument list (no shell), the inner quotes become literal characters and may cause kubectl to treat it as an invalid format. Prefer jsonpath={...} without embedded quotes.
Proposed call (only the -o argument shown):
```python
"-o", f"jsonpath={{.metadata.annotations}}"
```

If you want a more targeted check for the annotation key:

```python
"-o", f"jsonpath={{.metadata.annotations['sidecar.istio.io/inject']}}"
```
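Why the embedded quotes break: with an argv list, subprocess.run bypasses the shell, so quote characters travel to the child process verbatim. A quick demonstration, using a Python child in place of kubectl:

```python
import subprocess
import sys

# The child just echoes its first argument, standing in for kubectl's -o value.
child = [sys.executable, "-c", "import sys; print(sys.argv[1])"]

bad = subprocess.run(child + ["jsonpath='{.metadata.annotations}'"],
                     capture_output=True, text=True).stdout.strip()
good = subprocess.run(child + ["jsonpath={.metadata.annotations}"],
                      capture_output=True, text=True).stdout.strip()

print(bad)   # jsonpath='{.metadata.annotations}'  <- quotes reach the program
print(good)  # jsonpath={.metadata.annotations}    <- what kubectl expects
```

Only when shell=True (or an actual shell command line) would the quotes be stripped before the program sees them.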
```bash
ALLOWED_PARAMS=("NAME" "WITH_ISTIO" "NAMESPACE")

# Default values
WITH_ISTIO=${WITH_ISTIO:-false}
NAME=${NAME:-debug-pod}
NAMESPACE=${NAMESPACE:-}
```
CLEAN param is documented but not allowed/defaulted
CLEAN is referenced in the header and used later but is missing from ALLOWED_PARAMS, and it doesn’t have a default. This can cause validate_script_params to fail or lead to unexpected behavior when CLEAN is unset.
Apply this diff to allow and default the parameter:
```diff
-ALLOWED_PARAMS=("NAME" "WITH_ISTIO" "NAMESPACE")
+ALLOWED_PARAMS=("NAME" "WITH_ISTIO" "NAMESPACE" "CLEAN")

 # Default values
 WITH_ISTIO=${WITH_ISTIO:-false}
 NAME=${NAME:-debug-pod}
 NAMESPACE=${NAMESPACE:-}
+CLEAN=${CLEAN:-false}
```

📝 Committable suggestion
+CLEAN=${CLEAN:-false}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
ALLOWED_PARAMS=("NAME" "WITH_ISTIO" "NAMESPACE" "CLEAN")

# Default values
WITH_ISTIO=${WITH_ISTIO:-false}
NAME=${NAME:-debug-pod}
NAMESPACE=${NAMESPACE:-}
CLEAN=${CLEAN:-false}
```
🤖 Prompt for AI Agents
In src/dev/scripts/debug-pod.sh around lines 18 to 24, the CLEAN parameter is
referenced elsewhere but not listed in ALLOWED_PARAMS and has no default; add
"CLEAN" to the ALLOWED_PARAMS array and set a default value for CLEAN (e.g.,
CLEAN=${CLEAN:-false}) alongside the other default variable assignments so
validate_script_params and subsequent logic behave correctly.
/lgtm
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores