CONSOLE-4943: Add Machine Set, MachineConfigPool, and VMs columns to Nodes list view #15909
base: main
Conversation
@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@jeff-phillips-18: GitHub didn't allow me to request PR reviews from the following users: kybaker. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Walkthrough

Adds i18n keys and integrates VirtualMachineInstance data into the Nodes page: introduces a generic resource-watch hook, enriches node rows with machine owner, MachineConfigPool, and VM count, wires up filters and a conditional VM column, and extends NodeMetrics with `runningVms`. No behavior outside UI/data wiring is modified.
Estimated code review effort: 🎯 4 (Complex), ⏱️ ~50 minutes. Pre-merge checks: ✅ 3 passed.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 294-306: In NodesPage.tsx update the sort key for the
MachineConfigPool column: locate the column object created when mcpEnabled is
true (the entry using nodeColumnInfo.machineConfigPool.id) and fix the typo in
the sort string from "machineConfigPool.metadta.name" to
"machineConfigPool.metadata.name" so sorting works correctly for that column.
- Around line 204-215: The VM column sort key is wrong: in the node column
definition inside NodesPage (the object using nodeColumnInfo.vms.id) change the
sort property from 'vmCount' to 'vmsCount' to match the NodeRowItem field name
(vmsCount) so column sorting works correctly; update the sort string where the
VMs column is created in NodesPage to 'vmsCount'.
- Around line 798-802: The filter block in NodesPage.tsx uses
filters.machineConfigPools and directly accesses
resource.machineConfigPool.metadata.name but NodeRowItem marks
resource.machineConfigPool as optional, which can cause a TypeError; update the
condition to first guard that resource.machineConfigPool (and its metadata)
exists before checking .metadata.name (e.g., use optional chaining or an
explicit null check on resource.machineConfigPool) so resources without a
MachineConfigPool or CSR resources do not throw when applying
filters.machineConfigPools.
- Around line 971-979: The code sets nodeVMs to undefined when
isKubevirtPluginActive is false, then unconditionally accesses nodeVMs.length
causing a TypeError; update the return object in NodesPage.tsx so that vmsCount
is derived safely (use a conditional or nullish check on nodeVMs, e.g. vmsCount
= nodeVMs ? nodeVMs.length : 0 or vmsCount = nodeVMs?.length ?? 0) referencing
the nodeVMs variable and the vmsCount property in the returned object to avoid
calling .length on undefined.
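The four fixes above can be sketched together in plain TypeScript. This is a hypothetical simplification, not the actual component code: `NodeRowItem` here is a stand-in for the real type in NodesPage.tsx, and the helper names are invented for illustration.

```typescript
// Simplified stand-in for the NodesPage.tsx row type (an assumption for illustration).
type NodeRowItem = {
  machineConfigPool?: { metadata: { name: string } };
  vmsCount?: number;
};

// Corrected sort keys: 'metadata' (not 'metadta') and 'vmsCount' (not 'vmCount').
const mcpSortKey = 'machineConfigPool.metadata.name';
const vmsSortKey = 'vmsCount';

// Guarded MCP filter: resources without an MCP (e.g. CSRs) are filtered out
// instead of throwing a TypeError on .metadata of undefined.
const matchesMcpFilter = (resource: NodeRowItem, selected: string[]): boolean =>
  selected.length === 0 ||
  (!!resource.machineConfigPool &&
    selected.includes(resource.machineConfigPool.metadata.name));

// Safe VM count: nodeVMs is undefined when the KubeVirt plugin is inactive.
const toVmsCount = (nodeVMs?: unknown[]): number => nodeVMs?.length ?? 0;
```

The guard and the nullish fallback are the load-bearing parts; the sort-key constants just record the corrected strings.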
🧹 Nitpick comments (4)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (4)
985-993: Consider propagating additional resource load errors.

The `loadError` computation omits `machinesLoadError`, `machineSetsLoadError`, `controlPlaneMachineSetsLoadError`, and `machineConfigPoolsLoadError`. While `ownerEnabled`/`mcpEnabled` disable columns on error, users won't see why data is missing. For better UX, consider logging these errors or showing a degraded-state indicator when supplementary resources fail to load.
💡 Example enhancement
```diff
  const loadError = nodesLoadError || csrsLoadError || (isKubevirtPluginActive && vmisLoadError);
+
+ // Log supplementary resource errors for debugging
+ useEffect(() => {
+   if (machinesLoadError) console.warn('Failed to load Machines:', machinesLoadError);
+   if (machineSetsLoadError) console.warn('Failed to load MachineSets:', machineSetsLoadError);
+   if (controlPlaneMachineSetsLoadError) console.warn('Failed to load ControlPlaneMachineSets:', controlPlaneMachineSetsLoadError);
+   if (machineConfigPoolsLoadError) console.warn('Failed to load MachineConfigPools:', machineConfigPoolsLoadError);
+ }, [machinesLoadError, machineSetsLoadError, controlPlaneMachineSetsLoadError, machineConfigPoolsLoadError]);
```
975-978: Consider filtering ownerReferences by expected kind.

Taking `ownerReferences?.[0]` assumes the first reference is always the MachineSet or ControlPlaneMachineSet. If a Machine has multiple owners, this could display the wrong resource. Consider filtering by kind:

♻️ Proposed improvement
```diff
  return {
    ...node,
-   machineOwner: owner?.metadata.ownerReferences?.[0],
+   machineOwner: owner?.metadata.ownerReferences?.find(
+     (ref) => ref.kind === 'MachineSet' || ref.kind === 'ControlPlaneMachineSet',
+   ),
    machineConfigPool,
    vmsCount: nodeVMs?.length,
  };
```
664-677: Add null safety for optional array props.

`machineSets` and `controlPlaneMachineSets` are typed as optional (`?`) in `NodeListProps` (lines 571-572), but `.map()` is called directly without null checks. While the current call site always provides arrays from `useK8sWatchResource`, the type contract should be honored for maintainability.

♻️ Proposed fix
```diff
  const machineSetFilterOptions = useMemo<DataViewFilterOption[]>(
    () =>
      [
-       ...machineSets.map((machineSet) => ({
+       ...(machineSets ?? []).map((machineSet) => ({
          value: machineSet.metadata.name,
          label: machineSet.metadata.name,
        })),
-       ...controlPlaneMachineSets.map((machineSet) => ({
+       ...(controlPlaneMachineSets ?? []).map((machineSet) => ({
          value: machineSet.metadata.name,
          label: machineSet.metadata.name,
        })),
      ].sort((a, b) => a.label.localeCompare(b.label)),
    [machineSets, controlPlaneMachineSets],
  );
```
679-686: Apply the same null safety to `machineConfigPools`.

Similar to the previous comment, `machineConfigPools` is optional but `.map()` is called directly.

♻️ Proposed fix
```diff
  const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
    () =>
-     machineConfigPools.map((machineConfigPool) => ({
+     (machineConfigPools ?? []).map((machineConfigPool) => ({
        value: machineConfigPool.metadata.name,
        label: machineConfigPool.metadata.name,
      })),
    [machineConfigPools],
  );
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (7)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (3)
- K8sGroupVersionKind (106-106), NodeKind (822-847), OwnerReference (29-36)

frontend/packages/console-shared/src/sorts/nodes.ts (1)
- sortWithCSRResource (38-48)

frontend/public/module/k8s/types.ts (4)
- MachineSetKind (730-744), ControlPlaneMachineSetKind (746-765), MachineConfigPoolKind (842-850), MachineKind (711-728)

frontend/packages/console-app/src/components/data-view/types.ts (1)
- ResourceFilters (5-8)

frontend/packages/console-dynamic-plugin-sdk/src/utils/k8s/hooks/useK8sWatchResource.ts (1)
- useK8sWatchResource (28-69)

frontend/public/models/index.ts (3)
- MachineModel (894-909), MachineSetModel (911-926), ControlPlaneMachineSetModel (928-943)

frontend/public/module/k8s/label-selector.js (1)
- LabelSelector (16-278)
🔇 Additional comments (3)
frontend/packages/console-app/locales/en/console-app.json (1)
409-423: LGTM!

New localization keys for "Machine set", "VMs", "MachineConfigPool", and their corresponding filter placeholders are correctly added. These align with the new columns and filters introduced in NodesPage.tsx.

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)

101-106: Acknowledged temporary VMI integration.

The TODO comment correctly identifies this as a workaround until the kubevirt plugin can contribute its own column. The GVK definition is correct for `kubevirt.io/v1/VirtualMachineInstance`. Consider tracking this technical debt in an issue to ensure it gets addressed when the plugin extension mechanism supports column contributions.
466-477: `getGroupVersionKindForResource` is called with `OwnerReference`, but the function signature declares `K8sResourceCommon`.

`OwnerReference` (with required `name`, `uid`, `apiVersion`, and `kind`) doesn't structurally match `K8sResourceCommon` (which has optional `apiVersion`, `kind`, and `metadata`). While the function implementation only accesses the `apiVersion` and `kind` fields, so it works at runtime, this violates the type contract and may trigger strict TypeScript checking. Consider either:

- Widening the function signature to accept types with `apiVersion` and `kind` properties, or
- Casting: `getGroupVersionKindForResource(machineOwner as K8sResourceCommon)`
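A widened signature could look like the following sketch. This illustrates the first option only; the group/version parsing mirrors the usual `apiVersion` split and is an assumption for illustration, not the SDK's actual implementation (the SDK may, for instance, represent the core group differently than the `'core'` string used here).

```typescript
// Accept any object carrying apiVersion and kind, which is all the helper reads;
// this admits both K8sResourceCommon-like resources and OwnerReference.
type HasApiVersionKind = { apiVersion: string; kind: string };

const getGroupVersionKindFor = ({ apiVersion, kind }: HasApiVersionKind) => {
  // "group/version" for named API groups, bare "version" for the core group.
  const [group, version] = apiVersion.includes('/')
    ? apiVersion.split('/')
    : ['core', apiVersion];
  return { group, version, kind };
};
```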
Force-pushed e74ec0b to a620b03.
@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid Jira issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Actionable comments posted: 2
🧹 Nitpick comments (2)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)
664-686: Add defensive fallbacks for optional array props to prevent runtime errors.

The `machineSets`, `controlPlaneMachineSets`, and `machineConfigPools` props are typed as optional (`?`), but the code calls `.map()` on them directly. While `useK8sWatchResource` returns `[]` for unloaded lists, the type contract allows `undefined`, which would cause a TypeError.

♻️ Proposed defensive fix
```diff
  const machineSetFilterOptions = useMemo<DataViewFilterOption[]>(
    () =>
      [
-       ...machineSets.map((machineSet) => ({
+       ...(machineSets ?? []).map((machineSet) => ({
          value: machineSet.metadata.name,
          label: machineSet.metadata.name,
        })),
-       ...controlPlaneMachineSets.map((machineSet) => ({
+       ...(controlPlaneMachineSets ?? []).map((machineSet) => ({
          value: machineSet.metadata.name,
          label: machineSet.metadata.name,
        })),
      ].sort((a, b) => a.label.localeCompare(b.label)),
    [machineSets, controlPlaneMachineSets],
  );

  const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
    () =>
-     machineConfigPools.map((machineConfigPool) => ({
+     (machineConfigPools ?? []).map((machineConfigPool) => ({
        value: machineConfigPool.metadata.name,
        label: machineConfigPool.metadata.name,
      })),
    [machineConfigPools],
  );
```
970-973: Consider memoizing LabelSelector instances to reduce allocations.

A new `LabelSelector` is instantiated for every MCP during each node iteration, resulting in O(nodes × MCPs) object allocations on every render cycle. For large clusters this could impact performance.

♻️ Pre-compute MCP selectors outside the node loop
```diff
+ const mcpSelectors = useMemo(
+   () =>
+     machineConfigPools.map((mcp) => ({
+       mcp,
+       selector: new LabelSelector(mcp.spec?.nodeSelector || {}),
+     })),
+   [machineConfigPools],
+ );
  const data = useMemo(() => {
    // ...
    return [
      ...csrBundle,
      ...nodes.map((node) => {
        // ...
-       const machineConfigPool = machineConfigPools.find((mcp) => {
-         const labelSelector = new LabelSelector(mcp.spec.nodeSelector || {});
-         return labelSelector.matches(node);
-       });
+       const machineConfigPool = mcpSelectors.find(({ selector }) =>
+         selector.matches(node),
+       )?.mcp;
        // ...
      }),
    ];
- }, [csrs, nodes, machines, machineConfigPools, vmis, isKubevirtPluginActive]);
+ }, [csrs, nodes, machines, mcpSelectors, vmis, isKubevirtPluginActive]);
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (6)
frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1)
- getGroupVersionKindForResource (413-413)

frontend/public/module/k8s/types.ts (4)
- MachineSetKind (730-744), ControlPlaneMachineSetKind (746-765), MachineConfigPoolKind (842-850), MachineKind (711-728)

frontend/packages/console-app/src/components/data-view/types.ts (1)
- ResourceFilters (5-8)

frontend/packages/console-dynamic-plugin-sdk/src/utils/k8s/hooks/useK8sWatchResource.ts (1)
- useK8sWatchResource (28-69)

frontend/public/models/index.ts (4)
- MachineModel (894-909), MachineSetModel (911-926), ControlPlaneMachineSetModel (928-943), MachineConfigPoolModel (962-977)

frontend/public/module/k8s/label-selector.js (1)
- LabelSelector (16-278)
🔇 Additional comments (7)
frontend/packages/console-app/locales/en/console-app.json (1)
409-423: LGTM: i18n keys correctly added for new Node columns and filters.

The translation keys align with the UI elements introduced in NodesPage.tsx. Capitalization is consistent with existing conventions (e.g., `MachineConfigPool` matching line 81).

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (6)
101-106: Reasonable temporary integration for KubeVirt VMI awareness.

The GVK definition is correct for `kubevirt.io/v1/VirtualMachineInstance`. The TODO comment clearly signals this should be refactored once the kubevirt-plugin can contribute the column directly via extension points.

1011-1018: Conditional column enablement provides reasonable graceful degradation.

The logic correctly enables columns based on resource availability:

- `ownerEnabled` requires machines and at least one of MachineSets/ControlPlaneMachineSets
- `mcpEnabled` requires both machines and MCPs
- `vmsEnabled` gates on the KubeVirt feature flag

This is a solid UX pattern for optional functionality.
904-917: KubeVirt integration follows correct patterns.

The double gate (`KUBEVIRT_DYNAMIC` flag plus a plugin presence check) is appropriately defensive. Conditionally passing `undefined` to `useK8sWatchResource` is the standard pattern to skip unnecessary watches.

167-248: Column definitions cleanly implement feature-flagged columns.

The conditional spreading pattern for the `machineOwner`, `vms`, and `machineConfigPool` columns is idiomatic. Moving Roles to `additional: true` aligns with the updated default column set in the PR description.
466-490: Cell rendering correctly uses dynamic GVK resolution for ResourceLinks.

Using `getGroupVersionKindForResource()` is the right approach for rendering links to owner references and MCPs where the exact GVK is determined at runtime.

836-848: Type definitions accurately model the extended node data and filter state.

The optional fields on `NodeRowItem` and the new filter arrays in `NodeFilters` correctly reflect the runtime data model.
Force-pushed a620b03 to 602755a.
@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid Jira issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 664-686: The useMemo hooks assume machineSets,
controlPlaneMachineSets, and machineConfigPools are always arrays but they are
optional in NodeListProps; update the two hooks (machineSetFilterOptions and
machineConfigPoolFilterOptions) to defensively handle undefined by using a
fallback empty array when mapping (e.g., use (machineSets ?? []) and
(controlPlaneMachineSets ?? []) and (machineConfigPools ?? []) before .map()),
so the map and sort operations never run on undefined and no runtime error
occurs.
🧹 Nitpick comments (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (1)
904-908: Consider a defensive check for `window.SERVER_FLAGS`.

The `Array.isArray(window.SERVER_FLAGS.consolePlugins)` check is good, but `window.SERVER_FLAGS` itself could theoretically be undefined in certain environments (unit tests, SSR edge cases). A more defensive approach would prevent potential test failures.

💡 Optional: Add null check for SERVER_FLAGS
```diff
  const isKubevirtPluginActive =
-   Array.isArray(window.SERVER_FLAGS.consolePlugins) &&
-   window.SERVER_FLAGS.consolePlugins.includes('kubevirt-plugin') &&
+   Array.isArray(window.SERVER_FLAGS?.consolePlugins) &&
+   window.SERVER_FLAGS?.consolePlugins.includes('kubevirt-plugin') &&
    kubevirtFeature;
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (8)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
- K8sGroupVersionKind (106-106)

frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1)
- getGroupVersionKindForResource (413-413)

frontend/public/module/k8s/types.ts (3)
- MachineSetKind (730-744), ControlPlaneMachineSetKind (746-765), MachineConfigPoolKind (842-850)

frontend/packages/console-app/src/components/data-view/ConsoleDataView.tsx (1)
- initialFiltersDefault (30-30)

frontend/packages/console-app/src/components/data-view/types.ts (1)
- ResourceFilters (5-8)

frontend/public/models/index.ts (4)
- MachineModel (894-909), MachineSetModel (911-926), ControlPlaneMachineSetModel (928-943), MachineConfigPoolModel (962-977)

frontend/packages/console-dynamic-plugin-sdk/src/utils/flags.ts (1)
- useFlag (11-12)

frontend/public/module/k8s/label-selector.js (1)
- LabelSelector (16-278)
🔇 Additional comments (8)
frontend/packages/console-app/locales/en/console-app.json (1)
409-423: LGTM! New i18n keys are well-structured.

The new translation keys for Machine set, VMs, MachineConfigPool, and associated filter placeholders follow existing conventions and align with the NodesPage.tsx usage. Good to go.

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (7)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (7)
101-106: Acknowledged: Temporary VMI GVK definition.The hardcoded
VirtualMachineInstanceGroupVersionKindwith the TODO note is a reasonable interim approach. Once the KubeVirt plugin can contribute the VMs column via extension points, this can be removed. The GVK values are correct for the kubevirt.io API.
115-132: LGTM! Column info entries are well-defined.The new column identifiers (
machineOwner,vms,machineConfigPool) and the repositionedrolecolumn follow the existing pattern. These IDs correctly map to the cell rendering logic ingetNodeDataViewRows.Also applies to: 148-150
167-171: LGTM! Conditional column rendering is well-implemented.The feature-flagged column pattern using conditional spreading (
...(flag ? [...] : [])) is clean and maintains type safety. The sort paths correctly reference the augmentedNodeRowItemproperties. Theadditional: truemarking on secondary columns aligns with the column management UX pattern.Also applies to: 192-215, 240-248, 294-306, 343-343
792-802: LGTM! Filter logic handles optional properties correctly.The optional chaining (
resource.machineOwner?.name,resource.machineConfigPool?.metadata.name) ensures nodes without these associations are safely filtered out when the respective filters are active. This is the expected UX behavior.
994-1003: Graceful degradation pattern is well-designed.The separation of concerns here is solid: machine-related load errors disable the associated columns/filters rather than failing the entire Nodes page. The explicit comment on line 1002 clarifies the intent.
One consideration: the
loadedstate waits for all resources including Machines/MachineSets/MCPs even when they'll be hidden due to errors. If these APIs are slow or unavailable on certain cluster configurations (e.g., non-OCP clusters), the initial load could be delayed. This is likely acceptable for the target environment (OpenShift Console).
1017-1024: Consider: DoesmcpEnabledneed to depend onmachinesLoadError?The
mcpEnabledcondition includes!machinesLoadError, but the MachineConfigPool association (lines 970-973) uses onlyLabelSelectormatching against node labels—it doesn't require the Machines API.If Machines fail to load but MachineConfigPools load successfully, the MCP column would be unnecessarily hidden.
Was the
machinesLoadErrordependency intentional formcpEnabled? If not:💡 Potential fix
```diff
- mcpEnabled={!machinesLoadError && !machineConfigPoolsLoadError}
+ mcpEnabled={!machineConfigPoolsLoadError}
```
970-973: No action needed; empty `nodeSelector` handling is correct.

`LabelSelector` with an empty selector `{}` returns `false` when matching nodes, not `true`. When `mcp.spec.nodeSelector` is undefined, the fallback to `{}` results in zero conjuncts, causing `isEmpty()` to return `true`. Since `emptySelectsAll` defaults to `false`, the selector correctly returns `false` for all nodes. MCPs without explicit node selectors will not incorrectly match any nodes.

The `.find()` returning only the first matching MCP is acceptable for typical MachineConfigPool association patterns.
Force-pushed 602755a to 4581817.
This part looks good, we might need to modify the default columns later.
rhamilto left a comment:
Nice work, @jeff-phillips-18! Claude highlighted a few areas to address.
On the `machineOwners` filter check, Claude suggests the following to fix a potential null reference:

```diff
  if (filters.machineOwners.length > 0) {
-   if (!filters.machineOwners.includes(resource.machineOwner?.name)) {
+   if (!resource.machineOwner || !filters.machineOwners.includes(resource.machineOwner.name)) {
```
On the `machineConfigPools` filter check, Claude suggests the following to fix a potential null reference:

```diff
  if (filters.machineConfigPools.length > 0) {
-   if (!filters.machineConfigPools.includes(resource.machineConfigPool?.metadata.name)) {
+   if (!filters.machineConfigPools || !filters.machineConfigPools.includes(resource.machineConfigPool?.metadata.name)) {
```
Force-pushed 4581817 to 6b123bb.
@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid Jira issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 949-951: The machinesByName Map keys only use m.metadata.name
which can collide across namespaces; update the useMemo that creates
machinesByName to key each entry by a namespaced identifier like
`${m.metadata.namespace}/${m.metadata.name}` (refer to the machinesByName
variable and its useMemo), and then update any places that look up machines from
this Map (e.g., the data lookup that currently uses just the name) to construct
and use the same `${namespace}/${name}` key so machines in different namespaces
are distinguished.
- Around line 984-992: The machine-related watches in NodesPage.tsx must be
gated by RBAC like the CSRs pattern: add useAccessReview checks for the
Machines, MachineSets, ControlPlaneMachineSets, and MachineConfigPools resources
and, where you currently call useK8sWatchResource for those resources, pass
undefined when access is denied (same approach used around the csrs/CSR
accessReview and useK8sWatchResource calls). This ensures useK8sWatchResource
returns loaded=true when the user lacks access so machinesLoaded,
machineSetsLoaded, controlPlaneMachineSetsLoaded, and machineConfigPoolsLoaded
won’t block the overall loaded flag.
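The two fixes above can be sketched in isolation. These are simplified stand-ins, not the component code: `MachineLike` is a hypothetical type, and `watchIfAllowed` only models the gating decision, since the real wiring uses the `useAccessReview` and `useK8sWatchResource` hooks inside the component.

```typescript
// Namespaced key avoids collisions between same-named Machines in different namespaces.
type MachineLike = { metadata: { name: string; namespace: string } };

const machineKey = (namespace: string, name: string): string => `${namespace}/${name}`;

const buildMachinesByName = (machines: MachineLike[]): Map<string, MachineLike> =>
  new Map(
    machines.map((m): [string, MachineLike] => [
      machineKey(m.metadata.namespace, m.metadata.name),
      m,
    ]),
  );

// RBAC gating pattern: when an access review denies list access, hand the
// watch hook undefined so it reports loaded=true immediately instead of
// blocking the page's overall loaded flag.
const watchIfAllowed = <R,>(canList: boolean, watchResource: R): R | undefined =>
  canList ? watchResource : undefined;
```

Lookups must then build the same `${namespace}/${name}` key, so the two halves of the map stay in sync.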
🧹 Nitpick comments (2)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)
373-373: Consider: type consistency for the `runningVMs` default.

Line 373 defaults to `'0'` (a string) while the metric value is a number. This works for display, but consider using `0` (a number) and letting React coerce it, or explicitly `String(runningVMs)` for clarity. Minor nit.

Also applies to: 480-482
679-686: Minor: consider sorting MachineConfigPool filter options.

`machineSetFilterOptions` is sorted alphabetically (line 675), but `machineConfigPoolFilterOptions` is not. Consider adding `.sort((a, b) => a.label.localeCompare(b.label))` for consistency.

♻️ Suggested fix
```diff
  const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
    () =>
      machineConfigPools.map((machineConfigPool) => ({
        value: machineConfigPool.metadata.name,
        label: machineConfigPool.metadata.name,
-     })),
+     })).sort((a, b) => a.label.localeCompare(b.label)),
    [machineConfigPools],
  );
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
- frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/packages/console-app/locales/en/console-app.json
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/public/actions/ui.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
- MetricValuesByName (869-871)
🔇 Additional comments (13)
frontend/public/actions/ui.ts (1)
43-52: LGTM! Clean type extension for VM metrics.

The `runningVms` field follows the established `MetricValuesByName` pattern used by other node metrics. This additive change maintains backward compatibility.

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (12)
42-60: LGTM! Well-organized imports for machine-related functionality.

The new model and kind imports are properly grouped and all appear to be utilized in the implementation below.

106-141: Clean column info structure for new columns.

The `machineOwner`, `vms`, and `machineConfigPool` column definitions follow the established pattern.
158-162: Good approach with feature flags for conditional columns.

The hook signature change to accept boolean flags enables clean conditional rendering of feature-gated columns. This aligns well with the KubeVirt plugin activation pattern.

231-239: Verify: Roles column demoted to additional.

The `additional: true` flag means Roles will no longer be shown by default, aligning with the PR objective stating the default columns are Name, Status, Machine set, VMs, Pods, Memory, and CPU. Users can still enable it via column management.

470-479: LGTM! Correct handling for cluster-scoped MachineConfigPool.

MachineConfigPool is cluster-scoped, so omitting the `namespace` prop on `ResourceLink` is correct.
566-598: LGTM! Well-structured props for feature flags.

Clean separation of concerns with boolean enable flags and corresponding data arrays. Default empty arrays prevent undefined-access errors.

792-803: LGTM! Filter logic handles optional properties correctly.

The optional chaining ensures CSR resources (which lack `machineOwner` and `machineConfigPool`) are properly handled. They'll be filtered out when these filters are active, which is appropriate since CSRs don't have machine associations.

904-908: Good: dual check for KubeVirt activation.

Requiring both the `KUBEVIRT_DYNAMIC` feature flag and plugin presence in `consolePlugins` is a resilient pattern that prevents edge cases where one condition is true but not the other.
953-982: LGTM! Correct association logic for machine ownership and MCPs.

The `LabelSelector` matching for MachineConfigPools and the `ownerReferences` traversal for machine ownership are correct approaches. The O(n×m) complexity is acceptable given that typical clusters have few MCPs.

1006-1012: LGTM! Graceful degradation for machine API access.

The boolean logic correctly enables features only when required APIs are accessible. The OR condition for `machineSetsLoadError || controlPlaneMachineSetsLoadError` allows the Machine Set column to show even if one of the machine set APIs fails.

543-550: No action needed: the KubeVirt metric correctly exposes node labels.

The `kubevirt_vmi_phase_count` metric includes a `node` label per KubeVirt's metric specification, so the query correctly returns per-node VM counts. The code's use of `data.metric.instance || data.metric.node` on line 557 will properly key results by node.

458-469: No issue: `getGroupVersionKindForResource` correctly handles `OwnerReference`.

The `machineOwner` is an `OwnerReference` from `metadata.ownerReferences`, which has both `apiVersion` and `kind` fields. The helper only extracts these two fields, so it works seamlessly with `OwnerReference` objects. The implementation is sound.
Force-pushed 6b123bb to 49e3cb5
@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid jira issue.
Details — In response to this: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Actionable comments posted: 0
🧹 Nitpick comments (3)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (3)
679-686: Consider sorting MachineConfigPool filter options for consistency.
`machineSetFilterOptions` sorts the combined list alphabetically (line 675), but `machineConfigPoolFilterOptions` does not. For consistent UX across filters, consider adding `.sort((a, b) => a.label.localeCompare(b.label))`.
♻️ Optional: Add sorting to MCP filter options

```diff
 const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
   () =>
     machineConfigPools.map((machineConfigPool) => ({
       value: machineConfigPool.metadata.name,
       label: machineConfigPool.metadata.name,
-    })),
+    })).sort((a, b) => a.label.localeCompare(b.label)),
   [machineConfigPools],
 );
```
839-843: `vmsCount` is declared but never populated on row items.
The `NodeRowItem` type includes `vmsCount?: number`, but the data mapping in `useMemo` (lines 961-984) doesn't actually set this field. VM counts are retrieved from `nodeMetrics.runningVms` in `getNodeDataViewRows` instead. Consider removing `vmsCount` from the type if it's intentionally not used on the row object, or document this design choice.
♻️ Remove unused vmsCount from NodeRowItem

```diff
 type NodeRowItem = (NodeKind | NodeCertificateSigningRequestKind) & {
   machineOwner?: OwnerReference;
   machineConfigPool?: MachineConfigPoolKind;
-  vmsCount?: number;
 };
```
1009-1016: Feature enablement logic is correct but could use a clarifying comment.
The boolean logic for `ownerEnabled` ensures the column/filter appears when machines load successfully AND at least one owner resource type (MachineSets OR ControlPlaneMachineSets) is available. This is correct—clusters may have only one type of machine set. Consider adding a brief inline comment explaining this OR logic for future maintainers.
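The AND/OR enablement described above can be sketched as a pure function. The parameter names mirror the PR's load-error variables, but this is an illustrative reconstruction, not the exact implementation.

```typescript
// Enable the Machine Set column/filter only when Machines loaded, and at
// least one of the two owner resource types (MachineSet or
// ControlPlaneMachineSet) is readable.
const computeOwnerEnabled = (
  machinesLoadError: unknown,
  machineSetsLoadError: unknown,
  controlPlaneMachineSetsLoadError: unknown,
): boolean =>
  !machinesLoadError &&
  (!machineSetsLoadError || !controlPlaneMachineSetsLoadError);
```

One readable owner API is enough: a cluster with only ControlPlaneMachineSets (or only MachineSets) still gets the column.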
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
frontend/packages/console-app/locales/en/console-app.json
frontend/packages/console-app/src/components/nodes/NodesPage.tsx
frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/public/actions/ui.ts
🔇 Additional comments (15)
frontend/packages/console-app/locales/en/console-app.json (2)
409-411: LGTM! New i18n keys for node columns are well-placed.
These localization entries for `Machine set`, `VMs`, and `MachineConfigPool` follow existing conventions and are positioned near related node terminology. Good to see i18n being handled properly from the start.
422-423: Filter labels follow established patterns.
The filter placeholder strings `Filter by machine set` and `Filter by MachineConfigPool` are consistent with existing filter patterns in this file (e.g., `Filter by status`, `Filter by roles`).
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (13)
42-48: Model imports align with the feature scope.
Importing `MachineModel`, `MachineConfigPoolModel`, `MachineSetModel`, and `ControlPlaneMachineSetModel` is appropriate for the new column data requirements. These are well-established OpenShift API models.
106-123: Column info structure follows established patterns.
The new `machineOwner`, `vms`, and later `machineConfigPool` entries in `nodeColumnInfo` are well-structured. The `role` column being marked as `additional: true` in `useNodesColumns` aligns with the PR objective of updating default columns.
158-162: Clean feature-flag-driven column composition.
The `useNodesColumns` signature accepting `ownerEnabled`, `mcpEnabled`, and `vmsEnabled` is a solid approach for conditional column rendering. This keeps the UI clean when backing resources aren't available or features are disabled.
470-479: MachineConfigPool cell correctly handles cluster-scoped resource.
Good attention to detail—MachineConfigPool is cluster-scoped, so no `namespace` prop is passed to `ResourceLink`. Using `getGroupVersionKindForResource(machineConfigPool)` on the full resource object is the right approach here.
688-698: Filter initialization is properly structured.
The `initialFilters` correctly initializes `machineOwners` and `machineConfigPools` as empty arrays, ensuring no filters are pre-selected. Good use of spreading `initialFiltersDefault` for base properties.
792-805: Filter matching logic is correct.
The `machineOwners` and `machineConfigPools` filter logic properly handles cases where the resource lacks these associations. Note: Line 801 has `resource.machineConfigPool?.metadata.name` but the outer condition already confirms `resource.machineConfigPool` exists—the optional chaining is harmless but redundant.
907-911: Robust KubeVirt plugin detection.
The three-way check (`Array.isArray`, `includes`, and feature flag) is appropriately defensive. This ensures VM columns/metrics only appear when KubeVirt is both installed and enabled, avoiding UI confusion on clusters without virtualization support.
961-985: Data composition is well-structured.
The augmentation logic correctly:
- Uses `Map` for O(1) machine lookups
- Matches MCPs using `LabelSelector.matches(node)`
- Finds owner references filtering for `MachineSet` or `ControlPlaneMachineSet` kinds

The O(nodes × MCPs) complexity for MCP matching is acceptable for typical cluster sizes.
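The map lookup and owner-reference traversal in the bullets above can be sketched roughly as follows. The `name/namespace` key format mirrors what this review discusses for the PR; the `MachineLike`/`OwnerRef` types and function names are illustrative, not the console's real types.

```typescript
interface OwnerRef {
  kind: string;
  name: string;
}

interface MachineLike {
  metadata: { name: string; namespace: string; ownerReferences?: OwnerRef[] };
}

// Build a Map keyed by "name/namespace" for O(1) lookups instead of
// scanning the machines array once per node.
const buildMachineMap = (machines: MachineLike[]): Map<string, MachineLike> =>
  new Map(machines.map((m) => [`${m.metadata.name}/${m.metadata.namespace}`, m]));

// Pick the owning MachineSet or ControlPlaneMachineSet reference, if any.
const findMachineOwner = (machine: MachineLike): OwnerRef | undefined =>
  machine.metadata.ownerReferences?.find(
    (ref) => ref.kind === 'MachineSet' || ref.kind === 'ControlPlaneMachineSet',
  );
```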
987-995: Intentional graceful degradation for machine-related errors.
The comment at line 994 clarifies the design: machine load errors hide columns/filters rather than blocking the entire page. This is a good UX pattern—users can still see nodes even if machine-related data isn't available. The `ownerEnabled` and `mcpEnabled` props correctly reflect this at lines 1009-1014.
895-929: Six concurrent K8s watches may impact performance on large clusters.
The page now maintains watches for Nodes, Machines, MachineSets, ControlPlaneMachineSets, MachineConfigPools, and CSRs. While this is a common pattern, consider:
- Could any watches be made conditional based on feature flags (e.g., skip MCP watch if feature is known-disabled)?
- For very large clusters, this increases memory footprint from held resource lists.
This isn't a blocker—the pattern is established in console—but worth monitoring if users report sluggishness on large deployments.
933-949: Metrics polling implementation is solid.
The 30-second interval with proper cleanup prevents memory leaks. The dependency on `isKubevirtPluginActive` ensures VM metrics are fetched when the feature becomes available. Good defensive error handling with the console.error fallback.
458-469: No action needed — `OwnerReference` is fully compatible with `getGroupVersionKindForResource`.
The `OwnerReference` type includes `apiVersion` and `kind` as required fields, matching exactly what `getGroupVersionKindForResource` expects. The utility safely returns `undefined` if either property is missing. Additionally, the code already guards against null/undefined `machineOwner` with the conditional rendering (lines 460–468), falling back to `DASH`. This is sound.
543-550: The query is correct as-is. KubeVirt's `kubevirt_vmi_phase_count` metric natively exposes a `node` label for each VMI, so the reducer at line 557 will correctly receive `data.metric.node` without requiring an explicit node selector in the query. This is the right design: unlike `kubelet_running_pods` (which needs `sum by(node)` aggregation), KubeVirt metrics already carry per-VMI granularity with node attribution included.
Likely an incorrect or invalid review comment.
Force-pushed 49e3cb5 to 70ab099
Actionable comments posted: 0
🧹 Nitpick comments (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (1)
856-874: Consider improving type safety for the empty array fallback.
The hook logic is correct—it properly gates resource watching behind RBAC checks. The `!checkIsLoading && loaded` ensures we wait for the access review before reporting loaded state.
The type assertion `[] as R` on line 873 works because `R extends K8sResourceCommon[]`, but a slightly safer pattern would be to use a type predicate or simply return the empty array without assertion since TypeScript infers correctly for array types.
♻️ Optional: Minor type improvement

```diff
-  return [resources || ([] as R), !checkIsLoading && loaded, loadError];
+  return [(resources ?? []) as R, !checkIsLoading && loaded, loadError];
```

Or if you want to avoid the assertion entirely:

```ts
const useWatchResourcesIfAllowed = <R extends K8sResourceCommon>(
  model: K8sModel,
): [R[], boolean, unknown] => {
  // ... implementation
  return [resources || [], !checkIsLoading && loaded, loadError];
};
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
frontend/packages/console-app/locales/en/console-app.json
frontend/packages/console-app/src/components/nodes/NodesPage.tsx
frontend/packages/console-shared/src/sorts/nodes.ts
frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- frontend/public/actions/ui.ts
- frontend/packages/console-app/locales/en/console-app.json
🧰 Additional context used
🧬 Code graph analysis (2)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
  - NodeKind (822-847)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (9)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
  - nodeRunningVms (27-28)
frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1)
  - getGroupVersionKindForResource (413-413)
frontend/public/module/k8s/types.ts (4)
  - MachineSetKind (730-744)
  - ControlPlaneMachineSetKind (746-765)
  - MachineConfigPoolKind (842-850)
  - MachineKind (711-728)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (5)
  - NodeKind (822-847)
  - NodeCertificateSigningRequestKind (865-867)
  - OwnerReference (29-36)
  - K8sResourceCommon (67-71)
  - CertificateSigningRequestKind (849-863)
frontend/packages/console-app/src/components/data-view/types.ts (1)
  - ResourceFilters (5-8)
frontend/packages/console-dynamic-plugin-sdk/src/api/common-types.ts (1)
  - K8sModel (33-60)
frontend/public/models/index.ts (4)
  - MachineModel (894-909)
  - MachineSetModel (911-926)
  - ControlPlaneMachineSetModel (928-943)
  - MachineConfigPoolModel (962-977)
frontend/packages/console-dynamic-plugin-sdk/src/utils/flags.ts (1)
  - useFlag (11-12)
frontend/packages/console-app/src/components/nodes/csr.ts (1)
  - getNodeClientCSRs (36-75)
🔇 Additional comments (12)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
27-28: LGTM!
Clean addition that follows the established pattern for metric accessors in this file (e.g., `nodeCPU`, `nodePods`). Consistent with the new `runningVms` metric being wired up in `ui.ts`.
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (11)
103-158: LGTM!
Column info structure is well-organized and consistent with existing patterns. The new `machineOwner`, `vms`, and `machineConfigPool` entries follow the established convention.
162-340: LGTM!
The conditional column inclusion pattern using spread operators is clean and the `useMemo` dependencies correctly track the feature flags. This approach properly hides columns when their backing resources aren't accessible.
924-928: LGTM!
Good defensive check for KubeVirt integration. The `Array.isArray` guard prevents runtime errors if `consolePlugins` is undefined, and combining with the feature flag ensures the feature is properly gated.
963-971: LGTM on MCP matching logic.
Using `LabelSelector.matches()` is the correct approach for determining which MachineConfigPool applies to a node based on its labels. The owner reference lookup correctly filters for `MachineSet` and `ControlPlaneMachineSet` kinds.
1004-1011: LGTM on feature gating logic.
The `ownerEnabled` condition correctly allows the column when machines load AND at least one of MachineSet or ControlPlaneMachineSet is accessible. The `mcpEnabled` and `vmsEnabled` conditions are straightforward. This provides graceful degradation when RBAC restricts access to certain resources.
668-690: LGTM on filter options construction.
Combining MachineSets and ControlPlaneMachineSets into a single sorted list for the filter dropdown provides a consistent UX. The alphabetical sort is a nice touch for usability.
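The combine-then-sort step can be sketched as follows. `FilterOption` is reduced to the two fields the dropdown uses, and `toFilterOptions` is an illustrative name, not the PR's actual helper.

```typescript
interface FilterOption {
  value: string;
  label: string;
}

interface Named {
  metadata: { name: string };
}

// Merge two resource lists (e.g., MachineSets and ControlPlaneMachineSets)
// into one alphabetically sorted option list for the filter dropdown.
const toFilterOptions = (a: Named[], b: Named[]): FilterOption[] =>
  [...a, ...b]
    .map((r) => ({ value: r.metadata.name, label: r.metadata.name }))
    .sort((x, y) => x.label.localeCompare(y.label));
```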
982-990: LGTM on error handling strategy.
The design to exclude machine-related errors from `loadError` while hiding their columns/filters is a solid resilience pattern. Users can still view core node information even if they lack RBAC access to machine resources.
843-854: LGTM on type definitions.
The `NodeRowItem` intersection type cleanly extends the base node types with optional machine-related fields. The `NodeFilters` extension properly models the new filter state.
190-202: All i18n keys for new columns and filters are properly defined.
The locale strings—`Machine set`, `VMs`, `MachineConfigPool`, `Filter by machine set`, and `Filter by MachineConfigPool`—are all present in frontend/packages/console-app/locales/en/console-app.json and translated across all supported locales (English, Spanish, French, Japanese, Korean, and Chinese). The namespace prefix convention (`console-app~`) is correctly applied.
547-554: Metric verification confirmed.
The `kubevirt_vmi_phase_count{phase='running'}` metric is correct per KubeVirt documentation, which confirms it reports VMI counts per phase and node. The code defensively accesses both `instance` and `node` labels (line 561), ensuring robustness across different metric exposures.
462-486: No action required—type compatibility verified.
`OwnerReference` is structurally compatible with `K8sResourceCommon`. Both `apiVersion` and `kind` are required fields on `OwnerReference`, satisfying the function's needs. The implementation uses optional chaining defensively, and this pattern is consistent across the codebase (VPA, exports, etc.). The cell renderers are correct.
Force-pushed 70ab099 to f652608
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 1006-1012: The mcpEnabled prop incorrectly gates MCP UI on
machinesLoadError; change the prop passed to mcpEnabled in NodesPage (currently
using machinesLoadError and machineConfigPoolsLoadError) so it only checks
!machineConfigPoolsLoadError. Locate the mcpEnabled usage in NodesPage.tsx and
update it to rely solely on machineConfigPoolsLoadError (since MCP assignments
are derived from node labels via LabelSelector(mcp.spec.nodeSelector) and do not
require Machines to be loaded), leaving machineConfigPools and any LabelSelector
logic untouched.
- Around line 547-554: The kubevirt query for the 'runningVms' series is missing
an explicit aggregation and may not yield a data.metric.node or
data.metric.instance label expected by the reducer (see reducer at line 561);
update the query string for the object with key 'runningVms' (inside the
vmsEnabled branch) to use an explicit aggregation such as sum
by(node)(kubevirt_vmi_phase_count{phase='running'}) so the returned series
include the node label and remain consistent with the other metrics.
🧹 Nitpick comments (2)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)
858-876: Type cast on empty array may be too permissive.
The `[] as R` cast on line 875 bypasses type safety. While functionally fine for array types, consider using a more explicit type or adjusting the generic constraint:

```diff
-const useWatchResourcesIfAllowed = <R extends K8sResourceCommon[]>(
+const useWatchResourcesIfAllowed = <T extends K8sResourceCommon>(
   model: K8sModel,
-): [R, boolean, unknown] => {
+): [T[], boolean, unknown] => {
   // ...
-  return [resources || ([] as R), !checkIsLoading && loaded, loadError];
+  return [resources || [], !checkIsLoading && loaded, loadError];
 };
```

This avoids the explicit cast and makes the return type clearer. Low priority given the current usage is correct.
377-377: Inconsistent default value for VMs compared to other metrics.
The `pods` metric (line 382) defaults to `DASH` when unavailable, but `runningVMs` defaults to `'0'`. This could be misleading – a missing metric versus "0 VMs running" are different states:

```diff
-  const runningVMs = nodeMetrics?.runningVms?.[nodeName] ?? '0';
+  const runningVMs = nodeMetrics?.runningVms?.[nodeName] ?? DASH;
```

Alternatively, if `0` is intentional because VM-capable nodes with no VMs should show "0", consider adding a comment explaining the rationale.
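The distinction argued for above — dash for "no data", "0" for a genuine count of zero — can be sketched as a tiny formatter. `DASH` is the console's placeholder constant; here it is just a local stand-in, and `formatVmCell` is an illustrative name.

```typescript
const DASH = '-'; // local stand-in for the console's DASH constant

// Render an absent metric as a dash, but a present zero as "0".
const formatVmCell = (runningVms: number | undefined): string =>
  runningVms === undefined ? DASH : String(runningVms);
```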
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
frontend/packages/console-app/locales/en/console-app.json
frontend/packages/console-app/src/components/nodes/NodesPage.tsx
frontend/packages/console-shared/src/sorts/nodes.ts
frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/packages/console-app/locales/en/console-app.json
🧰 Additional context used
🧬 Code graph analysis (3)
frontend/public/actions/ui.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
  - MetricValuesByName (869-871)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
  - NodeKind (822-847)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (10)
frontend/public/components/factory/table.tsx (1)
  - TableColumn (662-669)
frontend/packages/console-shared/src/sorts/nodes.ts (2)
  - sortWithCSRResource (40-50)
  - nodeRunningVms (27-28)
frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1)
  - getGroupVersionKindForResource (413-413)
frontend/public/module/k8s/types.ts (2)
  - MachineSetKind (730-744)
  - ControlPlaneMachineSetKind (746-765)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (4)
  - NodeKind (822-847)
  - NodeCertificateSigningRequestKind (865-867)
  - K8sResourceCommon (67-71)
  - CertificateSigningRequestKind (849-863)
frontend/packages/console-app/src/components/data-view/types.ts (1)
  - ResourceFilters (5-8)
frontend/packages/console-dynamic-plugin-sdk/src/api/common-types.ts (1)
  - K8sModel (33-60)
frontend/packages/console-dynamic-plugin-sdk/src/utils/k8s/hooks/useK8sWatchResource.ts (1)
  - useK8sWatchResource (28-69)
frontend/public/models/index.ts (4)
  - MachineModel (894-909)
  - MachineSetModel (911-926)
  - ControlPlaneMachineSetModel (928-943)
  - MachineConfigPoolModel (962-977)
frontend/packages/console-app/src/components/nodes/csr.ts (1)
  - getNodeClientCSRs (36-75)
🔇 Additional comments (9)
frontend/public/actions/ui.ts (1)
43-52: LGTM – Type extension follows established patterns.The new
runningVmsfield aligns with the existing metric naming convention and type structure. Backward compatible addition.frontend/packages/console-shared/src/sorts/nodes.ts (1)
27-28: LGTM – Consistent with existing metric accessors.Follows the established pattern used by
nodeCPUandnodePods. TheNumber()wrapper correctly handles the conversion from the metric store.frontend/packages/console-app/src/components/nodes/NodesPage.tsx (7)
965-968: First-match behavior for MCP is acceptable but worth noting.A node could theoretically match multiple MachineConfigPools based on label selectors. The current
.find()returns only the first match. This is fine for the UI column display, but ensure this aligns with how OpenShift MCO actually assigns nodes to pools (typically nodes belong to one primary pool).
984-992: Loaded state aggregation is reasonable.
The decision to wait for all resources before rendering prevents partial/inconsistent data. The comment on line 991 correctly explains that machine-related load errors hide columns rather than fail the page. Good defensive approach.
187-210: i18n handling is correct for new columns.
All new column titles and filter labels are properly wrapped with `t()` using the `console-app~` namespace. Follows established patterns.
926-930: Solid KubeVirt feature detection.
The dual check (plugin presence via `SERVER_FLAGS.consolePlugins` AND feature flag via `useFlag('KUBEVIRT_DYNAMIC')`) is a good defensive pattern. The `Array.isArray` guard handles undefined cases.
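The dual check can be sketched as a pure predicate. The plugin list and flag value are passed in (rather than read from `SERVER_FLAGS` and `useFlag`) so the logic is testable in isolation; the plugin id `'kubevirt-plugin'` is an assumption for illustration.

```typescript
// Active only when the plugin list is actually an array, contains the
// KubeVirt plugin, and the feature flag is on.
const isKubevirtActive = (
  consolePlugins: unknown,
  kubevirtDynamicFlag: boolean,
): boolean =>
  Array.isArray(consolePlugins) &&
  consolePlugins.includes('kubevirt-plugin') &&
  kubevirtDynamicFlag;
```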
951-982: Data memoization and mapping looks correct.
Dependencies are properly tracked. The `machinesByName` Map optimization (lines 901-904) provides O(1) lookups instead of repeated array scans. Node enrichment with `machineOwner` and `machineConfigPool` is cleanly implemented.
922-924: CSR watching now properly RBAC-gated.
Using `useWatchResourcesIfAllowed` for CSRs ensures the watch is only attempted when the user has list permissions. This prevents console errors for users without CSR access.
462-473: No issue here. `getGroupVersionKindForResource` only requires `apiVersion` and `kind` properties—both of which `OwnerReference` provides. The function signature works with any resource type that includes these fields, not exclusively with `K8sResourceCommon`.
Resolved (outdated) review comment on frontend/packages/console-app/src/components/nodes/NodesPage.tsx
Resolved (outdated) review comment on frontend/packages/console-app/src/components/nodes/NodesPage.tsx
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@frontend/packages/console-app/src/components/nodes/NodesPage.tsx`:
- Around line 492-494: The VM count cell should distinguish between "no data"
(undefined) and an empty list (0); update the assignment for
[nodeColumnInfo.vms.id] so it sets cell to a stringified length when
virtualMachines is defined (e.g., String(virtualMachines.length) or explicit
check virtualMachines !== undefined ? `${virtualMachines.length}` : undefined)
rather than virtualMachines?.length, so getNodeDataViewRows (which falls back to
DASH when cell is undefined) shows DASH for unavailable data but "0" when
KubeVirt is active with no VMs.
🧹 Nitpick comments (5)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (5)
104-114: Consider extracting the inline model to a shared location.
The inline `VirtualMachineInstanceModel` with the TODO comment is acceptable as a temporary measure, but for long-term maintainability, this model should live in a shared models file. The TODO correctly notes this should be removed if/when the KubeVirt plugin can add its own column.
One minor observation: the `crd: true` property is set, which is correct for KubeVirt CRDs.
886-889: Consider using standard `namespace/name` key format for consistency.
The Map key uses `${m.metadata.name}/${m.metadata.namespace}` format, which is inverted from the typical Kubernetes convention of `namespace/name`. While this works correctly (lookup at line 975 uses the same format), consider aligning with the standard pattern for maintainability:

```diff
-    () => new Map(machines.map((m) => [`${m.metadata.name}/${m.metadata.namespace}`, m])),
+    () => new Map(machines.map((m) => [`${m.metadata.namespace}/${m.metadata.name}`, m])),
```

And update the lookup accordingly. This is a minor consistency improvement.
917-942: Consider memoizing the empty Map to avoid unnecessary re-renders.
When KubeVirt is inactive or there are load errors, a new empty `Map` is created on each render (line 925). For stability:

```diff
+const EMPTY_VMS_MAP = new Map<string, K8sResourceKind[]>();
+
 const vmsByNode = useMemo(() => {
   if (
     !isKubevirtPluginActive ||
     !nodesLoaded ||
     nodesLoadError ||
     !virtualMachinesLoaded ||
     virtualMachinesError
   ) {
-    return new Map<string, K8sResourceKind[]>();
+    return EMPTY_VMS_MAP;
   }
   // ...
 }, [/* deps */]);
```

This prevents downstream consumers from seeing a "new" object reference when the data hasn't actually changed.
790-797: Minor: redundant optional chaining after null guard.
Line 792 checks `!resource.machineConfigPool`, so line 793's `resource.machineConfigPool?.metadata.name` doesn't need the optional chaining—`resource.machineConfigPool.metadata.name` would suffice. Not a bug, just slightly redundant.
960-960: Stale dependency: `isKubevirtPluginActive` in useEffect.
The `fetchNodeMetrics()` function no longer uses KubeVirt-related queries (VM data now comes from the VMI watch). The `isKubevirtPluginActive` dependency in the useEffect is no longer necessary and could be removed for clarity.
Force-pushed 98f8807 to 6d758dd
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@frontend/packages/console-app/src/components/nodes/NodesPage.tsx`:
- Around line 494-496: The vmi count cell uses virtualMachines length but the
hook useWatchResourcesIfAllowed collapses “no access” to an empty array so every
node shows "0"; modify useWatchResourcesIfAllowed to return an explicit allowed
flag (e.g., { resources, allowed } or [resources, allowed]) and then in the
aggregation used for nodeColumnInfo.vms.id (and the other similar aggregations
around the other occurrences) only compute String(virtualMachines.length) when
allowed is true, otherwise set the cell value to DASH (the existing UX
fallback); update the vmsByNode computation to check the returned allowed signal
and fall back to DASH when not allowed, and apply the same pattern to the other
referenced aggregations.
@jeff-phillips-18
The changes to the column name and help icon look good to me
Yeah, I'm torn about it. @jseseCCS Do you have any suggestions here?
Force-pushed 6d758dd to e8e17ce
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@frontend/packages/console-shared/src/sorts/nodes.ts`:
- Around line 27-28: The sort function nodeRunningVms currently calls
UIActions.getNodeMetric(node, 'runningVms') but fetchNodeMetrics does not query
runningVms, so the sort always returns 0; either add a Prometheus query for
runningVms to fetchNodeMetrics (and include it in the NodeMetrics type) so
UIActions.getNodeMetric can return real values, or change nodeRunningVms to
derive its value from the same source the VM cell uses (virtualMachines /
vmsByNode) by reading the vmsByNode map used in NodesPage.tsx and returning the
VM count for the node; pick one approach and update nodeRunningVms and
fetchNodeMetrics (or the VM data access) accordingly so the sort uses a
populated data source.
🧹 Nitpick comments (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (1)
105-106: Remove duplicated TODO.
Same TODO appears twice; keep one for clarity.
🧹 Suggested cleanup

```diff
-// TODO: Remove VMI retrieval and VMs count column if/when the plugin is able to add the VMs count column
-// TODO: Remove VMI retrieval and VMs count column if/when the plugin is able to add the VMs count column
+// TODO: Remove VMI retrieval and VMs count column if/when the plugin is able to add the VMs count column
```
```ts
export const nodeRunningVms = (node: NodeKind): number =>
  Number(UIActions.getNodeMetric(node, 'runningVms'));
```
🧩 Analysis chain
🏁 Script executed:
rg -n "runningVms" --type ts --type tsx -A 2 -B 2
Repository: openshift/console
Length of output: 88
🏁 Script executed:
# Also search for where metrics are set/populated
rg -n "getNodeMetric|setNodeMetric" --type ts --type tsx | head -20
Repository: openshift/console
Length of output: 88
🏁 Script executed:
# Find the VM cell that renders virtualMachines
rg -n "virtualMachines" --type ts --type tsx -A 2 -B 2 | head -30
Repository: openshift/console
Length of output: 88
🏁 Script executed:
# Check UIActions to understand the metrics pipeline
fd -e ts -e tsx -type f -path "*UIActions*" | head -10
Repository: openshift/console
Length of output: 231
🏁 Script executed:
rg -n "runningVms" --type ts -A 2 -B 2
Repository: openshift/console
Length of output: 852
🏁 Script executed:
rg -n "getNodeMetric" --type ts -A 2 -B 2 | head -40
Repository: openshift/console
Length of output: 2693
🏁 Script executed:
rg -n "virtualMachines" --type ts -A 2 -B 2 | head -40
Repository: openshift/console
Length of output: 2267
🏁 Script executed:
fd -e ts -path "*UIActions*"
Repository: openshift/console
Length of output: 229
🏁 Script executed:
rg -n "runningVms\s*=" --type ts
Repository: openshift/console
Length of output: 43
🏁 Script executed:
rg -n "metrics.*node" --type ts -A 3 -B 1 | head -60
Repository: openshift/console
Length of output: 2223
🏁 Script executed:
# Check where node metrics are set in the store
rg -n "setIn.*metrics" --type ts | head -20
Repository: openshift/console
Length of output: 735
🏁 Script executed:
# Look for vmsByNode to understand the separate pipeline
rg -n "vmsByNode" --type ts -A 2 -B 2
Repository: openshift/console
Length of output: 1429
🏁 Script executed:
rg -n "SetNodeMetrics|setNodeMetrics" --type ts -B 3 -A 3
Repository: openshift/console
Length of output: 4015
🏁 Script executed:
# Find the action dispatch that populates node metrics
rg -n "dispatch.*SetNodeMetrics" --type ts -B 5 -A 5
Repository: openshift/console
Length of output: 43
🏁 Script executed:
# Check for any place where runningVms is mentioned in metrics construction
rg -n "runningVms|running_vms" --type ts
Repository: openshift/console
Length of output: 237
🏁 Script executed:
# Look for the metrics creation/calculation
rg -n "NodeMetrics|nodeMetrics" --type ts -B 2 -A 2 | head -80
Repository: openshift/console
Length of output: 7138
🏁 Script executed:
rg -n "fetchNodeMetrics" --type ts -A 30 | head -80
Length of output: 6493
🏁 Script executed:
# Get the complete NodeMetrics type definition
rg -n "export type NodeMetrics" --type ts -A 15
Length of output: 985
🏁 Script executed:
rg -n "fetchNodeMetrics" --type ts -A 50 | head -120
Length of output: 10466
🏁 Script executed:
# Also check if there's a query for runningVms
rg -n "runningVms.*query|key.*runningVms" --type ts -B 2 -A 2
Length of output: 43
🏁 Script executed:
rg -n "nodeRunningVms" --type ts -B 2 -A 2
Length of output: 1675
Remove `runningVms` from the sort or populate the metric.
`runningVms` is defined in the `NodeMetrics` type but never fetched from Prometheus in `fetchNodeMetrics()` (only the seven metrics at NodesPage.tsx lines 521–554 are queried). The sort will always return 0 for all nodes, making it ineffective. The VM cell displays from `virtualMachines` (sourced from KubeVirt's `vmsByNode` map), not from metrics. Either add a Prometheus query for running VMs to the metrics pipeline or change the sort to use the same data source as the cell.
🤖 Prompt for AI Agents
In `@frontend/packages/console-shared/src/sorts/nodes.ts` around lines 27 - 28,
The sort function nodeRunningVms currently calls UIActions.getNodeMetric(node,
'runningVms') but fetchNodeMetrics does not query runningVms, so the sort always
returns 0; either add a Prometheus query for runningVms to fetchNodeMetrics (and
include it in the NodeMetrics type) so UIActions.getNodeMetric can return real
values, or change nodeRunningVms to derive its value from the same source the VM
cell uses (virtualMachines / vmsByNode) by reading the vmsByNode map used in
NodesPage.tsx and returning the VM count for the node; pick one approach and
update nodeRunningVms and fetchNodeMetrics (or the VM data access) accordingly
so the sort uses a populated data source.
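The second option in this prompt can be sketched as follows; this is a hypothetical illustration, and the `NodeLike` shape and the `Map`-based `vmsByNode` signature are assumptions drawn from the comment, not the actual console types:

```typescript
// Hypothetical sketch: derive the sortable VM count from the same
// vmsByNode map the cell renderer uses, instead of the unpopulated
// 'runningVms' metric. NodeLike is an assumed minimal node shape.
type NodeLike = { metadata: { name: string } };

// vmsByNode maps node name -> VirtualMachineInstances on that node.
const nodeRunningVms = (
  node: NodeLike,
  vmsByNode: Map<string, unknown[]> | undefined,
): number => vmsByNode?.get(node.metadata.name)?.length ?? 0;

// A missing map or a missing entry both sort as 0, matching the DASH
// fallback shown by the VM cell.
console.log(
  nodeRunningVms({ metadata: { name: 'worker-0' } }, new Map([['worker-0', [{}, {}]]])),
); // 2
```

With this shape, the sort and the cell can never disagree, since both read from the same watch-derived map.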
|
@jeff-phillips-18 As requested, tooltip options for VMs section in UI:
1. The count includes only virtual machines you have access to.
2. This count is based on your access permissions and might not include all virtual machines.
You and I agree that the second one is better. |
@XiyunZhao @jseseCCS @kybaker Based on this, I think we should show
@kybaker Do you have thoughts on the tooltip message? |
|
@jeff-phillips-18 This message looks good to me - "This count is based on your access permissions and might not include all virtual machines." |
e8e17ce to
7352046
Compare
|
@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid jira issue.
Details
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (1)
391-492: Show “0” VM count instead of DASH.
`virtualMachines` is numeric, but the truthy check drops 0 and the fallback renders DASH.
🐛 Proposed fix

```diff
-      [nodeColumnInfo.vms.id]: {
-        cell: virtualMachines ? String(virtualMachines) : undefined,
-      },
+      [nodeColumnInfo.vms.id]: {
+        cell: virtualMachines === undefined ? undefined : String(virtualMachines),
+      },
```
🧹 Nitpick comments (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (1)
104-105: Remove duplicate TODO line.
It’s repeated back-to-back and adds noise.
🧹 Suggested cleanup

```diff
-// TODO: Remove VMI retrieval and VMs count column if/when the plugin is able to add the VMs count column
-// TODO: Remove VMI retrieval and VMs count column if/when the plugin is able to add the VMs count column
+// TODO: Remove VMI retrieval and VMs count column if/when the plugin is able to add the VMs count column
```
7352046 to
653ece0
Compare
|
@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid jira issue.
Details
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@frontend/packages/console-app/src/components/nodes/NodesPage.tsx`:
- Around line 487-489: The current usage of String(virtualMachines) for
nodeColumnInfo.vms.id causes the literal "undefined" to be rendered when
virtualMachines is undefined and bypasses the cell || DASH fallback; change the
cell value to only convert to string when virtualMachines is non-null (e.g., use
a ternary or nullish check so cell is undefined/null when virtualMachines is
undefined) so the existing fallback on line with cell || DASH will display DASH
as intended.
- Around line 973-979: In the map that builds the augmented node objects, the
expression vmsByNode?.get(node.metadata.name).length can throw if .get(...)
returns undefined; update that return to safely access the length using optional
chaining (and a safe fallback if desired) for the
vmsByNode.get(node.metadata.name) result so virtualMachines is computed without
throwing (refer to the vmsByNode variable and node.metadata.name used in the
return object inside the map).
🧹 Nitpick comments (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (1)
909-916: VMI watch lacks explicit RBAC gating; consider adding it for UX consistency.
Unlike machines/machineSets/etc., VMIs are watched directly without `useWatchResourcesIfAllowed`. When RBAC denies access, `vmisLoadError` is set, which sets `vmsByNode` to `undefined`, and `virtualMachines` becomes `undefined`. Combined with the cell rendering fix suggested above, this will correctly show DASH.
However, if you want to distinguish "user has no access to VMIs" from "VMI load failed for other reasons" (to potentially show different UI feedback or skip API calls entirely), you could refactor to use `useWatchResourcesIfAllowed` for VMIs as well.
♻️ Optional: Use RBAC-gated watch for VMIs

```diff
-  const [vmis, vmisLoaded, vmisLoadError] = useK8sWatchResource<K8sResourceKind[]>(
-    isKubevirtPluginActive
-      ? {
-          isList: true,
-          groupVersionKind: VirtualMachineInstanceGroupVersionKind,
-        }
-      : undefined,
-  );
+  const VirtualMachineInstanceModel: K8sModel = {
+    apiGroup: 'kubevirt.io',
+    apiVersion: 'v1',
+    kind: 'VirtualMachineInstance',
+    plural: 'virtualmachineinstances',
+    // ...other fields as needed
+  };
+
+  const [vmis, vmisLoaded, vmisLoadError] = useWatchResourcesIfAllowed<K8sResourceKind[]>(
+    isKubevirtPluginActive ? VirtualMachineInstanceModel : null,
+  );
```

This would require either importing or defining a proper `K8sModel` for VMIs and adjusting the hook to accept `null` when disabled.
653ece0 to
0b961a0
Compare
|
@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid jira issue.
Details
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
|
@jeff-phillips-18: all tests passed! Full PR test history. Your PR dashboard.
Details
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
|
Thanks @jeff-phillips-18 for the quick fix and confirmation. The latest change looks good to me on my side |
|
@XiyunZhao: This PR has been marked as verified by
Details
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
Description
Add columns to the Nodes list view: `Machine set`, `MachineConfigPools`, and `VMs`.
Add filters for `Machine set` and `MachineConfigPools`.
Notes:
`MachineSet` or `ControlPlaneMachineSet`
Name
Status
Machine set
VMs
Pods
Memory
CPU
Screenshots
/cc @kybaker
Summary by CodeRabbit
New Features
Documentation / Localization
Chores
✏️ Tip: You can customize this high-level summary in your review settings.