add runbooks for new alerts #363
base: master
Conversation
@aruniiird @weirdwiz please do have a look.
Force-pushed from 9a36696 to 9773226
weirdwiz left a comment
needs some changes
alerts/openshift-container-storage-operator/ODFCorePodRestarted.md (2 outdated threads, resolved)
No mitigation section? Please add mitigation steps.
I am not sure what mitigation steps should be added here, so I left it empty for now.
@weirdwiz if you have any suggestions, we can discuss offline.
Mitigation for this is to either move workloads to other storage systems or (preferred) add more disks.
Ceph is one of the few storage systems that grows I/O performance linearly with capacity, so more disks = more performance.
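A minimal sketch of the "add more disks" path, assuming the default StorageCluster name `ocs-storagecluster` in the `openshift-storage` namespace (most installs also expose this as the "Add Capacity" action in the console); the new count value below is illustrative only:

```bash
# Inspect the current number of device sets (each count unit usually adds
# `replica` OSDs, typically 3)
oc get storagecluster ocs-storagecluster -n openshift-storage \
  -o jsonpath='{.spec.storageDeviceSets[0].count}{"\n"}'

# Bump the count by one to add another set of OSDs (value shown is an example)
oc patch storagecluster ocs-storagecluster -n openshift-storage --type json \
  -p '[{"op": "replace", "path": "/spec/storageDeviceSets/0/count", "value": 2}]'
```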
same here, add mitigation steps
The MTU runbook should mention how to verify jumbo frames work end-to-end
I am not sure about this; maybe we can work on it once you are back.
You can find many "Jumbo Frame" test instructions on the internet - for example this one:
https://blah.cloud/networks/test-jumbo-frames-working/
In the end you use ping with a certain ICMP payload size (which differs between OSs) and you tell the network stack not to fragment the packet (but send it whole).
As a mitigation, customers need to ensure the node network interfaces are configured for 9000 bytes AND that all switches in between the nodes also support 9000 bytes on their ports.
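A minimal sketch of that test on Linux nodes (the payload size assumes a 9000-byte MTU: 9000 minus 20 bytes of IP header and 8 bytes of ICMP header; the node name and IP are placeholders):

```bash
# From one node, send unfragmented ICMP packets sized for a 9000-byte MTU
# to another node; failures indicate a hop that does not pass jumbo frames
oc debug node/<node> -- chroot /host \
  ping -M do -s 8972 -c 3 <remote-node-internal-ip>
```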
Existing runbooks reference shared helper documents like:
- helpers/podDebug.md
- helpers/troubleshootCeph.md
- helpers/gatherLogs.md
- helpers/networkConnectivity.md
The new runbooks embed all commands inline instead of referencing these. Consider using helper links for consistency and maintainability.
   ping <node-internal-ip>
   ```
4. Use mtr or traceroute to analyze path and hops.
5. Verify if the node is under high CPU or network load:
Suggested change:
5. Verify if the node is under high CPU or network load:
   oc debug node/<node>
   top -b -n 1 | head -20
   sar -u 1 5
   sar -n DEV 1 5
   ```
3. Use Prometheus to graph:
   ```prompql
Suggested change: fix the code fence language from `prompql` to `promql`.
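If graphing in the console is inconvenient, the same queries can also be run from the CLI; a sketch assuming the default monitoring stack (the `thanos-querier` route in `openshift-monitoring`) and an illustrative node-exporter metric:

```bash
# Query the in-cluster Prometheus API through the thanos-querier route
TOKEN=$(oc whoami -t)
HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://${HOST}/api/v1/query" \
  --data-urlencode 'query=rate(node_network_transmit_bytes_total[5m])'
```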
Force-pushed from 9773226 to 2deb710
@weirdwiz updated the PR except for the 2 comments; we can work on them once you are back.
Force-pushed from 2deb710 to 4efb275
## Impact
* Brief service interruption (e.g., MON restart may cause quorum re-election).
Service interruption sounds worse than it is... unless the MONs cannot agree on a quorum any more, there is no "downtime".
Instead, let's put all of the Impact points in relative terms... Since Ceph is very resilient, Pod restarts should only have an effect if they happen frequently (more than 10 times in a 5 min window).
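One quick way to see whether restarts are piling up like that (namespace assumed to be the usual `openshift-storage`):

```bash
# List ODF pods ordered by restart count of their first container;
# a pod climbing rapidly here is the one worth investigating
oc get pods -n openshift-storage \
  --sort-by='.status.containerStatuses[0].restartCount'
```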
## Impact
* Brief service interruption (e.g., MON restart may cause quorum re-election).
* OSD restart triggers PG peering and potential recovery.
Someone who doesn't know Ceph will not understand this :) (even though it is factually correct).
How do you like my proposal:
If OSDs are restarted frequently or do not start up within 5 minutes, the cluster might decide to rebalance the data onto other, more reliable disks. If this happens, the cluster will temporarily be slightly less performant.
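To see whether such a rebalance is actually happening, the cluster can be checked through the rook-ceph toolbox; a sketch assuming the toolbox pod is enabled (label `app=rook-ceph-tools` in `openshift-storage`):

```bash
# Locate the toolbox pod (must be enabled in the cluster)
TOOLS=$(oc get pod -n openshift-storage -l app=rook-ceph-tools -o name)

# Overall health; look for recovery/backfill activity in the output
oc rsh -n openshift-storage ${TOOLS} ceph status

# Per-pool recovery and client I/O rates
oc rsh -n openshift-storage ${TOOLS} ceph osd pool stats
```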
## Impact
* Increased I/O latency for RBD/CephFS clients.
RBD and CephFS are Ceph terms. Let's keep it simple and just call them Block, Object and File (all of these would be affected)
## Impact
* Increased I/O latency for RBD/CephFS clients.
* Slower OSD response times, risking heartbeat timeouts.
I don't think that's true. If the underlying storage is busy, the process should still be able to send heartbeats?!
Mitigation for this is to either move workloads to other storage systems or (preferred) add more disks.
Ceph is one of the few storage systems that grows I/O performance linearly with capacity, so more disks = more performance.
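Before adding disks, it can help to confirm the existing OSDs really are the bottleneck; a sketch using the rook-ceph toolbox (assumed to be enabled):

```bash
TOOLS=$(oc get pod -n openshift-storage -l app=rook-ceph-tools -o name)

# Per-OSD commit/apply latency in ms; consistently high values across all
# OSDs usually mean the cluster needs more (or faster) disks
oc rsh -n openshift-storage ${TOOLS} ceph osd perf
```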
5. Review Ceph monitor logs if the node hosts MONs:
   ```bash
   oc logs -l app=rook-ceph-mon -n openshift-storage
   ```
Another step could be to check switch / networking monitoring to see if any ports are too busy
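Switch-side monitoring is vendor-specific, but the node side can be checked directly; a sketch (the interface name is a placeholder, and `sar` is assumed available as in the runbook's other steps):

```bash
# Per-interface throughput on the node; an interface pinned near line rate
# points at a congested link or switch port
oc debug node/<node> -- chroot /host sar -n DEV 1 5

# Error/drop counters on the suspect interface
oc debug node/<node> -- chroot /host \
  sh -c 'ethtool -S <iface> | grep -iE "drop|discard|err"'
```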
## Diagnosis
1. Identify affected node(s):
Why do we have this step if we get the node name and IP in step #2 from the alert?
## Mitigation
1. Network tuning: Ensure jumbo frames (MTU ≥ 9000) are enabled end-to-end
Are you sure Jumbo Frames will help with latency? Why?
not exactly sure.
You can find many "Jumbo Frame" test instructions on the internet - for example this one:
https://blah.cloud/networks/test-jumbo-frames-working/
In the end you use ping with a certain ICMP payload size (which differs between OSs) and you tell the network stack not to fragment the packet (but send it whole).
As a mitigation, customers need to ensure the node network interfaces are configured for 9000 bytes AND that all switches in between the nodes also support 9000 bytes on their ports.
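To check the configured values before running the ping test, something like the following can be used (a sketch assuming the usual cluster network config object; `clusterNetworkMTU` reflects the overlay MTU the cluster network operator is using):

```bash
# MTU the cluster network operator thinks it is running with
oc get network.config cluster -o jsonpath='{.status.clusterNetworkMTU}{"\n"}'

# MTU actually set on each interface of a given node
oc debug node/<node> -- chroot /host ip -o link show | awk '{print $2, $4, $5}'
```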
## Mitigation
1. Short term: Throttle non-essential traffic on the node.
how?
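One possible answer, sketched here as an assumption rather than a tested recommendation: the Kubernetes bandwidth annotations (honored by OVN-Kubernetes / OpenShift SDN) can cap a noisy, non-essential workload. They only take effect when pods are (re)created, so they go on the pod template; the deployment name, namespace, and limits below are purely illustrative:

```bash
# Cap ingress/egress bandwidth of a non-essential deployment to 100 Mbit/s;
# patching the pod template triggers a rollout so the CNI applies the limits
oc patch deployment <noisy-app> -n <its-namespace> --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{
      "kubernetes.io/ingress-bandwidth":"100M",
      "kubernetes.io/egress-bandwidth":"100M"}}}}}'
```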
Force-pushed from 4efb275 to 0f8b761
There are new alerts introduced for ODF health score calculation. This commit adds runbooks for each of them. Signed-off-by: yati1998 <ypadia@redhat.com>
Force-pushed from 0f8b761 to 5ca12a0
@yati1998: all tests passed! Full PR test history. Your PR dashboard.
Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
mulbc left a comment
lgtm - small error corrections where I added recommendations. Aside from these I'm good with the PR
* If OSDs are restarted frequently or do not start up within 5 minutes,
  the cluster might decide to rebalance the data onto other more reliable
  disks.If this happens, the cluster will temporarily be slightly less
Suggested change: `disks.If this happens, ...` → `disks. If this happens, ...` (add the missing space after the period).
## Mitigation
* Increase more disks to enhance the performance.
Suggested change: `* Increase more disks to enhance the performance.` → `* Add more disks to the cluster to enhance the performance.`
## Diagnosis
1. From the alert, note the instance (node IP).
2. Confirm the node does not run OSDs:
If this is an OSD node and triggers the >100ms alert, we're in trouble :P
So I think this check does not provide any value (we're not doing anything with the data we gather with this)
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: mulbc, yati1998. The full list of commands accepted by this bot can be found here.
Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing …
There are new alerts introduced for ODF health score calculation. This commit adds runbooks for each of them.