feat(workload): implement ServiceBatchUpdate for performance optimization #1573
shivansh-gohem wants to merge 5 commits into kmesh-net:main
Conversation
Summary of Changes (Gemini Code Assist): This pull request improves the efficiency of the workload processor by transitioning from individual service updates to a batched approach. By collecting multiple service changes and applying them to the BPF map in a single operation, system call overhead is drastically reduced, which improves performance when handling a high volume of service updates in the mesh.
Force-pushed f369a1d to 8e4cf93
Code Review
This pull request successfully implements a batch update mechanism for service BPF maps, addressing the performance optimization goal outlined in the description. The ServiceBatchUpdate function has been added to bpfcache/service.go, and its usage is correctly integrated into handleServicesAndWorkloads and updateServiceMap in workload_processor.go. The changes ensure that service updates are aggregated and flushed in a single syscall, which is a significant improvement for large mesh updates. Unit tests have been appropriately updated to reflect the new function signatures. The overall approach is sound and directly tackles the identified performance bottleneck.
- Implemented ServiceBatchUpdate in bpfcache to reduce system calls
- Refactored handleServicesAndWorkloads to aggregate updates
- Updated unit tests to support new batch signature

Signed-off-by: Shivansh Sahu <sahushivansh142@gmail.com>
Force-pushed 7e291a3 to 59b0b2b
Signed-off-by: Shivansh Sahu <sahushivansh142@gmail.com>
Force-pushed 733884b to 687b261
Signed-off-by: Shivansh Sahu <sahushivansh142@gmail.com>
This PR optimizes the workload processor by implementing a batch update mechanism for the Service BPF map. Previously, service updates were processed individually, causing high system call overhead during large mesh updates. This change aggregates updates and flushes them to the kernel in a single syscall using ServiceBatchUpdate.
Fixes: #1549
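The syscall saving described above can be illustrated with a minimal, self-contained sketch. The types and the fake map below are hypothetical stand-ins (not Kmesh's real `bpfcache` types): the counter plays the role of the kernel boundary, so N individual updates cost N "syscalls" while one batch flush costs exactly one.

```go
package main

import "fmt"

// serviceKey/serviceValue stand in for the real BPF map key/value
// types; the field names here are illustrative only.
type serviceKey struct{ ServiceID uint32 }
type serviceValue struct{ EndpointCount uint32 }

// fakeServiceMap mimics a BPF map wrapper and counts how many
// update operations cross the (simulated) kernel boundary.
type fakeServiceMap struct {
	data     map[serviceKey]serviceValue
	syscalls int
}

// Update applies a single entry: one syscall per service (the old path).
func (m *fakeServiceMap) Update(k serviceKey, v serviceValue) {
	m.syscalls++
	m.data[k] = v
}

// BatchUpdate applies all collected entries in one operation, analogous
// to flushing every pending change with a single batched map update.
func (m *fakeServiceMap) BatchUpdate(keys []serviceKey, vals []serviceValue) {
	m.syscalls++
	for i, k := range keys {
		m.data[k] = vals[i]
	}
}

func main() {
	old := &fakeServiceMap{data: map[serviceKey]serviceValue{}}
	batched := &fakeServiceMap{data: map[serviceKey]serviceValue{}}

	// Simulate a mesh update touching 1000 services.
	var keys []serviceKey
	var vals []serviceValue
	for i := uint32(0); i < 1000; i++ {
		k, v := serviceKey{i}, serviceValue{i % 8}
		old.Update(k, v) // old path: one syscall each
		keys = append(keys, k)
		vals = append(vals, v)
	}
	batched.BatchUpdate(keys, vals) // new path: one syscall total

	fmt.Println(old.syscalls, batched.syscalls) // 1000 vs 1
}
```

In the real implementation the flush would go through the kernel's batched map-update facility (e.g. `BPF_MAP_UPDATE_BATCH`), but the aggregation pattern is the same.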
Key Changes
1. `pkg/controller/workload/bpfcache`: Added `ServiceBatchUpdate` to support bulk map operations.
2. `pkg/controller/workload/workload_processor.go`:
   - Refactored `handleServicesAndWorkloads` to initialize batch slices.
   - Updated `handleService` and `updateServiceMap` to append to batch slices instead of triggering immediate BPF updates.
   - Implemented logic to flush all collected updates in one `ServiceBatchUpdate` call.
3. `pkg/controller/workload/workload_processor_test.go`: Updated unit tests to align with the new function signatures (passing `nil` for batch arguments where appropriate).
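The control flow from the Key Changes above can be sketched as follows. All names (`handleService`, `handleServicesAndWorkloads`, `ServiceBatchUpdate`, the key/value structs) mirror the PR's description but are simplified stand-ins, not the actual Kmesh signatures: handlers only append to batch slices, and a single flush at the end applies everything.

```go
package main

import "fmt"

type serviceKey struct{ ID uint32 }
type serviceValue struct{ Backends uint32 }

// ServiceBatchUpdate stands in for the bpfcache helper that flushes
// every collected entry to the BPF map in one operation. Here it only
// validates the batch and reports how many entries it would write.
func ServiceBatchUpdate(keys []serviceKey, vals []serviceValue) (int, error) {
	if len(keys) != len(vals) {
		return 0, fmt.Errorf("keys/values length mismatch: %d vs %d", len(keys), len(vals))
	}
	// the real helper would issue one batched BPF map update here
	return len(keys), nil
}

// handleService no longer writes to the map directly; it only appends
// the pending entry to the shared batch slices.
func handleService(id, backends uint32, keys *[]serviceKey, vals *[]serviceValue) {
	*keys = append(*keys, serviceKey{id})
	*vals = append(*vals, serviceValue{backends})
}

func handleServicesAndWorkloads(services map[uint32]uint32) (int, error) {
	// initialize batch slices once per processing pass
	keys := make([]serviceKey, 0, len(services))
	vals := make([]serviceValue, 0, len(services))
	for id, backends := range services {
		handleService(id, backends, &keys, &vals)
	}
	// flush all collected updates in one call
	return ServiceBatchUpdate(keys, vals)
}

func main() {
	n, err := handleServicesAndWorkloads(map[uint32]uint32{1: 3, 2: 5})
	fmt.Println(n, err) // 2 entries flushed, no error
}
```

Passing `nil` for both slices is a valid empty batch under this shape, which is consistent with the tests passing `nil` for batch arguments where no flush is expected.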
Verification
1. Unit tests: Ran `go test -v ./pkg/controller/workload/...` (all passed).
2. Linting: Ran `golangci-lint` (no errors).
3. Manual check: Verified BPF map update logic.