
feat(workload): implement ServiceBatchUpdate for performance optimization #1573

Open

shivansh-gohem wants to merge 5 commits into kmesh-net:main from shivansh-gohem:feat/1549-batch-updates

Conversation

@shivansh-gohem
Contributor

This PR optimizes the workload processor by implementing a batch update mechanism for the Service BPF map. Previously, service updates were processed individually, causing high system call overhead during large mesh updates. This change aggregates updates and flushes them to the kernel in a single syscall using ServiceBatchUpdate.

Fixes: #1549
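For reviewers who have not used a batched map API before, here is an illustrative sketch of what ServiceBatchUpdate can look like. This is not the code in this PR: it assumes the cilium/ebpf library (whose Map.BatchUpdate wraps the BPF_MAP_UPDATE_BATCH syscall) and uses stand-in ServiceKey/ServiceValue structs in place of Kmesh's real service map layouts.

```go
// Illustrative sketch only; not the PR's actual implementation.
package bpfcache

import "github.com/cilium/ebpf"

// ServiceKey and ServiceValue are stand-ins for the real service map layouts.
type ServiceKey struct {
	ServiceID uint32
}

type ServiceValue struct {
	EndpointCount uint32
	LbPolicy      uint32
}

type Cache struct {
	serviceMap *ebpf.Map
}

// ServiceBatchUpdate writes all key/value pairs with a single
// BPF_MAP_UPDATE_BATCH syscall instead of one syscall per entry.
// It returns how many entries the kernel accepted.
func (c *Cache) ServiceBatchUpdate(keys []ServiceKey, values []ServiceValue) (int, error) {
	if len(keys) == 0 {
		return 0, nil
	}
	return c.serviceMap.BatchUpdate(keys, values, &ebpf.BatchOptions{})
}
```

One BatchUpdate call replaces N individual Update calls, which is where the syscall savings come from. Note that kernels without batch map support will return an error from such a call, so a per-entry fallback may still be worth keeping.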

Key Changes

(i) pkg/controller/workload/bpfcache: Added ServiceBatchUpdate to support bulk map operations.

(ii) pkg/controller/workload/workload_processor.go:

-> Refactored handleServicesAndWorkloads to initialize batch slices.

-> Updated handleService and updateServiceMap to append to batch slices instead of triggering immediate BPF updates (see the processor sketch after this list).

-> Implemented logic to flush all collected updates in one ServiceBatchUpdate call.

(iii) pkg/controller/workload/workload_processor_test.go: Updated unit tests to align with the new function signatures (passing nil for batch arguments where appropriate).
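To make the "append, then flush" flow concrete, here is a minimal sketch of the processor side. It is not the PR's actual code: the Processor fields and the queueServiceUpdate/flushServices helpers are hypothetical names, and the bpfcache types follow the sketch above.

```go
// Illustrative sketch only; names and structure are hypothetical.
package workload

import (
	"fmt"

	"kmesh.net/kmesh/pkg/controller/workload/bpfcache"
)

type Processor struct {
	bpf *bpfcache.Cache

	// Pending entries appended while services are handled, flushed once
	// at the end of a batch of address updates.
	pendingKeys   []bpfcache.ServiceKey
	pendingValues []bpfcache.ServiceValue
}

// queueServiceUpdate records one update instead of writing it to the map immediately.
func (p *Processor) queueServiceUpdate(k bpfcache.ServiceKey, v bpfcache.ServiceValue) {
	p.pendingKeys = append(p.pendingKeys, k)
	p.pendingValues = append(p.pendingValues, v)
}

// flushServices pushes every queued update to the kernel in one syscall.
func (p *Processor) flushServices() error {
	if len(p.pendingKeys) == 0 {
		return nil
	}
	n, err := p.bpf.ServiceBatchUpdate(p.pendingKeys, p.pendingValues)
	if err != nil {
		return fmt.Errorf("service batch update failed after %d entries: %w", n, err)
	}
	// Reuse the backing arrays for the next batch.
	p.pendingKeys = p.pendingKeys[:0]
	p.pendingValues = p.pendingValues[:0]
	return nil
}
```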

Verification

(i) Unit Tests: Ran go test -v ./pkg/controller/workload/... (all passed).

(ii) Linting: Ran golangci-lint (no errors).

(iii) Manual Check: Verified the BPF map update logic.

@kmesh-bot
Collaborator

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign nlgwcy for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gemini-code-assist

Summary of Changes

Hello @shivansh-gohem, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the efficiency of the workload processor by transitioning from individual service updates to a batched approach. By collecting multiple service changes and applying them to the BPF map in a single operation, the system call overhead is drastically reduced, leading to better performance, especially when dealing with a high volume of service updates in the mesh.

Highlights

  • Performance Optimization: Implemented a batch update mechanism for the Service BPF map, aggregating multiple service updates into a single syscall to reduce overhead during large mesh updates.
  • New API for Batch Updates: Introduced a new ServiceBatchUpdate method in pkg/controller/workload/bpfcache/service.go to support bulk operations on the BPF map.
  • Refactored Service Handling Logic: Modified handleServicesAndWorkloads, handleService, and updateServiceMap in pkg/controller/workload/workload_processor.go to collect service updates into batch slices, deferring the actual BPF map write until all updates are processed.
  • Updated Unit Tests: Adjusted existing unit tests in pkg/controller/workload/workload_processor_test.go to align with the new function signatures for handleService and updateServiceMap (see the sketch after this list).
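The tests passing nil for the batch arguments suggests the refactored functions can work both with and without a batch. Continuing the hypothetical Processor from the earlier sketch, here is one way such a dual-mode updateServiceMap could behave; the actual signature in the PR may well differ.

```go
// Illustrative sketch only; the real updateServiceMap signature may differ.
func (p *Processor) updateServiceMap(k bpfcache.ServiceKey, v bpfcache.ServiceValue,
	batchKeys *[]bpfcache.ServiceKey, batchValues *[]bpfcache.ServiceValue) error {
	if batchKeys == nil || batchValues == nil {
		// No batch supplied (e.g. from a unit test): write this single entry now.
		_, err := p.bpf.ServiceBatchUpdate([]bpfcache.ServiceKey{k}, []bpfcache.ServiceValue{v})
		return err
	}
	// Defer the kernel write; the caller flushes the whole batch later.
	*batchKeys = append(*batchKeys, k)
	*batchValues = append(*batchValues, v)
	return nil
}
```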







@gemini-code-assist bot left a comment


Code Review

This pull request successfully implements a batch update mechanism for service BPF maps, addressing the performance optimization goal outlined in the description. The ServiceBatchUpdate function has been added to bpfcache/service.go, and its usage is correctly integrated into handleServicesAndWorkloads and updateServiceMap in workload_processor.go. The changes ensure that service updates are aggregated and flushed in a single syscall, which is a significant improvement for large mesh updates. Unit tests have been appropriately updated to reflect the new function signatures. The overall approach is sound and directly tackles the identified performance bottleneck.

- Implemented ServiceBatchUpdate in bpfcache to reduce system calls
- Refactored handleServicesAndWorkloads to aggregate updates
- Updated unit tests to support new batch signature

Signed-off-by: Shivansh Sahu <sahushivansh142@gmail.com>
@codecov

codecov bot commented Jan 21, 2026

Codecov Report

❌ Patch coverage is 63.15789% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 39.66%. Comparing base (3018401) to head (2238606).
⚠️ Report is 1 commit behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| pkg/controller/workload/workload_processor.go | 56.25% | 2 Missing and 5 partials ⚠️ |

| Files with missing lines | Coverage Δ |
| --- | --- |
| pkg/controller/workload/bpfcache/service.go | 30.00% <100.00%> (+30.00%) ⬆️ |
| pkg/controller/workload/workload_processor.go | 58.46% <56.25%> (+0.70%) ⬆️ |

... and 1 file with indirect coverage changes


Continue to review full report in Codecov by Sentry.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7c77714...2238606. Read the comment docs.


@shivansh-gohem force-pushed the feat/1549-batch-updates branch from 733884b to 687b261 on January 21, 2026, 16:54
@kmesh-bot added the size/L label and removed the size/M label on Jan 21, 2026
@shivansh-gohem
Contributor Author

/cc @LiZhenCheng9527 @hzxuzhonghu



Development

Successfully merging this pull request may close these issues.

[Feature/Optimization] Optimize kmesh-daemon CPU usage during massive xDS configuration updates
