
Optimization: Change clean-up of pooled connections to be conditional#46

Open
jduo wants to merge 2 commits into valkey-io:main from jduo:optimize-connection-reuse

Conversation

@jduo

@jduo jduo commented Jan 14, 2026

Only send UNWATCH requests when returning a connection to the pool if there have actually been watched keys.
This improves performance by avoiding a network round trip when executing a non-batched operation (for scalar operations it halves the number of round trips).
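A minimal self-contained sketch of the idea, using stand-in names rather than the actual Glide/Spring Data classes: the pooled connection tracks whether WATCH was ever issued and only sends UNWATCH on release when there is state to clear.

```java
import java.util.ArrayList;
import java.util.List;

public class ConditionalCleanup {
    /** Stand-in for a pooled connection; records the commands it "sends". */
    static class PooledConnection {
        final List<String> sentCommands = new ArrayList<>();
        private boolean hasWatchedKeys = false;

        void watch(String key) {
            sentCommands.add("WATCH " + key);
            hasWatchedKeys = true;
        }

        /** Called when returning the connection to the pool. */
        void release() {
            // Only pay the network round trip when keys were actually watched.
            if (hasWatchedKeys) {
                sentCommands.add("UNWATCH");
                hasWatchedKeys = false;
            }
        }
    }

    public static void main(String[] args) {
        PooledConnection clean = new PooledConnection();
        clean.release(); // no WATCH issued -> no UNWATCH sent
        System.out.println(clean.sentCommands); // []

        PooledConnection watched = new PooledConnection();
        watched.watch("k1");
        watched.release(); // WATCH issued -> UNWATCH sent exactly once
        System.out.println(watched.sentCommands); // [WATCH k1, UNWATCH]
    }
}
```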

  • You have read the Spring Data contribution guidelines.
  • You use the code formatters provided here and have them applied to your changes. Don't submit any formatting-related changes.
  • You submit test cases (unit or integration tests) that back your changes.
  • You added yourself as author in the headers of the classes you touched. Amend the date range in the Apache license header if needed. For new types, add the license header (copy from another file and set the current year only).

Only send UNWATCH requests when returning a connection to the pool if there have
actually been watched keys.

Signed-off-by: James Duong <duong.james@gmail.com>
@jduo jduo closed this Jan 14, 2026
@jduo jduo force-pushed the optimize-connection-reuse branch from 9f326a1 to 52d927e on January 14, 2026 19:15
@jduo jduo reopened this Jan 14, 2026
@jduo
Author

jduo commented Jan 14, 2026

When running the performance test locally, this roughly doubled the ops/second.

Signed-off-by: James Duong <duong.james@gmail.com>
@jeremyprime jeremyprime requested a review from ikolomi January 14, 2026 22:09
@ikolomi
Collaborator

ikolomi commented Jan 15, 2026

There is a reason I did it this way: to be on the safe side.

  1. What does this PR improve?
  2. This is a very intricate piece of code; please add, or point to, an integration test that ensures watched keys are removed.

I will be looking into this a bit more, to check whether there could be situations in which WATCH is issued without updating the watchedKeys.

@jduo
Author

jduo commented Jan 15, 2026

For reference, these are the numbers I got running the benchmark and valkey-server on my local machine before/after this patch (ops/s):

| Client                  | SET  | GET  | DELETE |
| ----------------------- | ---- | ---- | ------ |
| ValkeyGlide             | 2936 | 2955 | 3141   |
| ValkeyGlide (optimized) | 4962 | 5889 | 5868   |

@jduo
Author

jduo commented Jan 15, 2026

An alternative way to implement this if we want to be extra cautious:

  • Asynchronously return connections to the pool instead of blocking right away. This is basically just appending the factory.releaseConnection(...) step to the future returned by GlideClient.

This would have the effect of issuing the cleanup on a background thread without blocking the app thread from continuing operations. The app thread might need to create a new connection for the first few operations, because the returned connections wouldn't finish being cleaned up by the time they are needed again, but after a few are in the pool they should be readily reusable.
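A minimal sketch of this asynchronous-release idea, where executeCommand and releaseConnection are hypothetical stand-ins for the GlideClient command future and factory.releaseConnection(...): the cleanup step is chained onto the future rather than awaited by the caller.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncRelease {
    static final ExecutorService background = Executors.newSingleThreadExecutor();

    // Stand-in for the async command future returned by the client.
    static CompletableFuture<String> executeCommand() {
        return CompletableFuture.supplyAsync(() -> "OK", background);
    }

    // Stand-in for factory.releaseConnection(...): conditionally clean up
    // (e.g. send UNWATCH), then return the connection to the pool.
    static void releaseConnection(Object conn) {
        // cleanup work happens off the app thread
    }

    public static void main(String[] args) throws Exception {
        Object conn = new Object();
        // Append the release step instead of blocking the app thread on it:
        CompletableFuture<String> result = executeCommand()
            .whenCompleteAsync((r, err) -> releaseConnection(conn), background);
        System.out.println(result.get()); // OK
        background.shutdown();
    }
}
```

The app thread gets its result as soon as the command completes; the pool sees the connection slightly later, once the background release step has run.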

@jduo
Author

jduo commented Jan 18, 2026


Demo of this path in #46.

@jeremyprime jeremyprime self-requested a review January 19, 2026 17:01
