
Conversation

@joamaki (Contributor) commented Jan 19, 2026

Add Reconciler.WaitUntilReconciled method to allow waiting for the reconciler to catch up processing to a given revision.

Example usage:

```
wtxn := db.WriteTxn(table)
table.Insert(wtxn, &Obj{ID: 1, Status: reconciler.StatusPending()})
table.Insert(wtxn, &Obj{ID: 2, Status: reconciler.StatusPending()})
table.Insert(wtxn, &Obj{ID: 3, Status: reconciler.StatusPending()})
revToWaitFor := table.Revision(wtxn)
wtxn.Commit()

// Block until the reconciler has caught up to [revToWaitFor] or [ctx]
// is cancelled.
rev, retryLowWatermark, err := myReconciler.WaitUntilReconciled(ctx, revToWaitFor)

// [rev] is the revision up to which we reconciled (can be past [revToWaitFor]).
// [retryLowWatermark] is the lowest revision in the retry queue.
// [err] is non-nil if [ctx] is cancelled.
```
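A hedged sketch of how a caller might combine these return values; the `waitFullyReconciled` helper is hypothetical, and treating a zero watermark as "retry queue empty" is an assumption, not something this PR specifies:

```
// Hypothetical helper for the example's *Obj table: block until everything
// up to revToWaitFor has been processed, then report whether any object at
// or below that revision failed and is still queued for retry.
func waitFullyReconciled(ctx context.Context, r reconciler.Reconciler[*Obj], revToWaitFor statedb.Revision) (retriesPending bool, err error) {
	// WaitUntilReconciled returns once processing has caught up to
	// revToWaitFor; the first return value may be past it, so we ignore it.
	_, retryLowWatermark, err := r.WaitUntilReconciled(ctx, revToWaitFor)
	if err != nil {
		return false, err // ctx was cancelled
	}
	// A non-zero watermark at or below revToWaitFor means some change we
	// waited for failed to reconcile and is queued for retry.
	return retryLowWatermark != 0 && retryLowWatermark <= revToWaitFor, nil
}
```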

@joamaki joamaki requested a review from gandro January 19, 2026 14:28
```
// table changes up to untilRevision. Returns ctx.Err() if the context
// is cancelled.
// Note: errors from Update/Delete are treated as reconciled.
WaitUntilReconciled(ctx context.Context, untilRevision statedb.Revision) error
```
@joamaki (Contributor, Author) commented:

Do we have a use-case for waiting for objects to be reconciled successfully (e.g. all marked StatusDone) up to a given revision?

It would basically require (a) waiting for the reconciler to catch up to a given revision and (b) waiting until there's nothing in the retry queue whose "original revision" is lower than untilRevision. That is much more complicated than what is being done here, so it would be nice not to have to support that.

@gandro (Member) replied:

As opposed to ignoring errors, you mean? The main use case for WaitUntilReconciled that I had in the past was always with regard to a BPF map. The goal being to ensure that the contents of the StateDB table have been pushed down to the BPF map and are visible to a BPF program.

So if there were transient errors (that are expected to resolve quickly), then yeah, in such a case it would be "more correct" to wait for those transient errors to resolve before continuing. I'm not sure if our current bpf.NewMapOps implementation can return transient errors, or if all errors are basically persistent.

For persistent errors (that can basically only be resolved by retracting the update), I don't think there's much point in trying to wait for them. I would expect callers of an error-aware WaitUntilReconciled to time out in such a case anyway.

Having said that, just waiting for nothing to be pending anymore for a certain revision is already a big improvement over the status quo, so I'm already happy with this as is.

@github-actions bot commented Jan 19, 2026

```
$ make
go build ./...
go: downloading github.com/cilium/hive v0.0.0-20250731144630-28e7a35ed227
go: downloading go.yaml.in/yaml/v3 v3.0.3
go: downloading golang.org/x/time v0.5.0
go: downloading github.com/spf13/cobra v1.8.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/cilium/stream v0.0.0-20240209152734-a0792b51812d
go: downloading github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de
go: downloading github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
go: downloading github.com/mitchellh/mapstructure v1.5.0
go: downloading go.uber.org/dig v1.17.1
go: downloading golang.org/x/term v0.16.0
go: downloading github.com/spf13/viper v1.18.2
go: downloading golang.org/x/sys v0.17.0
go: downloading golang.org/x/tools v0.17.0
go: downloading github.com/spf13/cast v1.6.0
go: downloading github.com/fsnotify/fsnotify v1.7.0
go: downloading github.com/sagikazarmark/slog-shim v0.1.0
go: downloading github.com/spf13/afero v1.11.0
go: downloading github.com/subosito/gotenv v1.6.0
go: downloading github.com/hashicorp/hcl v1.0.0
go: downloading gopkg.in/ini.v1 v1.67.0
go: downloading github.com/magiconair/properties v1.8.7
go: downloading github.com/pelletier/go-toml/v2 v2.1.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading golang.org/x/text v0.14.0
STATEDB_VALIDATE=1 go test ./... -cover -vet=all -test.count 1
go: downloading github.com/stretchr/testify v1.8.4
go: downloading go.uber.org/goleak v1.3.0
go: downloading golang.org/x/exp v0.0.0-20240119083558-1b970713d09a
go: downloading github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2
ok  	github.com/cilium/statedb	412.103s	coverage: 78.5% of statements
ok  	github.com/cilium/statedb/index	0.005s	coverage: 28.7% of statements
ok  	github.com/cilium/statedb/internal	0.014s	coverage: 42.9% of statements
ok  	github.com/cilium/statedb/lpm	4.311s	coverage: 77.9% of statements
ok  	github.com/cilium/statedb/part	60.972s	coverage: 87.5% of statements
ok  	github.com/cilium/statedb/reconciler	0.297s	coverage: 92.3% of statements
	github.com/cilium/statedb/reconciler/benchmark		coverage: 0.0% of statements
	github.com/cilium/statedb/reconciler/example		coverage: 0.0% of statements
go test -race ./... -test.count 1
ok  	github.com/cilium/statedb	39.213s
ok  	github.com/cilium/statedb/index	1.014s
ok  	github.com/cilium/statedb/internal	1.033s
ok  	github.com/cilium/statedb/lpm	2.797s
ok  	github.com/cilium/statedb/part	36.233s
ok  	github.com/cilium/statedb/reconciler	1.370s
?   	github.com/cilium/statedb/reconciler/benchmark	[no test files]
?   	github.com/cilium/statedb/reconciler/example	[no test files]
go test ./... -bench . -benchmem -test.run xxx
goos: linux
goarch: amd64
pkg: github.com/cilium/statedb
cpu: AMD EPYC 7763 64-Core Processor                
BenchmarkDB_WriteTxn_1-4                      	  689313	      1665 ns/op	    600550 objects/sec	    1000 B/op	      16 allocs/op
BenchmarkDB_WriteTxn_10-4                     	 1707465	       703.8 ns/op	   1420819 objects/sec	     520 B/op	       8 allocs/op
BenchmarkDB_WriteTxn_100-4                    	 2201055	       549.9 ns/op	   1818437 objects/sec	     490 B/op	       7 allocs/op
BenchmarkDB_WriteTxn_1000-4                   	 1953026	       612.5 ns/op	   1632721 objects/sec	     447 B/op	       7 allocs/op
BenchmarkDB_WriteTxn_100_SecondaryIndex-4     	  800332	      1310 ns/op	    763072 objects/sec	    1007 B/op	      20 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_100Tables-4   	  878740	      1184 ns/op	    1112 B/op	       5 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_1Table-4      	 1625504	       737.2 ns/op	     224 B/op	       5 allocs/op
BenchmarkDB_NewWriteTxn-4                     	 1645346	       705.5 ns/op	     200 B/op	       4 allocs/op
BenchmarkDB_WriteTxnCommit100-4               	 1000000	      1173 ns/op	    1096 B/op	       5 allocs/op
BenchmarkDB_NewReadTxn-4                      	641406246	         1.867 ns/op	       0 B/op	       0 allocs/op
BenchmarkDB_Modify-4                          	    1696	    688638 ns/op	   1452143 objects/sec	  479652 B/op	    8072 allocs/op
BenchmarkDB_GetInsert-4                       	    1570	    765356 ns/op	   1306581 objects/sec	  455640 B/op	    8072 allocs/op
BenchmarkDB_RandomInsert-4                    	    1921	    638673 ns/op	   1565746 objects/sec	  447630 B/op	    7072 allocs/op
BenchmarkDB_RandomReplace-4                   	     459	   2638072 ns/op	    379065 objects/sec	 1924794 B/op	   29102 allocs/op
BenchmarkDB_SequentialInsert-4                	    1905	    622920 ns/op	   1605342 objects/sec	  447631 B/op	    7072 allocs/op
BenchmarkDB_SequentialInsert_Prefix-4         	     472	   2552820 ns/op	    391724 objects/sec	 3563170 B/op	   45541 allocs/op
BenchmarkDB_Changes_Baseline-4                	    1604	    751785 ns/op	   1330168 objects/sec	  507775 B/op	    9163 allocs/op
BenchmarkDB_Changes-4                         	     933	   1324429 ns/op	    755043 objects/sec	  709497 B/op	   12314 allocs/op
BenchmarkDB_RandomLookup-4                    	   22256	     53988 ns/op	  18522640 objects/sec	       0 B/op	       0 allocs/op
BenchmarkDB_SequentialLookup-4                	   26757	     45255 ns/op	  22097073 objects/sec	       0 B/op	       0 allocs/op
BenchmarkDB_Prefix_SecondaryIndex-4           	    6711	    164499 ns/op	   6079077 objects/sec	  124920 B/op	    1025 allocs/op
BenchmarkDB_FullIteration_All-4               	    1030	   1134810 ns/op	  88120510 objects/sec	     104 B/op	       4 allocs/op
BenchmarkDB_FullIteration_Prefix-4            	     960	   1207533 ns/op	  82813467 objects/sec	     136 B/op	       5 allocs/op
BenchmarkDB_FullIteration_Get-4               	     223	   5377458 ns/op	  18596148 objects/sec	       0 B/op	       0 allocs/op
BenchmarkDB_FullIteration_Get_Secondary-4     	     100	  10111489 ns/op	   9889740 objects/sec	       0 B/op	       0 allocs/op
BenchmarkDB_FullIteration_ReadTxnGet-4        	     218	   5480166 ns/op	  18247621 objects/sec	       0 B/op	       0 allocs/op
BenchmarkDB_PropagationDelay-4                	  621520	      1784 ns/op	        15.00 50th_µs	        20.00 90th_µs	        50.00 99th_µs	    1120 B/op	      19 allocs/op
BenchmarkDB_WriteTxn_100_LPMIndex-4           	  515412	      2332 ns/op	    428775 objects/sec	    1778 B/op	      37 allocs/op
BenchmarkDB_WriteTxn_1_LPMIndex-4             	  133654	     15050 ns/op	     66446 objects/sec	   15781 B/op	      81 allocs/op
BenchmarkDB_LPMIndex_Get-4                    	     403	   2966228 ns/op	   3371285 objects/sec	       0 B/op	       0 allocs/op
BenchmarkWatchSet_4-4                         	 2227581	       532.0 ns/op	     296 B/op	       4 allocs/op
BenchmarkWatchSet_16-4                        	  737546	      1591 ns/op	    1096 B/op	       5 allocs/op
BenchmarkWatchSet_128-4                       	   86974	     13736 ns/op	    8904 B/op	       5 allocs/op
BenchmarkWatchSet_1024-4                      	    8810	    135877 ns/op	   73743 B/op	       5 allocs/op
PASS
ok  	github.com/cilium/statedb	43.611s
PASS
ok  	github.com/cilium/statedb/index	0.004s
goos: linux
goarch: amd64
pkg: github.com/cilium/statedb/internal
cpu: AMD EPYC 7763 64-Core Processor                
Benchmark_SortableMutex-4   	 6209107	       193.0 ns/op	       0 B/op	       0 allocs/op
PASS
ok  	github.com/cilium/statedb/internal	1.202s
goos: linux
goarch: amd64
pkg: github.com/cilium/statedb/lpm
cpu: AMD EPYC 7763 64-Core Processor                
Benchmark_txn_insert/batchSize=1-4         	    1903	    633413 ns/op	   1578749 objects/sec	  838410 B/op	   13975 allocs/op
Benchmark_txn_insert/batchSize=10-4        	    3092	    391412 ns/op	   2554853 objects/sec	  385196 B/op	    6668 allocs/op
Benchmark_txn_insert/batchSize=100-4       	    3268	    374564 ns/op	   2669768 objects/sec	  345614 B/op	    6027 allocs/op
Benchmark_txn_delete/batchSize=1-4         	    1525	    761756 ns/op	   1312756 objects/sec	 1286470 B/op	   13976 allocs/op
Benchmark_txn_delete/batchSize=10-4        	    3223	    391764 ns/op	   2552558 objects/sec	  372417 B/op	    5769 allocs/op
Benchmark_txn_delete/batchSize=100-4       	    3530	    340421 ns/op	   2937541 objects/sec	  286753 B/op	    5038 allocs/op
Benchmark_LPM_Lookup-4                     	    7791	    151401 ns/op	   6604957 objects/sec	       0 B/op	       0 allocs/op
Benchmark_LPM_All-4                        	  135538	      8980 ns/op	 111363964 objects/sec	      32 B/op	       1 allocs/op
Benchmark_LPM_Prefix-4                     	  133660	      9076 ns/op	 110181428 objects/sec	      32 B/op	       1 allocs/op
Benchmark_LPM_LowerBound-4                 	  243444	      4863 ns/op	 102813488 objects/sec	     288 B/op	       2 allocs/op
PASS
ok  	github.com/cilium/statedb/lpm	12.072s
goos: linux
goarch: amd64
pkg: github.com/cilium/statedb/part
cpu: AMD EPYC 7763 64-Core Processor                
Benchmark_Uint64Map_Random-4                  	    1575	    735370 ns/op	   1359859 items/sec	 2526723 B/op	    6035 allocs/op
Benchmark_Uint64Map_Sequential-4              	    1885	    620206 ns/op	   1612368 items/sec	 2216726 B/op	    5754 allocs/op
Benchmark_Uint64Map_Sequential_Insert-4       	    2162	    563368 ns/op	   1775039 items/sec	 2208720 B/op	    4753 allocs/op
Benchmark_Uint64Map_Sequential_Txn_Insert-4   	   10000	    115560 ns/op	   8653506 items/sec	   86352 B/op	    2028 allocs/op
Benchmark_Uint64Map_Random_Insert-4           	    1827	    657351 ns/op	   1521257 items/sec	 2518999 B/op	    5034 allocs/op
Benchmark_Uint64Map_Random_Txn_Insert-4       	    6703	    172703 ns/op	   5790281 items/sec	  118480 B/op	    2401 allocs/op
Benchmark_Insert_RootOnlyWatch-4              	   10000	    106909 ns/op	   9353728 objects/sec	   71504 B/op	    2033 allocs/op
Benchmark_Insert-4                            	    7855	    154531 ns/op	   6471195 objects/sec	  186937 B/op	    3060 allocs/op
Benchmark_Modify-4                            	   13387	     89680 ns/op	  11150799 objects/sec	   58224 B/op	    1007 allocs/op
Benchmark_GetInsert-4                         	    9255	    124193 ns/op	   8051996 objects/sec	   58224 B/op	    1007 allocs/op
Benchmark_Replace-4                           	17038954	        70.80 ns/op	  14124507 objects/sec	      48 B/op	       1 allocs/op
Benchmark_Replace_RootOnlyWatch-4             	 2942209	       411.2 ns/op	   2432168 objects/sec	     211 B/op	       2 allocs/op
Benchmark_txn_1-4                             	 5617563	       203.4 ns/op	   4915422 objects/sec	     168 B/op	       3 allocs/op
Benchmark_txn_10-4                            	10708483	       111.7 ns/op	   8953607 objects/sec	      86 B/op	       2 allocs/op
Benchmark_txn_100-4                           	12569450	        95.57 ns/op	  10463864 objects/sec	      80 B/op	       2 allocs/op
Benchmark_txn_1000-4                          	10818282	       108.7 ns/op	   9201894 objects/sec	      65 B/op	       2 allocs/op
Benchmark_txn_delete_1-4                      	 4902568	       244.6 ns/op	   4088500 objects/sec	     664 B/op	       4 allocs/op
Benchmark_txn_delete_10-4                     	10244782	       115.0 ns/op	   8698508 objects/sec	     106 B/op	       1 allocs/op
Benchmark_txn_delete_100-4                    	11185138	       107.0 ns/op	   9346324 objects/sec	      47 B/op	       1 allocs/op
Benchmark_txn_delete_1000-4                   	13077106	        90.54 ns/op	  11045121 objects/sec	      24 B/op	       1 allocs/op
Benchmark_Get-4                               	   44545	     26969 ns/op	  37078966 objects/sec	       0 B/op	       0 allocs/op
Benchmark_All-4                               	  136245	     10030 ns/op	  99703261 objects/sec	       0 B/op	       0 allocs/op
Benchmark_Iterator_All-4                      	  114870	     10428 ns/op	  95892365 objects/sec	       0 B/op	       0 allocs/op
Benchmark_Iterator_Next-4                     	  158444	      7532 ns/op	 132759225 objects/sec	     896 B/op	       1 allocs/op
Benchmark_Hashmap_Insert-4                    	   14589	     82198 ns/op	  12165771 objects/sec	   74264 B/op	      20 allocs/op
Benchmark_Hashmap_Get_Uint64-4                	  136278	      8794 ns/op	 113716557 objects/sec	       0 B/op	       0 allocs/op
Benchmark_Hashmap_Get_Bytes-4                 	  111468	     10760 ns/op	  92935477 objects/sec	       0 B/op	       0 allocs/op
Benchmark_Delete_Random-4                     	      67	  17333617 ns/op	   5769137 objects/sec	 2111903 B/op	  102364 allocs/op
Benchmark_find16-4                            	240466989	         4.989 ns/op	       0 B/op	       0 allocs/op
Benchmark_findIndex16-4                       	100000000	        11.14 ns/op	       0 B/op	       0 allocs/op
Benchmark_find4-4                             	424073743	         2.830 ns/op	       0 B/op	       0 allocs/op
Benchmark_findIndex4-4                        	320141737	         3.739 ns/op	       0 B/op	       0 allocs/op
PASS
ok  	github.com/cilium/statedb/part	39.410s
PASS
ok  	github.com/cilium/statedb/reconciler	0.005s
?   	github.com/cilium/statedb/reconciler/benchmark	[no test files]
?   	github.com/cilium/statedb/reconciler/example	[no test files]
go run ./reconciler/benchmark -quiet
1000000 objects reconciled in 2.03 seconds (batch size 1000)
Throughput 491407.10 objects per second
817MB total allocated, 6015186 in-use objects, 338MB bytes in use
```

@joamaki joamaki marked this pull request as ready for review January 20, 2026 09:48
@joamaki joamaki requested a review from a team as a code owner January 20, 2026 09:48
@joamaki joamaki requested review from bimmlerd and removed request for a team January 20, 2026 09:48
@bimmlerd (Member) left a comment:

LGTM, few nits

@joamaki joamaki force-pushed the pr/joamaki/wait-until-reconciled branch from 8e11846 to 833ab84 on January 21, 2026 08:17
@gandro (Member) commented Jan 21, 2026

I looked again at the code that spawned my original issue #58.

One thing that is particular to that code:

- I suggested a callback exactly because I originally experimented with an approach where I had a condvar and one goroutine per "wait for revision" query. I think for the metrics use-case that is too much overhead.
- I'm fine with it being observable instead of a callback, but I'd like to avoid having to spawn one goroutine per queried revision.

I think it would be interesting if the progress tracker also exposed a method to get the reconciled revision. It could be as simple as returning the current revision from WaitUntilReconciled. That way I can avoid spawning one goroutine per revision and instead have a single goroutine that acts whenever a revision has been reconciled and updates the metrics for all revisions lower than or equal to it.

@joamaki (Contributor, Author) commented Jan 21, 2026

> I looked again at the code that spawned my original issue #58.
>
> One thing that is particular to that code:
>
> - I suggested a callback exactly because I originally experimented with an approach where I had a condvar and one goroutine per "wait for revision" query. I think for the metrics use-case that is too much overhead.
> - I'm fine with it being observable instead of a callback, but I'd like to avoid having to spawn one goroutine per queried revision.
>
> I think it would be interesting if the progress tracker also exposed a method to get the reconciled revision. It could be as simple as returning the current revision from WaitUntilReconciled. That way I can avoid spawning one goroutine per revision and instead have a single goroutine that acts whenever a revision has been reconciled and updates the metrics for all revisions lower than or equal to it.

Ah yeah good idea. I'll add the current revision to the return value.
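To illustrate the pattern (not code from this PR): a single goroutine can fan metrics out from the returned revision. `recordReconciled` is a hypothetical hook, and the sketch assumes the three-value return shown in the PR description:

```
// One goroutine drives all per-revision metrics: each iteration waits for
// the next not-yet-reconciled revision and then accounts for every tracked
// revision at or below the one returned.
go func() {
	next := statedb.Revision(1)
	for {
		rev, _, err := myReconciler.WaitUntilReconciled(ctx, next)
		if err != nil {
			return // ctx cancelled
		}
		recordReconciled(rev) // hypothetical: updates metrics for all revisions <= rev
		next = rev + 1
	}
}()
```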

@gandro (Member) commented Jan 21, 2026

Another idea that came to mind while looking at my client code:

While it is hard to handle a WaitForRevision that includes retries, how hard would it be to indicate that there are still retries pending with revision <= x? That way, WaitForRevision would return that it has reconciled up to revision x, but also indicate that there are still retries pending for revision x (or earlier). This gives the caller at least some information that they could potentially even bubble up to the user.

@joamaki joamaki force-pushed the pr/joamaki/wait-until-reconciled branch from 833ab84 to 96fb77d on January 21, 2026 08:43
@joamaki (Contributor, Author) commented Jan 21, 2026

> Another idea that came to mind while looking at my client code:
>
> While it is hard to handle a WaitForRevision that includes retries, how hard would it be to indicate that there are still retries pending with revision <= x? That way, WaitForRevision would return that it has reconciled up to revision x, but also indicate that there are still retries pending for revision x (or earlier). This gives the caller at least some information that they could potentially even bubble up to the user.

Ended up implementing this. It required some changes we'll need to think carefully about: it splits the committing of results into two write transactions, one to commit the normal incremental results and another to commit the retry results. This was needed since we only enqueue retries after we've done the CompareAndSwap, as there's no point queuing a retry if the object had changed in the meantime anyway. It also computes the low watermark revision of the retry queue every time the entry with the lowest revision is popped. I think this should be fine as it wouldn't be iterating over the whole queue that often, but it is something to think about. The other option is to have another priority queue storing by revision, but that seems excessive...
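Illustrative only, not the reconciler's actual retry queue: the low watermark is just the minimum "original revision" across queued retry items, rescanned when the lowest entry is popped. The `retryItem` shape here is an assumption:

```
// retryItem sketches what a retry queue entry might carry.
type retryItem struct {
	rev     statedb.Revision // revision of the change that failed
	retryAt time.Time        // when to retry; the queue's priority key
}

// lowWatermark returns the smallest original revision among queued items,
// or 0 when the queue is empty. Recomputing it only when the head entry is
// popped keeps the occasional full scan cheap in practice.
func lowWatermark(items []retryItem) statedb.Revision {
	var lw statedb.Revision
	for _, it := range items {
		if lw == 0 || it.rev < lw {
			lw = it.rev
		}
	}
	return lw
}
```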

Let's raise the minimum required Go version for the next minor StateDB release.
This will allow using 'synctest' in the tests here.

Signed-off-by: Jussi Maki <jussi.maki@isovalent.com>
If we use 'synctest' then we can't close channels outside the synctest bubble.

The Changes() method creates a `*changeIterator` and registers a finalizer for it that unregisters the delete tracker from the table. As the delete trackers are stored in a `part.Map`, there's a watch channel that gets closed, and this triggers a panic if it happens inside a synctest bubble.

To avoid this issue, add a `Close()` method to the `ChangeIterator` interface to allow optionally closing the iterator, avoiding the finalizer.

Signed-off-by: Jussi Maki <jussi.maki@isovalent.com>
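A sketch of the intended usage of the `Close()` method from the commit above, under assumptions: Go 1.25's `testing/synctest` package and `Changes()` returning the iterator plus an error.

```
synctest.Test(t, func(t *testing.T) {
	wtxn := db.WriteTxn(table)
	changes, err := table.Changes(wtxn)
	if err != nil {
		t.Fatal(err)
	}
	// Close explicitly so the watch channel is closed inside the bubble,
	// not later by the finalizer goroutine.
	defer changes.Close()
	wtxn.Commit()
	// ... exercise the table and observe changes ...
})
```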
Add `Reconciler.WaitUntilReconciled` method to allow waiting for the
reconciler to catch up processing to a given revision.

Example usage:
```
wtxn := db.WriteTxn(table)
table.Insert(wtxn, &Obj{ID: 1, Status: reconciler.StatusPending()})
table.Insert(wtxn, &Obj{ID: 2, Status: reconciler.StatusPending()})
table.Insert(wtxn, &Obj{ID: 3, Status: reconciler.StatusPending()})
revToWaitFor := table.Revision(wtxn)
wtxn.Commit()

// Block until the reconciler has caught up to [revToWaitFor] or [ctx]
// is cancelled.
myReconciler.WaitUntilReconciled(ctx, revToWaitFor)
```

Signed-off-by: Jussi Maki <jussi.maki@isovalent.com>
These were never used by Cilium and they are a very inefficient way
of waiting for objects to be reconciled. Just drop these as we now
have [Reconciler.WaitUntilReconciled].

Signed-off-by: Jussi Maki <jussi.maki@isovalent.com>
@joamaki joamaki force-pushed the pr/joamaki/wait-until-reconciled branch from 4f14e49 to de07298 on January 21, 2026 14:58
@joamaki joamaki requested review from a team as code owners January 21, 2026 14:58
@joamaki joamaki requested review from brlbil and removed request for a team January 21, 2026 14:58
@joamaki joamaki force-pushed the pr/joamaki/wait-until-reconciled branch from de07298 to 622c12a on January 21, 2026 16:03
@joamaki joamaki force-pushed the pr/joamaki/wait-until-reconciled branch from 622c12a to 13d54f0 on January 22, 2026 11:30
@bimmlerd (Member) left a comment:

still LGTM

@joamaki joamaki force-pushed the pr/joamaki/wait-until-reconciled branch from 13d54f0 to ea9c4e5 on January 22, 2026 13:33
Extend [Reconciler.WaitUntilReconciled] to also indicate whether retries are
pending for any objects with a revision below or equal to [untilRevision].

The committing of results is split into two: one after normal incremental
processing of pending objects and one after processing retries. This way the
entries that failed to reconcile are pushed to the retry queue and we can check
the low watermark to produce 'retriesPending'.

Signed-off-by: Jussi Maki <jussi.maki@isovalent.com>
@joamaki joamaki force-pushed the pr/joamaki/wait-until-reconciled branch from ea9c4e5 to 0886cb6 on January 22, 2026 13:51
@gandro (Member) left a comment:

I lack understanding of the more subtle internals of the reconciler, so I can't vouch for the priority queue changes. But the API now looks very usable to me! Thanks for tackling this.

@joamaki joamaki merged commit 13a6357 into main Jan 22, 2026
1 of 2 checks passed
@joamaki joamaki deleted the pr/joamaki/wait-until-reconciled branch January 22, 2026 15:00