Conversation

@joamaki
Contributor

@joamaki joamaki commented Jan 15, 2026

As Cilium now has a large number of StateDB tables, it makes sense to reduce the per-transaction cost by
storing the tableEntry by pointer and making cloning of the root cheaper.

  • Add a benchmark to check the cost of WriteTxn+Commit with 1 and 100 tables
  • Change part to return the tree by value instead of by pointer to save a heap allocation
  • Change StateDB root to be []*tableEntry instead of []tableEntry to avoid large copy with many tables

Before:

BenchmarkDB_WriteTxn_1-8                         1213676               988.8 ns/op         1011352 objects/sec       944 B/op         17 allocs/op
BenchmarkDB_WriteTxn_10-8                        2889960               407.2 ns/op         2455560 objects/sec       470 B/op          8 allocs/op
BenchmarkDB_WriteTxn_100-8                       3744349               325.0 ns/op         3077056 objects/sec       452 B/op          7 allocs/op
BenchmarkDB_WriteTxn_1000-8                      3100946               384.9 ns/op         2598194 objects/sec       404 B/op          7 allocs/op
BenchmarkDB_WriteTxn_100_SecondaryIndex-8        1574554               761.2 ns/op         1313743 objects/sec       971 B/op         20 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_100Tables-8       516098              2170 ns/op            8331 B/op          4 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_1Table-8         2395129               501.1 ns/op           216 B/op          4 allocs/op

After:

BenchmarkDB_WriteTxn_1-8                         1209261               990.4 ns/op         1009715 objects/sec       952 B/op         16 allocs/op
BenchmarkDB_WriteTxn_10-8                        2843983               419.9 ns/op         2381473 objects/sec       500 B/op          8 allocs/op
BenchmarkDB_WriteTxn_100-8                       3573542               334.1 ns/op         2992874 objects/sec       485 B/op          7 allocs/op
BenchmarkDB_WriteTxn_1000-8                      2969852               399.3 ns/op         2504096 objects/sec       437 B/op          7 allocs/op
BenchmarkDB_WriteTxn_100_SecondaryIndex-8        1440140               797.7 ns/op         1253590 objects/sec      1004 B/op         20 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_100Tables-8      1429353               840.1 ns/op          1112 B/op          5 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_1Table-8         2361004               505.9 ns/op           224 B/op          5 allocs/op
BenchmarkDB_WriteTxn_100_LPMIndex-8               793759              1336 ns/op            748245 objects/sec      1796 B/op         37 allocs/op
BenchmarkDB_WriteTxn_1_LPMIndex-8                 163100              9520 ns/op            105044 objects/sec     16326 B/op         83 allocs/op

Changing part to return the tree by value shaved off an allocation in the WriteTxn_1 case. Storing tableEntry by pointer reduced the WriteTxn+Commit cost from 2170 ns/op to 840 ns/op without impacting write throughput.

@github-actions

github-actions bot commented Jan 15, 2026

$ make
go build ./...
go: downloading go1.24.0 (linux/amd64)
go: downloading go.yaml.in/yaml/v3 v3.0.3
go: downloading github.com/cilium/hive v0.0.0-20250731144630-28e7a35ed227
go: downloading golang.org/x/time v0.5.0
go: downloading github.com/spf13/cobra v1.8.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de
go: downloading github.com/cilium/stream v0.0.0-20240209152734-a0792b51812d
go: downloading github.com/spf13/viper v1.18.2
go: downloading go.uber.org/dig v1.17.1
go: downloading golang.org/x/term v0.16.0
go: downloading github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
go: downloading github.com/mitchellh/mapstructure v1.5.0
go: downloading golang.org/x/sys v0.17.0
go: downloading golang.org/x/tools v0.17.0
go: downloading github.com/spf13/cast v1.6.0
go: downloading github.com/fsnotify/fsnotify v1.7.0
go: downloading github.com/sagikazarmark/slog-shim v0.1.0
go: downloading github.com/spf13/afero v1.11.0
go: downloading github.com/subosito/gotenv v1.6.0
go: downloading github.com/hashicorp/hcl v1.0.0
go: downloading gopkg.in/ini.v1 v1.67.0
go: downloading github.com/magiconair/properties v1.8.7
go: downloading github.com/pelletier/go-toml/v2 v2.1.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading golang.org/x/text v0.14.0
STATEDB_VALIDATE=1 go test ./... -cover -vet=all -test.count 1
go: downloading github.com/stretchr/testify v1.8.4
go: downloading go.uber.org/goleak v1.3.0
go: downloading golang.org/x/exp v0.0.0-20240119083558-1b970713d09a
go: downloading github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2
ok  	github.com/cilium/statedb	454.571s	coverage: 78.4% of statements
ok  	github.com/cilium/statedb/index	0.005s	coverage: 28.7% of statements
ok  	github.com/cilium/statedb/internal	0.018s	coverage: 42.9% of statements
ok  	github.com/cilium/statedb/lpm	4.210s	coverage: 77.9% of statements
ok  	github.com/cilium/statedb/part	66.112s	coverage: 87.5% of statements
ok  	github.com/cilium/statedb/reconciler	0.289s	coverage: 88.7% of statements
	github.com/cilium/statedb/reconciler/benchmark		coverage: 0.0% of statements
	github.com/cilium/statedb/reconciler/example		coverage: 0.0% of statements
go test -race ./... -test.count 1
ok  	github.com/cilium/statedb	37.055s
ok  	github.com/cilium/statedb/index	1.014s
ok  	github.com/cilium/statedb/internal	1.026s
ok  	github.com/cilium/statedb/lpm	2.777s
ok  	github.com/cilium/statedb/part	34.262s
ok  	github.com/cilium/statedb/reconciler	1.344s
?   	github.com/cilium/statedb/reconciler/benchmark	[no test files]
?   	github.com/cilium/statedb/reconciler/example	[no test files]
go test ./... -bench . -benchmem -test.run xxx
goos: linux
goarch: amd64
pkg: github.com/cilium/statedb
cpu: AMD EPYC 7763 64-Core Processor                
BenchmarkDB_WriteTxn_1-4                      	  710838	      1662 ns/op	    601631 objects/sec	    1032 B/op	      17 allocs/op
BenchmarkDB_WriteTxn_10-4                     	 1684320	       717.7 ns/op	   1393409 objects/sec	     523 B/op	       8 allocs/op
BenchmarkDB_WriteTxn_100-4                    	 2113587	       565.1 ns/op	   1769532 objects/sec	     490 B/op	       7 allocs/op
BenchmarkDB_WriteTxn_1000-4                   	 1736913	       650.8 ns/op	   1536492 objects/sec	     447 B/op	       7 allocs/op
BenchmarkDB_WriteTxn_100_SecondaryIndex-4     	  829405	      1294 ns/op	    773054 objects/sec	    1007 B/op	      22 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_100Tables-4   	 1000000	      1157 ns/op	    1112 B/op	       5 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_1Table-4      	 1685738	       710.4 ns/op	     224 B/op	       5 allocs/op
BenchmarkDB_NewWriteTxn-4                     	 1821790	       659.3 ns/op	     200 B/op	       4 allocs/op
BenchmarkDB_WriteTxnCommit100-4               	 1000000	      1139 ns/op	    1096 B/op	       5 allocs/op
BenchmarkDB_NewReadTxn-4                      	481091816	         2.494 ns/op	       0 B/op	       0 allocs/op
BenchmarkDB_Modify-4                          	    1754	    687453 ns/op	   1454646 objects/sec	  479683 B/op	    8073 allocs/op
BenchmarkDB_GetInsert-4                       	    1554	    767832 ns/op	   1302369 objects/sec	  455684 B/op	    8073 allocs/op
BenchmarkDB_RandomInsert-4                    	    1825	    651650 ns/op	   1534567 objects/sec	  447670 B/op	    7073 allocs/op
BenchmarkDB_RandomReplace-4                   	     463	   2555580 ns/op	    391301 objects/sec	 1924833 B/op	   31104 allocs/op
BenchmarkDB_SequentialInsert-4                	    1868	    630968 ns/op	   1584867 objects/sec	  447669 B/op	    7073 allocs/op
BenchmarkDB_SequentialInsert_Prefix-4         	     477	   2500018 ns/op	    399997 objects/sec	 3564325 B/op	   45544 allocs/op
BenchmarkDB_Changes_Baseline-4                	    1592	    755745 ns/op	   1323197 objects/sec	  507844 B/op	    9167 allocs/op
BenchmarkDB_Changes-4                         	     915	   1298956 ns/op	    769849 objects/sec	  709397 B/op	   12316 allocs/op
BenchmarkDB_RandomLookup-4                    	   22154	     54060 ns/op	  18498082 objects/sec	       0 B/op	       0 allocs/op
BenchmarkDB_SequentialLookup-4                	   26656	     45115 ns/op	  22165713 objects/sec	       0 B/op	       0 allocs/op
BenchmarkDB_Prefix_SecondaryIndex-4           	    6904	    163981 ns/op	   6098265 objects/sec	  124922 B/op	    1026 allocs/op
BenchmarkDB_FullIteration_All-4               	     925	   1213210 ns/op	  82425936 objects/sec	     104 B/op	       5 allocs/op
BenchmarkDB_FullIteration_Prefix-4            	     994	   1179117 ns/op	  84809210 objects/sec	     136 B/op	       6 allocs/op
BenchmarkDB_FullIteration_Get-4               	     217	   5432722 ns/op	  18406979 objects/sec	       0 B/op	       0 allocs/op
BenchmarkDB_FullIteration_Get_Secondary-4     	     118	  10096406 ns/op	   9904515 objects/sec	       0 B/op	       0 allocs/op
BenchmarkDB_FullIteration_ReadTxnGet-4        	     222	   5384084 ns/op	  18573260 objects/sec	       0 B/op	       0 allocs/op
BenchmarkDB_PropagationDelay-4                	  662092	      1709 ns/op	        14.00 50th_µs	        17.00 90th_µs	        40.00 99th_µs	    1127 B/op	      20 allocs/op
BenchmarkDB_WriteTxn_100_LPMIndex-4           	  518727	      2354 ns/op	    424870 objects/sec	    1778 B/op	      37 allocs/op
BenchmarkDB_WriteTxn_1_LPMIndex-4             	  130077	     14112 ns/op	     70864 objects/sec	   15743 B/op	      81 allocs/op
BenchmarkDB_LPMIndex_Get-4                    	     404	   2956333 ns/op	   3382569 objects/sec	       0 B/op	       0 allocs/op
BenchmarkWatchSet_4-4                         	 2188171	       545.0 ns/op	     320 B/op	       5 allocs/op
BenchmarkWatchSet_16-4                        	  767908	      1552 ns/op	    1096 B/op	       5 allocs/op
BenchmarkWatchSet_128-4                       	   89792	     13310 ns/op	    8904 B/op	       5 allocs/op
BenchmarkWatchSet_1024-4                      	    8842	    133049 ns/op	   73744 B/op	       5 allocs/op
PASS
ok  	github.com/cilium/statedb	43.687s
PASS
ok  	github.com/cilium/statedb/index	0.003s
goos: linux
goarch: amd64
pkg: github.com/cilium/statedb/internal
cpu: AMD EPYC 7763 64-Core Processor                
Benchmark_SortableMutex-4   	 6210824	       193.2 ns/op	       0 B/op	       0 allocs/op
PASS
ok  	github.com/cilium/statedb/internal	1.204s
goos: linux
goarch: amd64
pkg: github.com/cilium/statedb/lpm
cpu: AMD EPYC 7763 64-Core Processor                
Benchmark_txn_insert/batchSize=1-4         	    1867	    662796 ns/op	   1508759 objects/sec	  838420 B/op	   13975 allocs/op
Benchmark_txn_insert/batchSize=10-4        	    3105	    389905 ns/op	   2564726 objects/sec	  385199 B/op	    6669 allocs/op
Benchmark_txn_insert/batchSize=100-4       	    3260	    368813 ns/op	   2711399 objects/sec	  345617 B/op	    6028 allocs/op
Benchmark_txn_delete/batchSize=1-4         	    1581	    758187 ns/op	   1318935 objects/sec	 1286485 B/op	   13976 allocs/op
Benchmark_txn_delete/batchSize=10-4        	    3157	    376628 ns/op	   2655143 objects/sec	  372422 B/op	    5769 allocs/op
Benchmark_txn_delete/batchSize=100-4       	    3502	    340112 ns/op	   2940211 objects/sec	  286756 B/op	    5038 allocs/op
Benchmark_LPM_Lookup-4                     	    7984	    148485 ns/op	   6734696 objects/sec	       0 B/op	       0 allocs/op
Benchmark_LPM_All-4                        	  133240	      9298 ns/op	 107550013 objects/sec	      32 B/op	       1 allocs/op
Benchmark_LPM_Prefix-4                     	  133249	      9428 ns/op	 106066123 objects/sec	      32 B/op	       1 allocs/op
Benchmark_LPM_LowerBound-4                 	  238797	      5016 ns/op	  99684061 objects/sec	     288 B/op	       2 allocs/op
PASS
ok  	github.com/cilium/statedb/lpm	12.120s
goos: linux
goarch: amd64
pkg: github.com/cilium/statedb/part
cpu: AMD EPYC 7763 64-Core Processor                
Benchmark_Uint64Map_Random-4                  	    1522	    752664 ns/op	   1328615 items/sec	 2525087 B/op	    6034 allocs/op
Benchmark_Uint64Map_Sequential-4              	    1908	    632102 ns/op	   1582022 items/sec	 2216744 B/op	    5755 allocs/op
Benchmark_Uint64Map_Sequential_Insert-4       	    2080	    569807 ns/op	   1754980 items/sec	 2208740 B/op	    4754 allocs/op
Benchmark_Uint64Map_Sequential_Txn_Insert-4   	   10000	    107474 ns/op	   9304567 items/sec	   86353 B/op	    2028 allocs/op
Benchmark_Uint64Map_Random_Insert-4           	    1806	    677763 ns/op	   1475442 items/sec	 2519666 B/op	    5034 allocs/op
Benchmark_Uint64Map_Random_Txn_Insert-4       	    7101	    168980 ns/op	   5917876 items/sec	  119507 B/op	    2414 allocs/op
Benchmark_Insert_RootOnlyWatch-4              	   10000	    109576 ns/op	   9126076 objects/sec	   71505 B/op	    2033 allocs/op
Benchmark_Insert-4                            	    7652	    167760 ns/op	   5960904 objects/sec	  186939 B/op	    3060 allocs/op
Benchmark_Modify-4                            	   13132	     91354 ns/op	  10946442 objects/sec	   58225 B/op	    1007 allocs/op
Benchmark_GetInsert-4                         	    9358	    127697 ns/op	   7831031 objects/sec	   58225 B/op	    1007 allocs/op
Benchmark_Replace-4                           	15765522	        75.75 ns/op	  13201208 objects/sec	      48 B/op	       1 allocs/op
Benchmark_Replace_RootOnlyWatch-4             	 3207235	       384.2 ns/op	   2602657 objects/sec	     207 B/op	       2 allocs/op
Benchmark_txn_1-4                             	 5941744	       191.5 ns/op	   5221395 objects/sec	     168 B/op	       3 allocs/op
Benchmark_txn_10-4                            	10194180	       116.6 ns/op	   8577933 objects/sec	      86 B/op	       2 allocs/op
Benchmark_txn_100-4                           	11477167	       102.7 ns/op	   9739496 objects/sec	      80 B/op	       2 allocs/op
Benchmark_txn_1000-4                          	10629146	       110.9 ns/op	   9014063 objects/sec	      65 B/op	       2 allocs/op
Benchmark_txn_delete_1-4                      	 4882778	       246.7 ns/op	   4052812 objects/sec	     664 B/op	       4 allocs/op
Benchmark_txn_delete_10-4                     	10750101	       109.6 ns/op	   9125071 objects/sec	     106 B/op	       1 allocs/op
Benchmark_txn_delete_100-4                    	11678282	       101.2 ns/op	   9884968 objects/sec	      47 B/op	       1 allocs/op
Benchmark_txn_delete_1000-4                   	14209730	        83.63 ns/op	  11957831 objects/sec	      24 B/op	       1 allocs/op
Benchmark_Get-4                               	   45381	     26571 ns/op	  37635025 objects/sec	       0 B/op	       0 allocs/op
Benchmark_All-4                               	  115304	     10524 ns/op	  95023716 objects/sec	       0 B/op	       0 allocs/op
Benchmark_Iterator_All-4                      	  120765	     10475 ns/op	  95463668 objects/sec	       0 B/op	       0 allocs/op
Benchmark_Iterator_Next-4                     	  157812	      7495 ns/op	 133425143 objects/sec	     896 B/op	       1 allocs/op
Benchmark_Hashmap_Insert-4                    	   14856	     80733 ns/op	  12386455 objects/sec	   74265 B/op	      20 allocs/op
Benchmark_Hashmap_Get_Uint64-4                	  139810	      8545 ns/op	 117027897 objects/sec	       0 B/op	       0 allocs/op
Benchmark_Hashmap_Get_Bytes-4                 	  111793	     10741 ns/op	  93099438 objects/sec	       0 B/op	       0 allocs/op
Benchmark_Delete_Random-4                     	      78	  15053464 ns/op	   6642989 objects/sec	 2111888 B/op	  102364 allocs/op
Benchmark_find16-4                            	223009902	         5.336 ns/op	       0 B/op	       0 allocs/op
Benchmark_findIndex16-4                       	78910934	        13.73 ns/op	       0 B/op	       0 allocs/op
Benchmark_find4-4                             	412899004	         2.909 ns/op	       0 B/op	       0 allocs/op
Benchmark_findIndex4-4                        	296615764	         4.052 ns/op	       0 B/op	       0 allocs/op
PASS
ok  	github.com/cilium/statedb/part	39.382s
PASS
ok  	github.com/cilium/statedb/reconciler	0.004s
?   	github.com/cilium/statedb/reconciler/benchmark	[no test files]
?   	github.com/cilium/statedb/reconciler/example	[no test files]
go run ./reconciler/benchmark -quiet
1000000 objects reconciled in 2.05 seconds (batch size 1000)
Throughput 488905.26 objects per second
817MB total allocated, 6015184 in-use objects, 338MB bytes in use

@joamaki joamaki force-pushed the pr/joamaki/alloc-optimizations branch from 95498c3 to f8a2b95 Compare January 15, 2026 10:22
@joamaki joamaki marked this pull request as ready for review January 15, 2026 12:29
@joamaki joamaki requested a review from a team as a code owner January 15, 2026 12:29
@joamaki joamaki requested review from bimmlerd and derailed and removed request for a team January 15, 2026 12:29
  opts      options
- prevTxn   atomic.Pointer[Txn[T]]  // the previous txn for reusing the allocation
  prevTxnID uint64                  // the transaction ID that produced this tree
+ prevTxn   *atomic.Pointer[Txn[T]] // the previous txn for reusing the allocation
Member

This is surprising to me. I think this has three states - a nil pointer, a non-nil pointer to an atomic nil pointer or a non-nil pointer to a non-nil atomic pointer. Why do we need three?

Contributor Author

@joamaki joamaki Jan 16, 2026

It's not about states; it's about making sure the atomic pointer is heap allocated and shared by all the Tree timelines. prevTxn would also never be nil. The only reason we have atomic.Pointer here is that part.Map and part.Set might "fork", i.e. we might run multiple transactions against the same original tree. In those cases it's important that only one branch gets to reuse the txn (and since Tree is passed by value we might have copied it, so it's not enough that (*Tree).Txn nils it out on the current copy). However, I think we care more about txn reuse in StateDB and less in part.Map and part.Set, so I ended up dropping the atomic.Pointer and instead adding an explicit ReuseTxn option to part.New to string along the *Txn[T].

Could you take a look and see if this makes sense?

This did make part.Map about ~20% slower so will need to think if that's a reasonable trade-off or not...

EDIT: oh wait failing tests.. didn't push yet

Contributor Author

@joamaki joamaki Jan 16, 2026

Argh, I don't like how error-prone this makes things, and I don't like the perf regression in part.Map either. I also couldn't quite figure out why the change caused a bunch of StateDB tests to fail while none of the part tests did (even after changing it to default to reusing). I think the heap-allocated atomic pointer still makes the most sense; it's much safer and easier to reason about. E.g. it's essentially a "sync.Pool" of size 1.

Member

I'm still seeing this change so did you end up pushing the ReuseTxn?

Contributor Author

No I didn't push the ReuseTxn idea. The original *atomic.Pointer[Txn[T]] change is the way to go.

Add 1-table and 100-table commit-only benchmarks to test the impact of many
registered tables on the cost of [dbRoot] cloning.

Signed-off-by: Jussi Maki <jussi.maki@isovalent.com>
@joamaki joamaki force-pushed the pr/joamaki/alloc-optimizations branch from f8a2b95 to b20de17 Compare January 16, 2026 14:37
@joamaki joamaki requested a review from bimmlerd January 16, 2026 14:39
This avoids allocating the [part.Tree] on the heap and saves an
allocation on the write path.

Signed-off-by: Jussi Maki <jussi.maki@isovalent.com>
As StateDB usage grows in Cilium, we are reaching the point where cloning the
[dbRoot] is becoming increasingly expensive. Change to storing the tableEntry
by pointer, at the cost of an additional heap allocation.

Before:
BenchmarkDB_WriteTxn_1-8                         1222287               981.1 ns/op         1019299 objects/sec       944 B/op         15 allocs/op
BenchmarkDB_WriteTxn_10-8                        2851748               418.0 ns/op         2392425 objects/sec       499 B/op          8 allocs/op
BenchmarkDB_WriteTxn_100-8                       3561373               334.3 ns/op         2991648 objects/sec       485 B/op          7 allocs/op
BenchmarkDB_WriteTxn_1000-8                      3045235               393.8 ns/op         2539482 objects/sec       437 B/op          7 allocs/op
BenchmarkDB_WriteTxn_100_SecondaryIndex-8        1502962               796.5 ns/op         1255539 objects/sec      1004 B/op         20 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_100Tables-8       542452              2213 ns/op            8332 B/op          4 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_1Table-8         2411512               498.2 ns/op           216 B/op          4 allocs/op

After:
BenchmarkDB_WriteTxn_1-8                         1205449               994.8 ns/op         1005270 objects/sec       952 B/op         16 allocs/op
BenchmarkDB_WriteTxn_10-8                        2862157               416.8 ns/op         2399386 objects/sec       500 B/op          8 allocs/op
BenchmarkDB_WriteTxn_100-8                       3564074               333.1 ns/op         3001994 objects/sec       485 B/op          7 allocs/op
BenchmarkDB_WriteTxn_1000-8                      2716454               392.7 ns/op         2546569 objects/sec       437 B/op          7 allocs/op
BenchmarkDB_WriteTxn_100_SecondaryIndex-8        1507063               798.2 ns/op         1252773 objects/sec      1004 B/op         20 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_100Tables-8      1427040               846.3 ns/op          1112 B/op          5 allocs/op
BenchmarkDB_WriteTxn_CommitOnly_1Table-8         2386636               505.4 ns/op           224 B/op          5 allocs/op

Signed-off-by: Jussi Maki <jussi.maki@isovalent.com>
@joamaki joamaki force-pushed the pr/joamaki/alloc-optimizations branch from b20de17 to b128250 Compare January 16, 2026 14:43

@joamaki joamaki requested a review from bimmlerd January 19, 2026 13:18
@joamaki joamaki merged commit 77960fe into main Jan 19, 2026
1 check passed
@joamaki joamaki deleted the pr/joamaki/alloc-optimizations branch January 19, 2026 13:34