
Conversation

@kalbasit
Owner

@kalbasit kalbasit commented Jan 31, 2026

This commit integrates content-defined chunking into the core cache logic:

  • Updated PutNar to support streaming CDC and deduplication.
  • Updated GetNar to transparently reassemble chunks into NAR streams.
  • Implemented Garbage Collection (GC) for orphaned chunks in the LRU cleaner.
  • Added MigrateNarToChunks for background migration of traditional NARs to chunks.
  • Added new database queries to identify and delete orphaned chunks.
  • Regenerated database wrappers and updated metrics for chunk evictions.
  • Includes comprehensive integration tests for deduplication and reassembly.

Part of #322
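The streaming CDC described above splits each NAR into variable-size chunks at content-determined boundaries, so identical regions in different NARs hash to the same chunks and are stored once. The PR's actual chunker is not shown in this conversation; the sketch below illustrates the general idea with a simple rolling-hash boundary rule and hypothetical min/max chunk sizes, not the exact algorithm or parameters used here.

```go
package main

import "fmt"

// chunkBoundaries returns the end offsets of content-defined chunks in data.
// A boundary is declared when the low maskBits bits of a rolling hash are all
// zero, subject to minimum and maximum chunk sizes. Because boundaries depend
// only on content, inserting bytes early in a file shifts boundaries locally
// instead of re-chunking everything after the edit.
func chunkBoundaries(data []byte, minSize, maxSize int, maskBits uint) []int {
	const prime = 16777619 // FNV prime, reused here as a mixing constant
	mask := (uint64(1) << maskBits) - 1

	var boundaries []int
	start := 0
	var h uint64
	for i, b := range data {
		h = h*prime + uint64(b)
		size := i - start + 1
		if size < minSize {
			continue // never emit chunks below the minimum size
		}
		if h&mask == 0 || size >= maxSize {
			boundaries = append(boundaries, i+1)
			start = i + 1
			h = 0
		}
	}
	if start < len(data) {
		boundaries = append(boundaries, len(data)) // trailing partial chunk
	}
	return boundaries
}

func main() {
	data := make([]byte, 1<<16)
	for i := range data {
		data[i] = byte(i * 31)
	}
	bs := chunkBoundaries(data, 2048, 16384, 12)
	fmt.Printf("split %d bytes into %d chunks\n", len(data), len(bs))
}
```

Each chunk would then be hashed and stored under its content hash, which is what makes the PutNar deduplication in this PR possible.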

@dosubot dosubot bot added size:XXL This PR changes 1000+ lines, ignoring generated files. enhancement New feature or request go Pull requests that update go code labels Jan 31, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello @kalbasit, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the cache's storage efficiency and performance by integrating Content-Defined Chunking (CDC). This allows NARs to be broken down into smaller, deduplicated chunks, reducing redundant storage and improving data retrieval. The changes also introduce a robust garbage collection system for these chunks and a migration path for existing NARs, ensuring the cache can adapt and optimize its data management over time.

Highlights

  • Content-Defined Chunking (CDC) Integration: The core cache logic now supports streaming CDC for PutNar operations, enabling efficient deduplication of data. GetNar has been updated to transparently reassemble NARs from their constituent chunks.
  • Chunk-Level Garbage Collection (GC): A new garbage collection mechanism has been implemented to identify and delete orphaned chunks within the LRU cleaner, ensuring efficient storage utilization.
  • NAR Migration to Chunks: Functionality to migrate traditional NARs to content-defined chunks in the background has been added, allowing for a smooth transition to the new chunking strategy.
  • Database Support for Chunks: New database queries and schema updates have been introduced across MySQL, PostgreSQL, and SQLite to manage chunks, their references to NAR files, and to identify orphaned chunks for GC.
  • Enhanced Observability and Testing: New metrics for tracking chunk evictions have been added, and comprehensive integration tests are included to validate the CDC, deduplication, and reassembly processes.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant feature by integrating content-defined chunking (CDC) into the cache, enabling storage deduplication. The changes are extensive, affecting core cache logic, database queries, and adding new tests. My review focuses on performance and correctness. I've identified key areas for performance improvement in the CDC implementation, particularly concerning disk I/O and database transaction management. Additionally, I've suggested an optimization for the orphaned chunk cleanup query and a small correctness fix in the new tests. Overall, this is a valuable addition, and addressing these points will make the implementation more robust and efficient.

@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from 9f7cef1 to 24941e7 Compare January 31, 2026 02:53
@kalbasit kalbasit force-pushed the 01-30-feat_add_s3_storage_support_for_chunks branch from ae32c52 to 1643466 Compare January 31, 2026 02:53
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from 24941e7 to aad1081 Compare January 31, 2026 03:02
@kalbasit kalbasit force-pushed the 01-30-feat_add_s3_storage_support_for_chunks branch from 1643466 to 8bb122a Compare January 31, 2026 03:02
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from aad1081 to e20990e Compare January 31, 2026 06:03
@kalbasit kalbasit force-pushed the 01-30-feat_add_s3_storage_support_for_chunks branch 2 times, most recently from 29bc630 to b92988e Compare January 31, 2026 06:33
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from e20990e to be66a95 Compare January 31, 2026 06:33
@kalbasit kalbasit force-pushed the 01-30-feat_add_s3_storage_support_for_chunks branch from b92988e to e7a90b5 Compare January 31, 2026 06:46
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch 2 times, most recently from 508ea4b to 4ac6f9b Compare January 31, 2026 20:23
@kalbasit kalbasit force-pushed the 01-30-feat_add_s3_storage_support_for_chunks branch from e7a90b5 to c228c33 Compare January 31, 2026 20:23
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from 4ac6f9b to 64f7254 Compare January 31, 2026 22:10
@kalbasit kalbasit force-pushed the 01-30-feat_add_s3_storage_support_for_chunks branch from c228c33 to 4bc44ac Compare January 31, 2026 22:10
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from 64f7254 to a4e6430 Compare January 31, 2026 23:33
@kalbasit kalbasit force-pushed the 01-30-feat_add_s3_storage_support_for_chunks branch from 4bc44ac to 2a45a7a Compare January 31, 2026 23:33
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from a4e6430 to e3253d1 Compare January 31, 2026 23:39
@kalbasit kalbasit force-pushed the 01-30-feat_add_s3_storage_support_for_chunks branch 2 times, most recently from 736ed2e to 87d7b35 Compare January 31, 2026 23:52
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from e3253d1 to d34b9f5 Compare January 31, 2026 23:52
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from 47839b0 to 5ce3f0d Compare February 1, 2026 03:08
@kalbasit kalbasit force-pushed the 01-30-feat_add_s3_storage_support_for_chunks branch 2 times, most recently from 3066fbd to e7890b6 Compare February 1, 2026 03:10
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from 5ce3f0d to 989e833 Compare February 1, 2026 03:10
@kalbasit kalbasit force-pushed the 01-30-feat_add_s3_storage_support_for_chunks branch from e7890b6 to 64c004f Compare February 1, 2026 03:14
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from 989e833 to 4a71540 Compare February 1, 2026 03:14
@kalbasit kalbasit force-pushed the 01-30-feat_add_s3_storage_support_for_chunks branch from 64c004f to 7983dd4 Compare February 1, 2026 03:55
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch 2 times, most recently from 0a7cc32 to 3e5e83b Compare February 1, 2026 04:06
Base automatically changed from 01-30-feat_add_s3_storage_support_for_chunks to main February 1, 2026 04:18
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from e3d163a to f56283d Compare February 1, 2026 04:23
@kalbasit
Owner Author

kalbasit commented Feb 1, 2026

/gemini review


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces content-defined chunking (CDC) for NAR storage, which is a significant feature for deduplication. The changes include new database queries for managing chunks, integration into the core cache logic for GetNar and PutNar, and an update to the LRU garbage collector to clean up orphaned chunks. New tests for the CDC functionality have also been added.

My review focuses on potential performance issues, correctness, and consistency. I've identified a performance bottleneck in how chunk metadata is written to the database, a bug in file handling during NAR migration, and some configuration file inconsistencies. I also noted a logical inconsistency in the use of ref_count for chunks, which could lead to issues in the future.

@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from f56283d to a1cfc79 Compare February 1, 2026 04:47
Commits included in this push:

This commit integrates content-defined chunking into the core cache logic:
- Updated PutNar to support streaming CDC and deduplication.
- Updated GetNar to transparently reassemble chunks into NAR streams.
- Implemented Garbage Collection (GC) for orphaned chunks in the LRU cleaner.
- Added MigrateNarToChunks for background migration of traditional NARs to chunks.
- Added new database queries to identify and delete orphaned chunks.
- Regenerated database wrappers and updated metrics for chunk evictions.
- Includes comprehensive integration tests for deduplication and reassembly.

Replace NOT IN subquery with LEFT JOIN IS NULL pattern for better
performance on large tables. This optimization is applied across all
three database engines (PostgreSQL, SQLite, MySQL).

The ref_count column was redundant, as it can be computed from the
nar_file_chunks join table. The current GC logic already uses a
reachability-based approach (GetOrphanedChunks), which is more reliable.

This change:
- Removes the ref_count column from the chunks table via migrations
- Updates the CreateChunk query to use DO NOTHING on conflict
- Removes the unused IncrementChunkRefCount and DecrementChunkRefCount queries
- Updates sqlc.yml to remove the ref_count type overrides
- Regenerates database wrappers

Previously, each chunk created a new database transaction, causing
significant performance degradation for files with many small chunks.

This change refactors storeNarWithCDC to:
1. Collect all chunk metadata during the chunking loop
2. Store chunks in the chunk store as before
3. Perform all database operations (CreateChunk + LinkNarFileToChunk)
   in a single transaction after chunking completes

This eliminates the per-chunk transaction overhead and greatly improves
performance for files with many chunks.

This reverts commit 70a78b24d7f13fb1699558be1db4f4382b1b5f03.
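The NOT IN → LEFT JOIN rewrite described in the commit messages above turns the orphaned-chunk lookup into an anti-join, which planners typically execute as a single hash or merge pass instead of re-evaluating a subquery per candidate row. The queries below are illustrative sketches only; the table and column names (chunks, nar_file_chunks, chunk_id) follow the names used elsewhere in this PR, but the real queries may differ.

```sql
-- Before: NOT IN subquery, which some engines evaluate per row
-- and which behaves surprisingly in the presence of NULLs.
SELECT id FROM chunks
WHERE id NOT IN (SELECT chunk_id FROM nar_file_chunks);

-- After: LEFT JOIN ... IS NULL anti-join pattern. A chunk with no
-- matching nar_file_chunks row joins to NULL and is therefore orphaned.
SELECT c.id
FROM chunks c
LEFT JOIN nar_file_chunks nfc ON nfc.chunk_id = c.id
WHERE nfc.chunk_id IS NULL;
```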
@kalbasit kalbasit force-pushed the 01-30-feat_integrate_cdc_with_cache_and_implement_chunk_gc branch from b1ea8a4 to bd319b4 Compare February 1, 2026 05:33
@kalbasit
Owner Author

kalbasit commented Feb 1, 2026

/gemini review


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces content-defined chunking (CDC) for NAR storage, which is a significant feature for deduplication and storage efficiency. The changes are comprehensive, touching the database schema, queries, and core cache logic for PutNar, GetNar, and garbage collection. New tests for CDC functionality are also included.

Overall, the implementation is well-thought-out. I've identified a critical issue regarding transaction atomicity when storing chunks, which could lead to data corruption. I've also made a couple of suggestions to improve the maintainability and consistency of the SQL queries. Once these points are addressed, this will be a solid contribution.

Comment on lines +1230 to +1291
for {
	select {
	case <-ctx.Done():
		return 0, ctx.Err()
	case err := <-errChan:
		if err != nil {
			return 0, fmt.Errorf("chunking error: %w", err)
		}
	case chunkMetadata, ok := <-chunksChan:
		if !ok {
			// All chunks processed.
			return totalSize, nil
		}

		// Process chunk
		data := make([]byte, chunkMetadata.Size)

		_, err := f.ReadAt(data, chunkMetadata.Offset)
		if err != nil {
			return 0, fmt.Errorf("error reading chunk data: %w", err)
		}

		// Store in chunkStore if new
		_, err = c.chunkStore.PutChunk(ctx, chunkMetadata.Hash, data)
		if err != nil {
			return 0, fmt.Errorf("error storing chunk: %w", err)
		}

		// Record in database
		err = c.withTransaction(ctx, "storeNarWithCDC.RecordChunk", func(qtx database.Querier) error {
			// Create or increment ref count.
			// For MySQL, CreateChunk doesn't return the ID, so we use a different approach.
			// But our generated wrapper handles the DB type differences.
			ch, err := qtx.CreateChunk(ctx, database.CreateChunkParams{
				Hash: chunkMetadata.Hash,
				Size: chunkMetadata.Size,
			})
			if err != nil {
				return fmt.Errorf("error creating chunk record: %w", err)
			}

			// Link to NAR file
			err = qtx.LinkNarFileToChunk(ctx, database.LinkNarFileToChunkParams{
				NarFileID: narFileID,
				ChunkID:   ch.ID,
				//nolint:gosec // G115: Chunk count is bounded by file size
				ChunkIndex: int32(chunkCount),
			})
			if err != nil {
				return fmt.Errorf("error linking chunk: %w", err)
			}

			return nil
		})
		if err != nil {
			return 0, err
		}

		totalSize += int64(chunkMetadata.Size)
		chunkCount++
	}
}

high

The current implementation of storeNarWithCDC uses a separate database transaction for each chunk. This is inefficient for NARs with many chunks and, more importantly, can lead to an inconsistent state. If an error occurs midway through processing, the NAR file will be left in a partially chunked state in the database. This will cause getNarFromChunks to return a corrupted (truncated) NAR on subsequent requests.

To ensure atomicity and improve performance, all database operations for a single NAR should be performed in a single transaction. I recommend refactoring this to first process all chunks and then commit the database changes together.

Here's a suggested refactoring of the logic:

	// 2. Start chunking and collect all metadata
	chunksChan, errChan := c.chunker.Chunk(ctx, f)
	var allChunks []chunker.Chunk
	for chunkMetadata := range chunksChan {
		allChunks = append(allChunks, chunkMetadata)
	}
	if err := <-errChan; err != nil {
		return 0, fmt.Errorf("chunking error: %w", err)
	}

	// 3. Store all chunks in the chunk store
	// This part is outside the DB transaction. If it fails, orphaned chunks
	// may be left in the store, which will be cleaned up by the GC later.
	for _, chunkMetadata := range allChunks {
		data := make([]byte, chunkMetadata.Size)
		if _, err := f.ReadAt(data, chunkMetadata.Offset); err != nil {
			return 0, fmt.Errorf("error reading chunk data: %w", err)
		}
		// We can ignore the 'created' bool here. If it already exists, that's fine.
		if _, err := c.chunkStore.PutChunk(ctx, chunkMetadata.Hash, data); err != nil {
			return 0, fmt.Errorf("error storing chunk: %w", err)
		}
	}

	// 4. Record all chunks in the database in a single transaction
	err = c.withTransaction(ctx, "storeNarWithCDC.RecordAllChunks", func(qtx database.Querier) error {
		for i, chunkMetadata := range allChunks {
			ch, err := qtx.CreateChunk(ctx, database.CreateChunkParams{
				Hash: chunkMetadata.Hash,
				Size: chunkMetadata.Size,
			})
			if err != nil {
				return fmt.Errorf("error creating chunk record for %s: %w", chunkMetadata.Hash, err)
			}

			err = qtx.LinkNarFileToChunk(ctx, database.LinkNarFileToChunkParams{
				NarFileID:  narFileID,
				ChunkID:    ch.ID,
				ChunkIndex: int32(i),
			})
			if err != nil {
				return fmt.Errorf("error linking chunk %s: %w", chunkMetadata.Hash, err)
			}
		}
		return nil
	})
	if err != nil {
		return 0, err
	}

	return int64(fileSize), nil

WHERE id = $1;

-- name: GetChunksByNarFileID :many
SELECT c.*

medium

For better maintainability and consistency with the MySQL version of this query, it's recommended to explicitly list the columns from the chunks table instead of using c.*. While sqlc expands this correctly, being explicit in the source SQL file makes the query's intent clearer and less prone to issues if the table schema changes.

SELECT c.id, c.hash, c.size, c.created_at, c.updated_at

WHERE id = ?;

-- name: GetChunksByNarFileID :many
SELECT c.*

medium

For better maintainability and consistency with the MySQL version of this query, it's recommended to explicitly list the columns from the chunks table instead of using c.*. While sqlc expands this correctly, being explicit in the source SQL file makes the query's intent clearer and less prone to issues if the table schema changes.

SELECT c.id, c.hash, c.size, c.created_at, c.updated_at

@codecov

codecov bot commented Feb 1, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 79.16%. Comparing base (bb7b11c) to head (bd319b4).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #669   +/-   ##
=======================================
  Coverage   79.16%   79.16%           
=======================================
  Files           2        2           
  Lines         456      456           
=======================================
  Hits          361      361           
  Misses         83       83           
  Partials       12       12           
