feat: integrate CDC with cache and implement chunk GC #669
Conversation
Summary of Changes

Hello @kalbasit, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the cache's storage efficiency and performance by integrating Content-Defined Chunking (CDC). This allows NARs to be broken down into smaller, deduplicated chunks, reducing redundant storage and improving data retrieval. The changes also introduce a robust garbage collection system for these chunks and a migration path for existing NARs, ensuring the cache can adapt and optimize its data management over time.
Code Review
This pull request introduces a significant feature by integrating content-defined chunking (CDC) into the cache, enabling storage deduplication. The changes are extensive, affecting core cache logic, database queries, and adding new tests. My review focuses on performance and correctness. I've identified key areas for performance improvement in the CDC implementation, particularly concerning disk I/O and database transaction management. Additionally, I've suggested an optimization for the orphaned chunk cleanup query and a small correctness fix in the new tests. Overall, this is a valuable addition, and addressing these points will make the implementation more robust and efficient.
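For readers who haven't worked with CDC before: chunk boundaries are chosen by a rolling hash over the content itself rather than at fixed offsets, so an edit near the start of a NAR only shifts the chunks around the edit, and identical regions across NARs hash to identical chunks that deduplicate by content hash. The chunker package used in this PR isn't visible in this excerpt, so the following is only a minimal, self-contained sketch of a gear-hash cut-point function; the sizes (2 KiB minimum, 64 KiB maximum) and the 13-bit mask are illustrative choices, not the project's settings.

package main

import (
	"crypto/sha256"
	"fmt"
)

// gear is a pseudo-random per-byte table; keeping it fixed makes the cut
// points a pure function of the content, which is what enables dedup.
var gear [256]uint64

func init() {
	s := uint64(0x9e3779b97f4a7c15)
	for i := range gear {
		s ^= s << 13
		s ^= s >> 7
		s ^= s << 17
		gear[i] = s
	}
}

// cutPoints returns the end offset of each chunk. A boundary is declared
// when the rolling hash has its low maskBits bits all zero, subject to a
// minimum and maximum chunk size.
func cutPoints(data []byte, minSize, maxSize int, maskBits uint) []int {
	mask := uint64(1)<<maskBits - 1
	var cuts []int
	start := 0
	var h uint64
	for i, b := range data {
		h = h<<1 + gear[b]
		n := i - start + 1
		if (n >= minSize && h&mask == 0) || n >= maxSize {
			cuts = append(cuts, i+1)
			start, h = i+1, 0
		}
	}
	if start < len(data) {
		cuts = append(cuts, len(data))
	}
	return cuts
}

func main() {
	data := make([]byte, 1<<20) // stand-in for a NAR stream
	for i := range data {
		data[i] = byte(i * 2654435761 >> 7)
	}
	prev := 0
	for _, cut := range cutPoints(data, 2048, 65536, 13) {
		sum := sha256.Sum256(data[prev:cut])
		fmt.Printf("chunk %x size=%d\n", sum[:8], cut-prev)
		prev = cut
	}
}

Two inputs that share a long run of bytes will emit identical chunks for that run, which is what lets PutNar skip re-storing them.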
/gemini review
Code Review
This pull request introduces content-defined chunking (CDC) for NAR storage, which is a significant feature for deduplication. The changes include new database queries for managing chunks, integration into the core cache logic for GetNar and PutNar, and an update to the LRU garbage collector to clean up orphaned chunks. New tests for the CDC functionality have also been added.
My review focuses on potential performance issues, correctness, and consistency. I've identified a performance bottleneck in how chunk metadata is written to the database, a bug in file handling during NAR migration, and some configuration file inconsistencies. I also noted a logical inconsistency in the use of ref_count for chunks, which could lead to issues in the future.
This commit integrates content-defined chunking into the core cache logic:
- Updated PutNar to support streaming CDC and deduplication.
- Updated GetNar to transparently reassemble chunks into NAR streams (see the sketch after this list).
- Implemented garbage collection (GC) for orphaned chunks in the LRU cleaner.
- Added MigrateNarToChunks for background migration of traditional NARs to chunks.
- Added new database queries to identify and delete orphaned chunks.
- Regenerated database wrappers and updated metrics for chunk evictions.
- Includes comprehensive integration tests for deduplication and reassembly.
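To make the reassembly step concrete: on the read path, GetNar only has to fetch the link rows in chunk_index order and concatenate the chunk bodies. The PR's actual method isn't shown in this excerpt, so the sketch below is hypothetical; reassembleNar, chunkQuerier, and chunkRow are stand-in names modeled on identifiers that do appear in the diff (GetChunksByNarFileID, chunk_index, the chunk store).

package cache

import (
	"context"
	"fmt"
	"io"
)

// chunkRow is a stand-in for the sqlc-generated row type, carrying the
// columns the review discusses (hash, size).
type chunkRow struct {
	Hash string
	Size int64
}

type chunkQuerier interface {
	// Assumed to return rows ordered by nar_file_chunks.chunk_index so
	// the original NAR byte stream is reproduced exactly.
	GetChunksByNarFileID(ctx context.Context, narFileID int64) ([]chunkRow, error)
}

type chunkGetter interface {
	GetChunk(ctx context.Context, hash string) ([]byte, error)
}

// reassembleNar streams a chunked NAR to w by fetching each chunk by hash
// and writing the bodies back-to-back, returning the bytes written.
func reassembleNar(ctx context.Context, db chunkQuerier, store chunkGetter, narFileID int64, w io.Writer) (int64, error) {
	chunks, err := db.GetChunksByNarFileID(ctx, narFileID)
	if err != nil {
		return 0, fmt.Errorf("error listing chunks: %w", err)
	}
	var written int64
	for _, ch := range chunks {
		data, err := store.GetChunk(ctx, ch.Hash)
		if err != nil {
			return written, fmt.Errorf("error fetching chunk %s: %w", ch.Hash, err)
		}
		n, err := w.Write(data)
		written += int64(n)
		if err != nil {
			return written, fmt.Errorf("error streaming chunk %s: %w", ch.Hash, err)
		}
	}
	return written, nil
}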
Replace NOT IN subquery with LEFT JOIN IS NULL pattern for better performance on large tables. This optimization is applied across all three database engines (PostgreSQL, SQLite, MySQL).
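Concretely, the two query shapes compared by this commit look roughly as follows. The chunks and nar_file_chunks table names come from elsewhere in this PR, but the exact column list is an assumption:

-- Before: anti-join via NOT IN; planners often materialize the subquery,
-- which scales poorly as nar_file_chunks grows (and NOT IN also misbehaves
-- if the subquery can yield NULLs).
SELECT c.id
FROM chunks c
WHERE c.id NOT IN (SELECT chunk_id FROM nar_file_chunks);

-- After: LEFT JOIN ... IS NULL; orphaned chunks are exactly the rows with
-- no matching link row, which most engines execute as an efficient anti-join.
SELECT c.id
FROM chunks c
LEFT JOIN nar_file_chunks nfc ON nfc.chunk_id = c.id
WHERE nfc.chunk_id IS NULL;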
The ref_count column was redundant, as it can be computed from the nar_file_chunks join table, and the current GC logic already uses a reachability-based approach (GetOrphanedChunks), which is more reliable. This change:
- Removes the ref_count column from the chunks table via migrations
- Updates the CreateChunk query to use DO NOTHING on conflict
- Removes the unused IncrementChunkRefCount and DecrementChunkRefCount queries
- Updates sqlc.yml to remove ref_count type overrides
- Regenerates database wrappers
Previously, each chunk created a new database transaction, causing significant performance degradation for files with many small chunks. This change refactors storeNarWithCDC to:
1. Collect all chunk metadata during the chunking loop
2. Store chunks in the chunk store as before
3. Perform all database operations (CreateChunk + LinkNarFileToChunk) in a single transaction after chunking completes

This eliminates the per-chunk transaction overhead and greatly improves performance for files with many chunks.
This reverts commit 70a78b24d7f13fb1699558be1db4f4382b1b5f03.
/gemini review
Code Review
This pull request introduces content-defined chunking (CDC) for NAR storage, which is a significant feature for deduplication and storage efficiency. The changes are comprehensive, touching the database schema, queries, and core cache logic for PutNar, GetNar, and garbage collection. New tests for CDC functionality are also included.
Overall, the implementation is well-thought-out. I've identified a critical issue regarding transaction atomicity when storing chunks, which could lead to data corruption. I've also made a couple of suggestions to improve the maintainability and consistency of the SQL queries. Once these points are addressed, this will be a solid contribution.
for {
	select {
	case <-ctx.Done():
		return 0, ctx.Err()
	case err := <-errChan:
		if err != nil {
			return 0, fmt.Errorf("chunking error: %w", err)
		}
	case chunkMetadata, ok := <-chunksChan:
		if !ok {
			// All chunks processed.
			return totalSize, nil
		}

		// Process chunk
		data := make([]byte, chunkMetadata.Size)

		_, err := f.ReadAt(data, chunkMetadata.Offset)
		if err != nil {
			return 0, fmt.Errorf("error reading chunk data: %w", err)
		}

		// Store in chunkStore if new
		_, err = c.chunkStore.PutChunk(ctx, chunkMetadata.Hash, data)
		if err != nil {
			return 0, fmt.Errorf("error storing chunk: %w", err)
		}

		// Record in database
		err = c.withTransaction(ctx, "storeNarWithCDC.RecordChunk", func(qtx database.Querier) error {
			// Create or increment ref count.
			// For MySQL, CreateChunk doesn't return the ID, so we use a different approach.
			// But our generated wrapper handles the DB type differences.
			ch, err := qtx.CreateChunk(ctx, database.CreateChunkParams{
				Hash: chunkMetadata.Hash,
				Size: chunkMetadata.Size,
			})
			if err != nil {
				return fmt.Errorf("error creating chunk record: %w", err)
			}

			// Link to NAR file
			err = qtx.LinkNarFileToChunk(ctx, database.LinkNarFileToChunkParams{
				NarFileID: narFileID,
				ChunkID:   ch.ID,
				//nolint:gosec // G115: Chunk count is bounded by file size
				ChunkIndex: int32(chunkCount),
			})
			if err != nil {
				return fmt.Errorf("error linking chunk: %w", err)
			}

			return nil
		})
		if err != nil {
			return 0, err
		}

		totalSize += int64(chunkMetadata.Size)
		chunkCount++
	}
}
The current implementation of storeNarWithCDC uses a separate database transaction for each chunk. This is inefficient for NARs with many chunks and, more importantly, can lead to an inconsistent state. If an error occurs midway through processing, the NAR file will be left in a partially chunked state in the database. This will cause getNarFromChunks to return a corrupted (truncated) NAR on subsequent requests.
To ensure atomicity and improve performance, all database operations for a single NAR should be performed in a single transaction. I recommend refactoring this to first process all chunks and then commit the database changes together.
Here's a suggested refactoring of the logic:
// 2. Start chunking and collect all metadata
chunksChan, errChan := c.chunker.Chunk(ctx, f)

var allChunks []chunker.Chunk
for chunkMetadata := range chunksChan {
	allChunks = append(allChunks, chunkMetadata)
}
if err := <-errChan; err != nil {
	return 0, fmt.Errorf("chunking error: %w", err)
}

// 3. Store all chunks in the chunk store
// This part is outside the DB transaction. If it fails, orphaned chunks
// may be left in the store, which will be cleaned up by the GC later.
for _, chunkMetadata := range allChunks {
	data := make([]byte, chunkMetadata.Size)
	if _, err := f.ReadAt(data, chunkMetadata.Offset); err != nil {
		return 0, fmt.Errorf("error reading chunk data: %w", err)
	}

	// We can ignore the 'created' bool here. If it already exists, that's fine.
	if _, err := c.chunkStore.PutChunk(ctx, chunkMetadata.Hash, data); err != nil {
		return 0, fmt.Errorf("error storing chunk: %w", err)
	}
}

// 4. Record all chunks in the database in a single transaction
err = c.withTransaction(ctx, "storeNarWithCDC.RecordAllChunks", func(qtx database.Querier) error {
	for i, chunkMetadata := range allChunks {
		ch, err := qtx.CreateChunk(ctx, database.CreateChunkParams{
			Hash: chunkMetadata.Hash,
			Size: chunkMetadata.Size,
		})
		if err != nil {
			return fmt.Errorf("error creating chunk record for %s: %w", chunkMetadata.Hash, err)
		}

		err = qtx.LinkNarFileToChunk(ctx, database.LinkNarFileToChunkParams{
			NarFileID:  narFileID,
			ChunkID:    ch.ID,
			ChunkIndex: int32(i),
		})
		if err != nil {
			return fmt.Errorf("error linking chunk %s: %w", chunkMetadata.Hash, err)
		}
	}

	return nil
})
if err != nil {
	return 0, err
}

return int64(fileSize), nil

WHERE id = $1;

-- name: GetChunksByNarFileID :many
SELECT c.*
For better maintainability and consistency with the MySQL version of this query, it's recommended to explicitly list the columns from the chunks table instead of using c.*. While sqlc expands this correctly, being explicit in the source SQL file makes the query's intent clearer and less prone to issues if the table schema changes.
SELECT c.id, c.hash, c.size, c.created_at, c.updated_at

WHERE id = ?;

-- name: GetChunksByNarFileID :many
SELECT c.*
For better maintainability and consistency with the MySQL version of this query, it's recommended to explicitly list the columns from the chunks table instead of using c.*. While sqlc expands this correctly, being explicit in the source SQL file makes the query's intent clearer and less prone to issues if the table schema changes.
SELECT c.id, c.hash, c.size, c.created_at, c.updated_at
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@           Coverage Diff           @@
##             main     #669   +/-   ##
=======================================
  Coverage   79.16%   79.16%
=======================================
  Files           2        2
  Lines         456      456
=======================================
  Hits          361      361
  Misses         83       83
  Partials       12       12

View full report in Codecov by Sentry.

This commit integrates content-defined chunking into the core cache logic:
- Updated PutNar to support streaming CDC and deduplication.
- Updated GetNar to transparently reassemble chunks into NAR streams.
- Implemented garbage collection (GC) for orphaned chunks in the LRU cleaner.
- Added MigrateNarToChunks for background migration of traditional NARs to chunks.
- Added new database queries to identify and delete orphaned chunks.
- Regenerated database wrappers and updated metrics for chunk evictions.
- Includes comprehensive integration tests for deduplication and reassembly.

Part of #322