
Conversation

@derrickstolee

There have been a number of customer-reported problems with errors of the form

error: inflate: data stream error (unknown compression method)
error: unable to unpack a163b1302d4729ebdb0a12d3876ca5bca4e1a8c3 header
error: files 'D:/.scalarCache/id_49b0c9f4-555f-4624-8157-a57e6df513b3/pack/tempPacks/t-20260106-014520-049919-0001.temp' and 'D:/.scalarCache/id_49b0c9f4-555f-4624-8157-a57e6df513b3/a1/63b1302d4729ebdb0a12d3876ca5bca4e1a8c3' differ in contents
error: gvfs-helper error: 'could not install loose object 'D:/.scalarCache/id_49b0c9f4-555f-4624-8157-a57e6df513b3/a1/63b1302d4729ebdb0a12d3876ca5bca4e1a8c3': from GET a163b1302d4729ebdb0a12d3876ca5bca4e1a8c3'

or

Receiving packfile 1/1 with 1 objects (bytes received): 17367934, done.
Receiving packfile 1/1 with 1 objects [retry 1/6] (bytes received): 17367934, done.
Waiting to retry after network error (sec): 100% (8/8), done.
Receiving packfile 1/1 with 1 objects [retry 2/6] (bytes received): 17367934, done.
Waiting to retry after network error (sec): 100% (16/16), done.
Receiving packfile 1/1 with 1 objects [retry 3/6] (bytes received): 17367934, done.

These errors are not actually due to network issues, though they look like it because they surface from the retry layer. Instead, the real failure happens while installing the loose object or packfile into the shared object cache.

Loose objects hit these errors during installation when the target loose object already exists but is corrupt in some way, such as being all NUL bytes because the disk wasn't flushed before the machine shut down. The error results because we perform a collision check without confirming that the existing contents are valid.

Packfiles may hit similar comparison failures, though it is less likely. These packfile installations are updated to skip the collision check as well.

In both cases, when we hit what looks like a transient network error, a new advice message now suggests the two most common workarounds:

  1. Your disk may be full. Make room.
  2. Your shared object cache may be corrupt. Push all branches, delete it, and fetch to refill it.

I make special note of the case where the shared object cache doesn't exist at all and point out that it probably should, since the whole repository is suspect at that point.

  • This change only applies to interactions with Azure DevOps and the
    GVFS Protocol.

Resolves #837.

When we are installing a loose object, finalize_object_file() first
checks whether the contents match what already exists in a loose
object file of the target name. However, it doesn't check whether that
target is valid; it assumes that it is.

But after a power outage or a similar event, the existing file may be
corrupt (for example, all NUL bytes). That is a common occurrence
precisely when we need to install a loose object _again_: we don't
think we already have it, so any copy that does exist is probably bogus.

Use the flags variant of the call with FOF_SKIP_COLLISION_CHECK to
avoid these types of errors, as seen in GitHub issue
microsoft#837.
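
As a rough sketch (not the literal code in this change, and with a hypothetical helper name), the loose-object install path can hand the downloaded temporary file to the flags variant of the finalize call and skip the comparison against any pre-existing copy:

    /*
     * Minimal sketch, assuming upstream git's finalize_object_file_flags()
     * and FOF_SKIP_COLLISION_CHECK; the helper name below is hypothetical,
     * not the actual function in gvfs-helper.
     */
    static int install_downloaded_loose_object(const char *tmp_path,
                                                const char *loose_path)
    {
            /*
             * Skip the byte-for-byte collision check: a file that already
             * exists at loose_path may be corrupt (for example, all NUL
             * bytes after an unclean shutdown), so comparing against it
             * would fail even though the freshly downloaded copy is good.
             */
            return finalize_object_file_flags(tmp_path, loose_path,
                                              FOF_SKIP_COLLISION_CHECK);
    }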

Signed-off-by: Derrick Stolee <stolee@gmail.com>

Users sometimes see what appear to be transient network errors, but
they are actually due to some other problem during the installation of
a packfile. Observed resolutions include freeing up space on a full
disk or deleting the shared object cache after it was broken by file
corruption or a power outage.

This change only provides advice suggesting those workarounds, to help
users help themselves.

This is our first advice message custom to the microsoft/git fork, so I
have partitioned its key away from the others to avoid adjacent-change
conflicts (at least until upstream adds a new key at the end of the
alphabetical list).
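
For context, here is a hedged sketch of how a fork-specific advice key can be wired into git's existing advice machinery; the key name and message below are illustrative, not the ones introduced by this change:

    /*
     * In advice.h: park the fork-specific key away from the upstream,
     * alphabetically ordered keys to reduce merge conflicts. The key
     * must also be registered with its config name in advice.c.
     * ADVICE_GVFS_CACHE_INSTALL is a hypothetical name.
     */
    enum advice_type {
            /* ... upstream advice keys ... */

            /* Advice specific to the microsoft/git fork. */
            ADVICE_GVFS_CACHE_INSTALL,
    };

    /* At the call site, once retries are exhausted: */
    advise_if_enabled(ADVICE_GVFS_CACHE_INSTALL,
                      _("the failure may be local: check for a full disk, or "
                        "push all branches, delete the shared object cache, "
                        "and fetch to refill it"));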

We could consider providing a tool that performs a more robust check of
the shared object cache, but since 'git fsck' isn't safe to run here
(it may download missing objects), we do not have that ability at the
moment.

The good news is that it is safe to delete and rebuild the shared object
cache as long as all local branches have been pushed. The branches must
be pushed because the local .git/objects/ directory is moved into the
shared object cache by the 'cache-local-objects' maintenance task.

Signed-off-by: Derrick Stolee <stolee@gmail.com>

Similar to the recent change that avoids the collision check for loose
objects, do the same for prefetch packfiles. This should be rarer, but
the same prefetch packfile could be downloaded from the same cache
server again, so it isn't outside the realm of possibility.
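
The corresponding sketch for prefetch packfiles, again assuming finalize_object_file_flags() and with illustrative helper and parameter names: both the .pack and its .idx are moved into the shared object cache without comparing against any stale copy left behind by an earlier, interrupted download.

    static int install_prefetch_pack(const char *tmp_pack, const char *pack_path,
                                     const char *tmp_idx, const char *idx_path)
    {
            /* Skip the collision check for both files of the pack pair. */
            if (finalize_object_file_flags(tmp_pack, pack_path,
                                           FOF_SKIP_COLLISION_CHECK))
                    return error(_("could not install prefetch packfile"));
            if (finalize_object_file_flags(tmp_idx, idx_path,
                                           FOF_SKIP_COLLISION_CHECK))
                    return error(_("could not install prefetch pack index"));
            return 0;
    }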

Signed-off-by: Derrick Stolee <stolee@gmail.com>

@dscho (Member) left a comment

Thank you so much!

@dscho dscho merged commit c9b40e1 into microsoft:vfs-2.52.0 Jan 9, 2026
122 of 123 checks passed
