registry: Fixes to allow layer uploads above 5GB #78
base: main
Conversation
Because we need to copy a stream of the uploaded layer (stored under a random UUID) to a `blobs/.../<digest>` path, the copy might surpass the maximum size of a monolithic upload to R2 unless multipart uploads are used. To circumvent this, we calculate the digest from the uploaded path and compare it against the expected one. Then we create a reference at the original `blobs/.../<digest>` path pointing to the upload path.
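The approach described above can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: an in-memory `Map` stands in for the R2 bucket, and the helper names (`digestOf`, `putReference`, `resolveBlob`) and key layout are hypothetical.

```typescript
import { createHash } from "node:crypto";

// In-memory stand-in for an R2 bucket: keys map to either raw blob
// bytes or a small reference object pointing at another key.
const bucket = new Map<string, Uint8Array | { ref: string }>();

// Hash the already-uploaded object in place instead of copying it.
function digestOf(key: string): string {
  const body = bucket.get(key);
  if (!(body instanceof Uint8Array)) throw new Error("not a blob: " + key);
  return "sha256:" + createHash("sha256").update(body).digest("hex");
}

// Instead of copying the (possibly >5GB) upload to blobs/<digest>,
// verify the digest and store a tiny reference at the digest path.
function putReference(uploadKey: string, expectedDigest: string): void {
  const actual = digestOf(uploadKey);
  if (actual !== expectedDigest) throw new Error("digest mismatch");
  bucket.set(`blobs/${expectedDigest}`, { ref: uploadKey });
}

// Reads follow the reference transparently back to the upload path.
function resolveBlob(digest: string): Uint8Array {
  const entry = bucket.get(`blobs/${digest}`);
  if (entry instanceof Uint8Array) return entry;
  if (entry && "ref" in entry) {
    const target = bucket.get(entry.ref);
    if (target instanceof Uint8Array) return target;
  }
  throw new Error("blob not found: " + digest);
}
```

The point of the reference object is that the large layer bytes are never re-written: only a small pointer is stored at the content-addressed path, so the 5GB monolithic-PUT limit never applies to the second write.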
…an continue hashing from where we left it
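The commit above refers to resuming hashing across chunks rather than re-reading the whole blob. A minimal sketch of that idea, assuming incremental SHA-256 via Node's `crypto` (the PR itself may persist hasher state differently across requests; this only shows that chunked updates yield the same digest as one pass):

```typescript
import { createHash } from "node:crypto";

// Feeding chunks one at a time produces the same digest as hashing
// the whole payload at once, so each uploaded part can be folded
// into a running hasher instead of re-streaming the blob at the end.
function digestChunks(chunks: Uint8Array[]): string {
  const h = createHash("sha256");
  for (const chunk of chunks) h.update(chunk); // continue from previous state
  return "sha256:" + h.digest("hex");
}

function digestWhole(data: Uint8Array): string {
  return "sha256:" + createHash("sha256").update(data).digest("hex");
}
```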
Tests fail due to some vitest issue that I've yet to figure out. However, the behavior looks OK to me from manual testing. I will fix the tests tomorrow.
This PR fixes the 5GB size limit in my local tests! Any update on merging this soon, @gabivlj?
Pull requests #100 and #101 should be merged into this PR first, before merging into the main branch. PS: the vitest error comes from the direct import of serverless-registry/src/registry/r2.ts (Line 37 in b740391), but I'm not sure how to fix it properly.
This PR has been pending for more than a year... Is anything still missing?
Should fix #69