Merged
55 changes: 35 additions & 20 deletions build/bazel/remote/execution/v2/remote_execution.proto
@@ -440,19 +440,19 @@ service ContentAddressableStorage {
option (google.api.http) = { get: "/v2/{instance_name=**}/blobs/{root_digest.hash}/{root_digest.size_bytes}:getTree" };
}

// Split a blob into chunks.
// SplitBlob retrieves information about how a blob is split into chunks.
//
// This call splits a blob into chunks, stores the chunks in the CAS, and
// returns a list of the chunk digests. Using this list, a client can check
// which chunks are locally available and just fetch the missing ones. The
// desired blob can be assembled by concatenating the fetched chunks in the
// order of the digests in the list.
// This call returns a list of chunk digests that describes how a blob is
// split into chunks. Using this list, a client can check which chunks are
// locally available and fetch only the missing ones. The desired blob can be
// assembled by concatenating the fetched chunks in the order of the digests
// in the list. The chunks SHOULD all be available in the CAS.
//
// This rpc can be used to reduce the required data to download a large blob
// from CAS if chunks from earlier downloads of a different version of this
// blob are locally available. For this procedure to work properly, blobs
// SHOULD be split in a content-defined way, rather than with fixed-sized
// chunking.
// This API can be used to reduce the amount of data that must be downloaded
// for a large blob from the CAS if some chunks from similar blobs are already
// locally available. For this procedure to work properly, blobs SHOULD be
// split in a content-defined way, rather than with fixed-size chunking.
//
// If a split request is answered successfully, a client can expect the
// following guarantees from the server:
@@ -491,26 +491,33 @@ service ContentAddressableStorage {
//
// Errors:
//
// * `NOT_FOUND`: The requested blob is not present in the CAS.
// * `NOT_FOUND`: The requested blob is not present in the CAS, OR there is no
// split information available for the blob, OR at least one chunk needed to
// reconstruct the blob is missing from the CAS.
// * `RESOURCE_EXHAUSTED`: There is insufficient disk quota to store the blob
// chunks.
rpc SplitBlob(SplitBlobRequest) returns (SplitBlobResponse) {
option (google.api.http) = { get: "/v2/{instance_name=**}/blobs/{blob_digest.hash}/{blob_digest.size_bytes}:splitBlob" };
}
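The download flow described above can be sketched as follows. This is a hypothetical client-side helper, not part of the REAPI bindings: `chunk_digests` stands in for the digest list from a `SplitBlobResponse`, `local_chunks` for a local chunk cache, and `fetch_chunk` for whatever CAS read call the client uses.

```python
def assemble_blob(chunk_digests, local_chunks, fetch_chunk):
    """Reassemble a blob from its chunk list, fetching only missing chunks.

    chunk_digests: ordered list of (hash, size) pairs, as returned by SplitBlob.
    local_chunks:  dict mapping hash -> bytes for locally available chunks.
    fetch_chunk:   callable that downloads one chunk from the CAS by hash.
    """
    parts = []
    for digest_hash, _size in chunk_digests:
        if digest_hash in local_chunks:
            # Chunk reused from an earlier download of a similar blob.
            parts.append(local_chunks[digest_hash])
        else:
            # Only the missing chunks hit the network.
            data = fetch_chunk(digest_hash)
            local_chunks[digest_hash] = data
            parts.append(data)
    # Concatenate in the order given by the digest list to obtain the blob.
    return b"".join(parts)
```

With content-defined chunking, a new version of a blob typically shares most chunks with the old version, so only the changed chunks are fetched.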

// Splice a blob from chunks.
// SpliceBlob tells the CAS how chunks can compose a blob.
//
// This is the complementary operation to the
// [ContentAddressableStorage.SplitBlob][build.bazel.remote.execution.v2.ContentAddressableStorage.SplitBlob]
// function to handle the chunked upload of large blobs to save upload
// traffic.
//
// When uploading a large blob using chunked upload, clients MUST first upload
// all chunks to the CAS, then call this RPC to tell the server how those
// chunks compose the original blob. The chunks referenced in the SpliceBlob
// call SHOULD therefore already be available in the CAS when this RPC is called.
//
// If a client needs to upload a large blob and is able to split it into
// chunks in such a way that reusable chunks are obtained, e.g., by means of
// content-defined chunking, it can first determine which parts of the blob
// are already available in the remote CAS, upload the missing chunks, and
// then use this API to store information on how the chunks compose the
// original blob.
//
// Servers which implement this functionality MUST declare that they support
// it by setting the
@@ -523,10 +530,13 @@ service ContentAddressableStorage {
// In order to ensure data consistency of the CAS, the server MUST only add
// blobs to the CAS after verifying their digests. In particular, servers MUST NOT
// trust digests provided by the client. The server MAY accept a request as no-op
// if the client-specified blob is already in CAS; the lifetime of that blob SHOULD
// be extended as usual. If the client-specified blob is not already in the CAS,
// the server SHOULD verify that the digest of the newly created blob matches the
// digest specified by the client, and reject the request if they differ.
// if the client-specified blob is already in CAS or if information on how to
// construct the blob from chunks is available. If the client-specified blob is
// not already in the CAS, the server MUST verify that the digest of the newly
// created blob assembled from chunks matches the digest specified by the
// client, and reject the request if they differ. Servers MAY choose to allow
// overwriting existing chunk mappings or to store multiple chunk mappings for
// the same blob.
//
// When blob splitting and splicing are used at the same time, the clients and
// the server SHOULD agree out-of-band upon a chunking algorithm used by both
@@ -540,6 +550,11 @@ service ContentAddressableStorage {
// spliced blob.
// * `INVALID_ARGUMENT`: The digest of the spliced blob is different from the
// provided expected digest.
// * `ALREADY_EXISTS`: The blob already exists in CAS and the server did not
// extend the lifetime of the chunks specified in the request, e.g. because
// it prefers a different chunking and extended those instead. Clients can
// call [SplitBlob][build.bazel.remote.execution.v2.ContentAddressableStorage.SplitBlob]
// to check what chunk mapping the server is using.
rpc SpliceBlob(SpliceBlobRequest) returns (SpliceBlobResponse) {
option (google.api.http) = { post: "/v2/{instance_name=**}/blobs:spliceBlob" body: "*" };
}
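The server-side requirements above (accept as no-op if the blob is already present, otherwise assemble the chunks and verify the digest before admitting the blob) can be sketched as follows. This is a simplified in-memory model, assuming SHA-256 digests and a plain dict standing in for the CAS, not a real server implementation:

```python
import hashlib

def splice_blob(cas, chunk_hashes, expected_hash):
    """Server-side sketch of SpliceBlob handling.

    cas:           dict mapping hash -> bytes (the stored chunks and blobs).
    chunk_hashes:  chunk order as given in the SpliceBlobRequest.
    expected_hash: client-provided digest of the spliced blob.
    """
    if expected_hash in cas:
        # Blob already present: the request MAY be accepted as a no-op.
        return expected_hash
    # A missing chunk here corresponds to the NOT_FOUND error case.
    blob = b"".join(cas[h] for h in chunk_hashes)
    # The server MUST verify the digest of the assembled blob itself;
    # it MUST NOT trust the digest provided by the client.
    actual = hashlib.sha256(blob).hexdigest()
    if actual != expected_hash:
        raise ValueError("INVALID_ARGUMENT: spliced digest mismatch")
    cas[expected_hash] = blob
    return expected_hash
```

The digest check is what keeps the CAS content-addressed even though the blob was never uploaded as a single object.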
@@ -1804,7 +1819,7 @@ message BatchUpdateBlobsRequest {
bytes data = 2;

// The format of `data`. Must be `IDENTITY`/unspecified, or one of the
// compressors advertised by the
// [CacheCapabilities.supported_batch_compressors][build.bazel.remote.execution.v2.CacheCapabilities.supported_batch_compressors]
// field.
Compressor.Value compressor = 3;
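The negotiation implied by this field can be sketched as follows; a hypothetical helper (`encode_update_data` is not a real API) that picks an encoding for `data` based on what the server advertised. Using `zlib.compress` for the DEFLATE case is a simplification; the exact wire format is defined by the `Compressor.Value` enum in the spec.

```python
import zlib

def encode_update_data(data, supported_batch_compressors):
    """Choose an encoding for BatchUpdateBlobsRequest.data.

    A compressor other than IDENTITY may be used only if the server
    advertised it in CacheCapabilities.supported_batch_compressors.
    The string names here informally mirror the Compressor.Value enum.
    """
    if "DEFLATE" in supported_batch_compressors:
        # Simplification: zlib.compress emits zlib-wrapped DEFLATE data.
        return "DEFLATE", zlib.compress(data)
    # Fall back to sending the bytes unmodified.
    return "IDENTITY", data
```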