
[FEATURE] Generate ranged responses with downloadZip #68

@Harsh-br0

Description


Is your feature request related to a problem? Please describe.
This is the only client library I have found that generates zips in a streaming fashion and can be used in Cloudflare Workers.
But the response doesn't seem to be truly streaming because of the lack of partial content (ranged) responses.

Why is your feature relevant to client-zip?
Cloudflare Workers have a browser-based architecture (with WHATWG-compliant streaming APIs like ReadableStream and TransformStream), so I needed a client library capable of generating zips in a streaming fashion for large outputs.

Describe the solution you'd like
At the moment I'm not sure about the details, but it seems like this function

client-zip/src/zip.ts

Lines 51 to 116 in a338414

export async function* loadFiles(files: ForAwaitable<ZipEntryDescription & Metadata>, options: Options) {
  const centralRecord: Uint8Array[] = []
  let offset = 0n
  let fileCount = 0n
  let archiveNeedsZip64 = false
  // write files
  for await (const file of files) {
    const flags = flagNameUTF8(file, options.buffersAreUTF8)
    yield fileHeader(file, flags)
    yield file.encodedName
    if (file.isFile) {
      yield* fileData(file)
    }
    const bigFile = file.uncompressedSize! >= 0xffffffffn
    const bigOffset = offset >= 0xffffffffn
    // @ts-ignore
    const zip64HeaderLength = (bigOffset * 12 | bigFile * 28) as Zip64FieldLength
    yield dataDescriptor(file, bigFile)
    centralRecord.push(centralHeader(file, offset, flags, zip64HeaderLength))
    centralRecord.push(file.encodedName)
    if (zip64HeaderLength) centralRecord.push(zip64ExtraField(file, offset, zip64HeaderLength))
    if (bigFile) offset += 8n // because the data descriptor will have 64-bit sizes
    fileCount++
    offset += BigInt(fileHeaderLength + descriptorLength + file.encodedName.length) + file.uncompressedSize!
    archiveNeedsZip64 ||= bigFile
  }
  // write central repository
  let centralSize = 0n
  for (const record of centralRecord) {
    yield record
    centralSize += BigInt(record.length)
  }
  if (archiveNeedsZip64 || offset >= 0xffffffffn) {
    const endZip64 = makeBuffer(zip64endRecordLength + zip64endLocatorLength)
    // 4.3.14 Zip64 end of central directory record
    endZip64.setUint32(0, zip64endRecordSignature)
    endZip64.setBigUint64(4, BigInt(zip64endRecordLength - 12), true)
    endZip64.setUint32(12, 0x2d03_2d_00) // UNIX app version 4.5 | ZIP version 4.5
    // leave 8 bytes at zero
    endZip64.setBigUint64(24, fileCount, true)
    endZip64.setBigUint64(32, fileCount, true)
    endZip64.setBigUint64(40, centralSize, true)
    endZip64.setBigUint64(48, offset, true)
    // 4.3.15 Zip64 end of central directory locator
    endZip64.setUint32(56, zip64endLocatorSignature)
    // leave 4 bytes at zero
    endZip64.setBigUint64(64, offset + centralSize, true)
    endZip64.setUint32(72, 1, true)
    yield makeUint8Array(endZip64)
  }
  const end = makeBuffer(endLength)
  end.setUint32(0, endSignature)
  // skip 4 useless bytes here
  end.setUint16(8, clampInt16(fileCount), true)
  end.setUint16(10, clampInt16(fileCount), true)
  end.setUint32(12, clampInt32(centralSize), true)
  end.setUint32(16, clampInt32(offset), true)
  // leave comment length = zero (2 bytes)
  yield makeUint8Array(end)
}

could accept a byteRange property as a 2-element array, or start and end properties in the options object, and yield only the bytes whose index lies within that range.
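Roughly what I have in mind is shown below; note that both the byteRange and the start/end options are hypothetical and don't exist in the current API:

```ts
import { downloadZip } from "client-zip"

const files = [{ name: "hello.txt", input: "Hello!" }]
const [rangeStart, rangeEnd] = [0, 65535]

// Hypothetical options, not part of client-zip today: the generated response
// would contain only the archive bytes whose index falls within this range.
const partial = downloadZip(files, { byteRange: [rangeStart, rangeEnd] })
// ...or, equivalently, with separate start/end properties in the options object:
const partial2 = downloadZip(files, { start: rangeStart, end: rangeEnd })
```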

Describe alternatives you've considered
Currently it is possible to achieve the same thing manually by wrapping the makeZip output stream with your own function.
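For instance, a minimal sketch of such a wrapper (sliceStream is my own hypothetical helper, not something client-zip provides; makeZip is the existing streaming API):

```ts
import { makeZip } from "client-zip"

// Pass through only the bytes of `source` whose absolute position falls
// inside [start, end] (an inclusive byte range, as in HTTP Range headers).
function sliceStream(source: ReadableStream<Uint8Array>, start: number, end: number) {
  let position = 0
  return source.pipeThrough(new TransformStream<Uint8Array, Uint8Array>({
    transform(chunk, controller) {
      const chunkStart = position
      position += chunk.length
      // skip chunks that lie entirely before or after the requested range
      if (position <= start || chunkStart > end) return
      controller.enqueue(chunk.subarray(
        Math.max(start - chunkStart, 0),
        Math.min(end + 1 - chunkStart, chunk.length)
      ))
    }
  }))
}

// e.g. answering a Range request for the first 64 KiB of the archive
const files = [{ name: "hello.txt", input: "Hello!" }]
const partialBody = sliceStream(makeZip(files), 0, 65535)
const response = new Response(partialBody, {
  status: 206,
  headers: { "Content-Range": "bytes 0-65535/*" }
})
```

The downside, as I see it, is that everything before the requested range still has to be generated and thrown away, which is why built-in support would be preferable.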

Additional Context
Thanks for such a great library. I'm hoping for more features, like a modular approach similar to node-archiver,
so the implementation can be platform- and format-agnostic.
