@node Chunked
@cindex chunked
@unnumbered Chunked files

Huge files can be transferred by splitting them into smaller chunks.
Each chunk is treated as a separate file, producing a separate outbound
packet unrelated to the others.

This is useful when your removable storage device has a smaller
capacity than the file itself. You can transfer those chunks on
different storage devices and/or at different times, reassembling the
whole file on the destination node.

Splitting is done with the @command{@ref{nncp-file} -chunked} command
and reassembling with the @command{@ref{nncp-reass}} command. Example
invocations are shown at the end of this section.

@vindex .nncp.meta
@vindex .nncp.chunk
Chunked @file{FILE} produces @file{FILE.nncp.meta},
@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, @dots{} files. All
@file{.nncp.chunkXXX} files can be concatenated together to produce the
original @file{FILE}.

@file{.nncp.meta} contains information about the file/chunk sizes and
their hash checksums. It is an
@url{https://tools.ietf.org/html/rfc4506, XDR}-encoded structure:

@verbatim
+------------------------------+---------------------+
| MAGIC | FILESIZE | CHUNKSIZE | HASH0 | HASH1 | ... |
+------------------------------+---------------------+
@end verbatim

@multitable @columnfractions 0.2 0.3 0.5
@headitem @tab XDR type @tab Value
@item Magic number @tab
8-byte, fixed length opaque data @tab
@verb{|N N C P M 0x00 0x00 0x02|}
@item File size @tab
unsigned hyper integer @tab
Whole reassembled file's size
@item Chunk size @tab
unsigned hyper integer @tab
Size of each chunk (except for the last one, which may be smaller)
@item Checksums @tab
variable length array of 32-byte fixed length opaque data @tab
@ref{MTH} checksum of each chunk
@end multitable

A sketch of decoding that structure is given at the end of this
section.

@cindex ZFS recordsize
@anchor{ChunkedZFS}
It is strongly advisable to reassemble incoming chunked files on a
@url{https://en.wikipedia.org/wiki/ZFS, ZFS} dataset with the
deduplication feature enabled. It is more CPU and memory hungry, but
saves your disk's I/O and free space from (temporary) pollution. Pay
attention that your chunk size must be equal to, or a multiple of, the
dataset's @option{recordsize} value for deduplication to work. ZFS's
default @option{recordsize} is 128 KiB, so it is advisable to chunk
your files into 128, 256, 384, 512, etc KiB blocks: for example, with
the default recordsize a 256 KiB chunk occupies exactly two records, so
identical chunks deduplicate cleanly.
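As an illustration, a hypothetical session splitting a file into
256 KiB chunks and reassembling it on the destination node could look
as follows. The node name @samp{remote} is invented, and the exact
option syntax should be checked against the @command{@ref{nncp-file}}
and @command{@ref{nncp-reass}} command references:

@example
$ nncp-file -chunked 256 hugefile.iso remote:
@dots{}
$ nncp-reass hugefile.iso.nncp.meta
@end example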
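Because NNCP itself is written in Go, a minimal Go sketch of decoding
the @file{.nncp.meta} structure by hand may clarify the layout. XDR
encodes integers big-endian and prefixes a variable-length array with a
4-byte element count; 32-byte opaques need no padding. This is only an
illustration of the documented layout, not NNCP's actual
implementation:

@verbatim
// Sketch: decode a .nncp.meta file following the XDR layout above.
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"log"
	"os"
)

func must(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	f, err := os.Open(os.Args[1])
	must(err)
	defer f.Close()

	// 8-byte fixed-length opaque: magic "NNCPM\x00\x00\x02"
	var magic [8]byte
	_, err = io.ReadFull(f, magic[:])
	must(err)
	if string(magic[:5]) != "NNCPM" {
		log.Fatal("bad magic")
	}

	// Two unsigned hypers: 64-bit big-endian integers
	var fileSize, chunkSize uint64
	must(binary.Read(f, binary.BigEndian, &fileSize))
	must(binary.Read(f, binary.BigEndian, &chunkSize))

	// Variable-length array: 4-byte element count, then
	// one 32-byte fixed-length opaque checksum per chunk
	var count uint32
	must(binary.Read(f, binary.BigEndian, &count))
	hashes := make([][32]byte, count)
	for i := range hashes {
		_, err = io.ReadFull(f, hashes[i][:])
		must(err)
	}

	fmt.Printf("file size: %d, chunk size: %d, %d chunks\n",
		fileSize, chunkSize, count)
	for i, h := range hashes {
		fmt.Printf("chunk %d: %x\n", i, h)
	}
}
@end verbatim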