X-Git-Url: http://www.git.cypherpunks.ru/?p=nncp.git;a=blobdiff_plain;f=doc%2Fchunked.texi;h=b73eb9223937f9b5f6caac7e85ba3cfecb074f72;hp=27ad5099661f8dd760b90ef5ff07ff14a5b142d4;hb=203dfe36da7adf2b3089e4fa4017a67409cbad70;hpb=f137f5520414dad67e6d9895eb2c10a94366e9ef

diff --git a/doc/chunked.texi b/doc/chunked.texi
index 27ad509..b73eb92 100644
--- a/doc/chunked.texi
+++ b/doc/chunked.texi
@@ -1,7 +1,8 @@
 @node Chunked
+@cindex chunked
 @unnumbered Chunked files
 
-There is ability to transfer huge files with splitting them into smaller
+There is the ability to transfer huge files by dividing them into smaller
 chunks. Each chunk is treated like a separate file, producing separate
 outbound packet unrelated with other ones.
 
@@ -10,11 +11,13 @@ than huge file's size. You can transfer those chunks on different storage
 devices, and/or at different time, reassembling the whole packet on the
 destination node.
 
-Splitting is done with @ref{nncp-file, nncp-file -chunk} command and
+Splitting is done with @ref{nncp-file, nncp-file -chunked} command and
 reassembling with @ref{nncp-reass} command.
 
+@vindex .nncp.meta
+@vindex .nncp.chunk
 Chunked @file{FILE} produces @file{FILE.nncp.meta},
-@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, ... files. All
+@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, @dots{} files. All
 @file{.nncp.chunkXXX} can be concatenated together to produce original
 @file{FILE}.
 
@@ -29,10 +32,10 @@ size and their hash checksums. This is
 @end verbatim
 
 @multitable @columnfractions 0.2 0.3 0.5
-@headitem @tab XDR type @tab Value
+@headitem @tab XDR type @tab Value
 @item Magic number @tab
     8-byte, fixed length opaque data @tab
-    @verb{|N N C P M 0x00 0x00 0x01|}
+    @verb{|N N C P M 0x00 0x00 0x02|}
 @item File size @tab
     unsigned hyper integer @tab
     Whole reassembled file's size
@@ -41,5 +44,16 @@ size and their hash checksums. This is
     Size of each chunk (except for the last one, that could be smaller)
 @item Checksums @tab
     variable length array of 32 byte fixed length opaque data @tab
-    BLAKE2b-256 checksum of each chunk
+    @ref{MTH} checksum of each chunk
 @end multitable
+
+@cindex ZFS recordsize
+@anchor{ChunkedZFS}
+It is strongly advisable to reassemble incoming chunked files on a
+@url{https://en.wikipedia.org/wiki/ZFS, ZFS} dataset with the deduplication
+feature enabled. It could be more CPU and memory hungry, but it will
+save your disk's I/O and keep free space from (temporary) pollution.
+Pay attention that your chunks must be either equal to, or divisible
+by the dataset's @option{recordsize} value for deduplication to work.
+ZFS's default @option{recordsize} is 128 KiB, so it is advisable to
+chunk your files into 128, 256, 384, 512, etc. KiB blocks.
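
As the text above notes, the .nncp.chunkXXX pieces can simply be
concatenated, in order, to rebuild the original FILE. Below is a minimal
Go sketch of that idea, illustrative only and not taken from NNCP's
sources: nncp-reass remains the proper tool, since it also verifies
chunk sizes and checksums against the .nncp.meta file.

// Illustrative sketch: rebuild FILE by concatenating FILE.nncp.chunk0,
// FILE.nncp.chunk1, ... in order. No checksum verification is done here;
// nncp-reass is the proper tool for that.
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	base := os.Args[1] // e.g. "FILE", sitting next to FILE.nncp.meta
	out, err := os.Create(base)
	if err != nil {
		panic(err)
	}
	defer out.Close()
	for i := 0; ; i++ {
		chunk, err := os.Open(fmt.Sprintf("%s.nncp.chunk%d", base, i))
		if os.IsNotExist(err) {
			break // no more chunks
		}
		if err != nil {
			panic(err)
		}
		if _, err := io.Copy(out, chunk); err != nil {
			panic(err)
		}
		chunk.Close()
	}
}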
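
For the metadata table above, the following is a rough reading sketch in
Go, assuming standard XDR encoding per RFC 4506: a fixed 8-byte opaque
magic, big-endian 64-bit unsigned hypers, and a 4-byte element count
before the fixed 32-byte checksum entries. The type and function names
are hypothetical, not NNCP's actual identifiers.

// Hypothetical decoder for the .nncp.meta layout described in the table
// above (a sketch mirroring the documented fields, not NNCP's source).
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"io"
	"os"
)

// ChunkedMeta mirrors the table's fields; the name is illustrative.
type ChunkedMeta struct {
	FileSize  uint64     // whole reassembled file's size
	ChunkSize uint64     // size of each chunk, except possibly the last
	Checksums [][32]byte // one 32-byte checksum per chunk
}

var magic = []byte{'N', 'N', 'C', 'P', 'M', 0x00, 0x00, 0x02}

func readMeta(r io.Reader) (*ChunkedMeta, error) {
	// Magic number: 8-byte fixed length opaque data.
	var gotMagic [8]byte
	if _, err := io.ReadFull(r, gotMagic[:]); err != nil {
		return nil, err
	}
	if !bytes.Equal(gotMagic[:], magic) {
		return nil, errors.New("unknown magic number")
	}
	meta := &ChunkedMeta{}
	// File size and chunk size: XDR unsigned hyper integers (big-endian).
	if err := binary.Read(r, binary.BigEndian, &meta.FileSize); err != nil {
		return nil, err
	}
	if err := binary.Read(r, binary.BigEndian, &meta.ChunkSize); err != nil {
		return nil, err
	}
	// Checksums: XDR variable length array, 4-byte count then 32-byte entries.
	var count uint32
	if err := binary.Read(r, binary.BigEndian, &count); err != nil {
		return nil, err
	}
	meta.Checksums = make([][32]byte, count)
	for i := range meta.Checksums {
		if _, err := io.ReadFull(r, meta.Checksums[i][:]); err != nil {
			return nil, err
		}
	}
	return meta, nil
}

func main() {
	fd, err := os.Open(os.Args[1]) // e.g. FILE.nncp.meta
	if err != nil {
		panic(err)
	}
	defer fd.Close()
	meta, err := readMeta(fd)
	if err != nil {
		panic(err)
	}
	fmt.Printf("size=%d chunk=%d chunks=%d\n",
		meta.FileSize, meta.ChunkSize, len(meta.Checksums))
}

With a real meta file at hand it would print the reassembled file's
size, the chunk size, and the number of chunks.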