X-Git-Url: http://www.git.cypherpunks.ru/?p=nncp.git;a=blobdiff_plain;f=doc%2Fchunked.texi;h=b73eb9223937f9b5f6caac7e85ba3cfecb074f72;hp=1682cc35ad83a50c576630f621d039fd177fb4d4;hb=203dfe36da7adf2b3089e4fa4017a67409cbad70;hpb=07550b82c27aed5186aea04b2a0f7d36dfaeb0c3

diff --git a/doc/chunked.texi b/doc/chunked.texi
index 1682cc3..b73eb92 100644
--- a/doc/chunked.texi
+++ b/doc/chunked.texi
@@ -1,7 +1,8 @@
 @node Chunked
+@cindex chunked
 @unnumbered Chunked files
 
-There is ability to transfer huge files with splitting them into smaller
+Huge files can be transferred by dividing them into smaller
 chunks. Each chunk is treated like a separate file, producing a separate
 outbound packet unrelated to the other ones.
 
@@ -13,8 +14,10 @@ on the destination node.
 Splitting is done with the @ref{nncp-file, nncp-file -chunked} command
 and reassembling with the @ref{nncp-reass} command.
 
+@vindex .nncp.meta
+@vindex .nncp.chunk
 Chunked @file{FILE} produces @file{FILE.nncp.meta},
-@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, ... files. All
+@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, @dots{} files. All
 @file{.nncp.chunkXXX} files can be concatenated together to produce
 the original @file{FILE}.
 
@@ -32,7 +35,7 @@ size and their hash checksums. This is
 @headitem @tab XDR type @tab Value
 @item Magic number @tab
     8-byte, fixed length opaque data @tab
-    @verb{|N N C P M 0x00 0x00 0x01|}
+    @verb{|N N C P M 0x00 0x00 0x02|}
 @item File size @tab
     unsigned hyper integer @tab
     Whole reassembled file's size
@@ -41,5 +44,16 @@ size and their hash checksums. This is
     Size of each chunk (except for the last one, which could be smaller)
 @item Checksums @tab
     variable length array of 32-byte fixed length opaque data @tab
-    BLAKE2b-256 checksum of each chunk
+    @ref{MTH} checksum of each chunk
 @end multitable
+
+@cindex ZFS recordsize
+@anchor{ChunkedZFS}
+It is strongly advisable to reassemble incoming chunked files on a
+@url{https://en.wikipedia.org/wiki/ZFS, ZFS} dataset with deduplication
+enabled. It could be more CPU and memory hungry, but it will save your
+disk's I/O and keep free space from (albeit temporary) pollution. But
+pay attention that your chunk size must be either equal to, or divisible
+by, the dataset's @option{recordsize} for deduplication to work. ZFS's
+default @option{recordsize} is 128 KiB, so it is advisable to chunk your
+files into blocks of 128, 256, 384, 512, etc. KiB.
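
The note above that all @file{.nncp.chunkXXX} pieces can simply be
concatenated back into the original @file{FILE} can be illustrated with
a minimal Go sketch. This is not NNCP's code: in practice @ref{nncp-reass}
should be used, since it also verifies the per-chunk checksums from
@file{FILE.nncp.meta}, and the chunk names below are only examples.

@verbatim
// Minimal sketch: reassemble a chunked file by plain concatenation of
// its FILE.nncp.chunkXXX pieces, in order. nncp-reass should normally
// be used instead, since it also verifies the per-chunk checksums.
package main

import (
	"fmt"
	"io"
	"os"
)

// concatChunks writes the chunk files to dst in the given order.
func concatChunks(dst string, chunks []string) error {
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	for _, name := range chunks {
		in, err := os.Open(name)
		if err != nil {
			return err
		}
		_, err = io.Copy(out, in)
		in.Close()
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Hypothetical chunk names as produced by "nncp-file -chunked".
	chunks := []string{"FILE.nncp.chunk0", "FILE.nncp.chunk1"}
	if err := concatChunks("FILE", chunks); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
@end verbatim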
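
The @file{.nncp.meta} table above can also be read as a data structure.
The following Go sketch only mirrors the field order, the XDR-sized
types and the @verb{|N N C P M 0x00 0x00 0x02|} magic from the table;
the type and field names are hypothetical and are not taken from NNCP's
sources.

@verbatim
// Illustrative model of the .nncp.meta layout described in the table:
// magic number, whole file size, chunk size, and per-chunk checksums.
// Names are hypothetical; only field order, sizes and the magic value
// follow the table above.
package main

import "fmt"

// Magic number: "N N C P M 0x00 0x00 0x02".
var MetaMagic = [8]byte{'N', 'N', 'C', 'P', 'M', 0x00, 0x00, 0x02}

// ChunkedMeta mirrors the table's fields.
type ChunkedMeta struct {
	Magic     [8]byte    // 8-byte fixed length opaque data
	FileSize  uint64     // unsigned hyper: whole reassembled file's size
	ChunkSize uint64     // unsigned hyper: size of each chunk but the last
	Checksums [][32]byte // MTH checksum of each chunk
}

// ChunkCount derives how many chunks the metadata should describe.
func (m *ChunkedMeta) ChunkCount() uint64 {
	if m.ChunkSize == 0 {
		return 0
	}
	return (m.FileSize + m.ChunkSize - 1) / m.ChunkSize
}

func main() {
	meta := ChunkedMeta{
		Magic:     MetaMagic,
		FileSize:  1 << 30,   // a 1 GiB file...
		ChunkSize: 128 << 10, // ...split into 128 KiB chunks
	}
	meta.Checksums = make([][32]byte, meta.ChunkCount())
	fmt.Println("chunks described:", len(meta.Checksums)) // 8192
}
@end verbatim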
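
As a small worked example of the @option{recordsize} rule above, the
helper below checks whether a chosen chunk size is a whole multiple of
the (assumed default) 128 KiB @option{recordsize}; the function is
illustrative only and not part of NNCP.

@verbatim
// Illustration of the alignment rule above: for ZFS deduplication to
// be effective, the chunk size should be equal to, or a multiple of,
// the dataset's recordsize (128 KiB by default).
package main

import "fmt"

const defaultRecordSize = 128 << 10 // assumed default ZFS recordsize

// dedupFriendly reports whether chunkSize is a whole multiple of the
// dataset's recordsize.
func dedupFriendly(chunkSize, recordSize int64) bool {
	return chunkSize > 0 && chunkSize%recordSize == 0
}

func main() {
	for _, kib := range []int64{100, 128, 200, 256, 384, 512} {
		fmt.Printf("%3d KiB chunks: dedup-friendly=%v\n",
			kib, dedupFriendly(kib<<10, defaultRecordSize))
	}
}
@end verbatim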