X-Git-Url: http://www.git.cypherpunks.ru/?a=blobdiff_plain;f=doc%2Fchunked.texi;h=67f425919d69f4fe24f6d24e21dccbac2dd986bf;hb=7f71f37675f61b4081ad6fef2a936f1c7eb620f9;hp=1682cc35ad83a50c576630f621d039fd177fb4d4;hpb=093f249044a62ce4d988542c7267caf1da5d0968;p=nncp.git diff --git a/doc/chunked.texi b/doc/chunked.texi index 1682cc3..67f4259 100644 --- a/doc/chunked.texi +++ b/doc/chunked.texi @@ -1,7 +1,7 @@ @node Chunked @unnumbered Chunked files -There is ability to transfer huge files with splitting them into smaller +There is ability to transfer huge files with dividing them into smaller chunks. Each chunk is treated like a separate file, producing separate outbound packet unrelated with other ones. @@ -14,7 +14,7 @@ Splitting is done with @ref{nncp-file, nncp-file -chunked} command and reassembling with @ref{nncp-reass} command. Chunked @file{FILE} produces @file{FILE.nncp.meta}, -@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, ... files. All +@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, @dots{} files. All @file{.nncp.chunkXXX} can be concatenated together to produce original @file{FILE}. @@ -32,7 +32,7 @@ size and their hash checksums. This is @headitem @tab XDR type @tab Value @item Magic number @tab 8-byte, fixed length opaque data @tab - @verb{|N N C P M 0x00 0x00 0x01|} + @verb{|N N C P M 0x00 0x00 0x02|} @item File size @tab unsigned hyper integer @tab Whole reassembled file's size @@ -41,5 +41,15 @@ size and their hash checksums. This is Size of each chunk (except for the last one, that could be smaller) @item Checksums @tab variable length array of 32 byte fixed length opaque data @tab - BLAKE2b-256 checksum of each chunk + @ref{MTH} checksum of each chunk @end multitable + +@anchor{ChunkedZFS} +It is strongly advisable to reassemble incoming chunked files on +@url{https://en.wikipedia.org/wiki/ZFS, ZFS} dataset with deduplication +feature enabled. It could be more CPU and memory hungry, but will save +your disk's IO and free space from pollution (although temporary). But +pay attention that you chunks must be either equal to, or divisible by +dataset's @option{recordsize} value for deduplication workability. +Default ZFS's @option{recordsize} is 128 KiBs, so it is advisable to +chunk your files on 128, 256, 384, 512, etc KiB blocks.