diff --git a/doc/chunked.texi b/doc/chunked.texi
index 5bdbc22..d0f111a 100644
--- a/doc/chunked.texi
+++ b/doc/chunked.texi
@@ -1,7 +1,8 @@
 @node Chunked
+@cindex chunked
 @unnumbered Chunked files
 
-There is ability to transfer huge files with splitting them into smaller
+Huge files can be transferred by dividing them into smaller
 chunks. Each chunk is treated as a separate file, producing a separate
 outbound packet unrelated to the others.
 
@@ -10,16 +11,18 @@ than huge file's size. You can transfer those chunks on different storage
 devices, and/or at different times, reassembling the whole packet on the
 destination node.
 
-Splitting is done with @ref{nncp-file, nncp-file -chunk} command and
-reassembling with @ref{nncp-reass} command.
+Splitting is done with the @command{@ref{nncp-file} -chunked} command and
+reassembling with the @command{@ref{nncp-reass}} command.
 
+@vindex .nncp.meta
+@vindex .nncp.chunk
 Chunked @file{FILE} produces @file{FILE.nncp.meta},
-@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, ... files. All
+@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, @dots{} files. All
 @file{.nncp.chunkXXX} can be concatenated together to produce the original
 @file{FILE}. @file{.nncp.meta} contains information about file/chunk
-size and their hash checksums. It is
+size and their hash checksums. This is an
 @url{https://tools.ietf.org/html/rfc4506, XDR}-encoded structure:
 
 @verbatim
@@ -29,17 +32,28 @@ size and their hash checksums. It is
 @end verbatim
 
 @multitable @columnfractions 0.2 0.3 0.5
-@headitem @tab XDR type @tab Value
+@headitem Field @tab XDR type @tab Value
 @item Magic number @tab
     8-byte, fixed length opaque data @tab
-    @verb{|N N C P M 0x00 0x00 0x01|}
+    @verb{|N N C P M 0x00 0x00 0x02|}
 @item File size @tab
     unsigned hyper integer @tab
     Whole reassembled file's size
 @item Chunk size @tab
     unsigned hyper integer @tab
-    Size of each chunks (except for the last one, that could be smaller).
+    Size of each chunk (except the last one, which may be smaller)
 @item Checksums @tab
     variable length array of 32-byte fixed length opaque data @tab
-    BLAKE2b-256 checksum of each chunk
+    @ref{MTH} checksum of each chunk
 @end multitable
+
+@cindex ZFS recordsize
+@anchor{ChunkedZFS}
+It is strongly advisable to reassemble incoming chunked files on a
+@url{https://en.wikipedia.org/wiki/ZFS, ZFS} dataset with the
+deduplication feature enabled. Deduplication is more CPU and memory
+hungry, but it saves your disk's I/O and keeps free space from (albeit
+temporary) pollution. Pay attention that your chunks must be either
+equal to, or divisible by, the dataset's @option{recordsize} value for
+deduplication to work. ZFS's default @option{recordsize} is 128 KiB, so
+it is advisable to chunk your files into 128, 256, 384, 512,
+@dots{} KiB blocks.
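
As an illustration of the commands this patch references, a possible
session follows. This is a sketch, not part of the patch itself: the
file and node names are invented, and it assumes @command{nncp-file}'s
@option{-chunked} option takes the chunk size in KiBs:

@example
# on the source node: split into 128 KiB chunks and queue them
$ nncp-file -chunked 128 backup.tar remote.node:
# on the destination node, after all chunk packets have arrived:
$ nncp-reass backup.tar.nncp.meta
@end example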
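
Since the @file{.nncp.chunkXXX} files are a plain byte-split of the
original, the concatenation claim can be checked by hand. A sketch,
assuming GNU coreutils (@command{sort -V} orders @file{chunk10} after
@file{chunk2}, which a plain shell glob would not):

@example
$ cat $(ls FILE.nncp.chunk* | sort -V) > FILE.reassembled
$ cmp FILE FILE.reassembled
@end example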
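
The ZFS advice in the final hunk could be put into practice along these
lines. The pool and dataset names here are invented; @option{dedup} and
@option{recordsize} are standard ZFS dataset properties:

@example
$ zfs create -o dedup=on -o recordsize=128k tank/nncp-incoming
$ zfs get recordsize,dedup tank/nncp-incoming
@end example

With a 128 KiB @option{recordsize}, chunk sizes of 128, 256, 384,
@dots{} KiB keep every chunk aligned to whole records, so the
reassembled file's records can deduplicate against the chunk files
still occupying the disk.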