diff --git a/doc/chunked.texi b/doc/chunked.texi
index 41b1d7a..b73eb92 100644
--- a/doc/chunked.texi
+++ b/doc/chunked.texi
@@ -1,7 +1,8 @@
 @node Chunked
+@cindex chunked
 @unnumbered Chunked files
 
-There is ability to transfer huge files with splitting them into smaller
+Huge files can be transferred by dividing them into smaller
 chunks. Each chunk is treated like a separate file, producing separate
 outbound packet unrelated with other ones.
 
@@ -10,14 +11,19 @@ than huge file's size. You can transfer those chunks on different
 storage devices, and/or at different time, reassembling the whole packet
 on the destination node.
 
-Splitting is done with @ref{nncp-file, nncp-file -chunk} command and
+Splitting is done with the @ref{nncp-file, nncp-file -chunked} command and
 reassembling with @ref{nncp-reass} command.
 
+@vindex .nncp.meta
+@vindex .nncp.chunk
 Chunked @file{FILE} produces @file{FILE.nncp.meta},
-@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, ... files. All
+@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, @dots{} files. All
 @file{.nncp.chunkXXX} can be concatenated together to produce original
-@file{FILE}. @file{.nncp.meta} contains information about file/chunk
-size and their hash checksums:
+@file{FILE}.
+
+@file{.nncp.meta} contains information about the file/chunk
+sizes and their hash checksums. It is an
+@url{https://tools.ietf.org/html/rfc4506, XDR}-encoded structure:
 
 @verbatim
 +------------------------------+---------------------+
@@ -26,17 +32,28 @@ size and their hash checksums:
 @end verbatim
 
 @multitable @columnfractions 0.2 0.3 0.5
-@headitem @tab XDR type @tab Value 
+@headitem @tab XDR type @tab Value
 @item Magic number @tab
     8-byte, fixed length opaque data @tab
-    @verb{|N N C P M 0x00 0x00 0x01|}
+    @verb{|N N C P M 0x00 0x00 0x02|}
 @item File size @tab
     unsigned hyper integer @tab
     Whole reassembled file's size
 @item Chunk size @tab
     unsigned hyper integer @tab
-    Size of each chunks (except for the last one, that could be smaller).
+    Size of each chunk (except for the last one, which may be smaller)
 @item Checksums @tab
     variable length array of 32 byte fixed length opaque data @tab
-    BLAKE2b-256 checksum of each chunk
+    @ref{MTH} checksum of each chunk
 @end multitable
+
+@cindex ZFS recordsize
+@anchor{ChunkedZFS}
+It is strongly advisable to reassemble incoming chunked files on a
+@url{https://en.wikipedia.org/wiki/ZFS, ZFS} dataset with deduplication
+enabled. It is more CPU and memory hungry, but saves your disk's I/O
+and keeps free space from (temporary) pollution. Pay attention that
+your chunk size must be equal to, or a multiple of, the dataset's
+@option{recordsize} value for deduplication to work. ZFS's default
+@option{recordsize} is 128 KiB, so it is advisable to chunk your files
+into 128, 256, 384, 512, @dots{} KiB blocks.
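
As an illustration of the splitting and reassembling described above, a
typical session might look like the sketch below. The node name
@file{remote} and the file name @file{huge.iso} are made up, and the
@option{-chunked} argument is assumed to be the chunk size in KiB:

@example
# split into 128 KiB chunks and queue them for node "remote"
$ nncp-file -chunked 128 huge.iso remote:

# on the destination node, once all chunks have arrived
$ nncp-reass huge.iso.nncp.meta

# chunks are plain slices of the original file, so manual
# concatenation also works (mind lexical glob order past chunk9)
$ cat huge.iso.nncp.chunk* > huge.iso
@end example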
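
The metadata table above translates directly into a small decoder. The
following is a minimal sketch in Go; the @code{Meta} type and
@code{readMeta} function are illustrative names, not NNCP's own, and
only the wire layout is taken from the table (XDR stores integers
big-endian, and prefixes variable-length arrays with a 32-bit element
count):

@verbatim
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// Meta mirrors the .nncp.meta layout described above.
type Meta struct {
	Magic     [8]byte    // "NNCPM" 0x00 0x00 0x02
	FileSize  uint64     // XDR unsigned hyper: reassembled file's size
	ChunkSize uint64     // XDR unsigned hyper: size of each chunk
	Checksums [][32]byte // one 32-byte checksum per chunk
}

func readMeta(r io.Reader) (*Meta, error) {
	var m Meta
	// Fixed-length opaque data is stored verbatim.
	if _, err := io.ReadFull(r, m.Magic[:]); err != nil {
		return nil, err
	}
	// Unsigned hypers are 64-bit big-endian integers.
	if err := binary.Read(r, binary.BigEndian, &m.FileSize); err != nil {
		return nil, err
	}
	if err := binary.Read(r, binary.BigEndian, &m.ChunkSize); err != nil {
		return nil, err
	}
	// A variable-length array carries its element count first.
	var n uint32
	if err := binary.Read(r, binary.BigEndian, &n); err != nil {
		return nil, err
	}
	m.Checksums = make([][32]byte, n)
	for i := range m.Checksums {
		if _, err := io.ReadFull(r, m.Checksums[i][:]); err != nil {
			return nil, err
		}
	}
	return &m, nil
}

func main() {
	f, err := os.Open(os.Args[1]) // FILE.nncp.meta
	if err != nil {
		panic(err)
	}
	defer f.Close()
	m, err := readMeta(f)
	if err != nil {
		panic(err)
	}
	fmt.Printf("file size %d, chunk size %d, %d chunks\n",
		m.FileSize, m.ChunkSize, len(m.Checksums))
}
@end verbatim

A sanity check falls out for free: the checksum count should equal the
file size divided by the chunk size, rounded up.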
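
As a sketch of the ZFS deduplication advice above (the pool and dataset
names are made up; @option{dedup} and @option{recordsize} are standard
ZFS dataset properties):

@example
# dataset for reassembly: deduplication on, default 128 KiB records
$ zfs create -o dedup=on -o recordsize=128K tank/nncp-reass

# 256 KiB chunks span exactly two 128 KiB records, so identical
# chunks deduplicate cleanly
$ nncp-file -chunked 256 huge.iso remote:
@end example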