@node Chunked
@unnumbered Chunked files
There is the ability to transfer huge files by dividing them into
smaller chunks. Each chunk is treated like a separate file, producing a
separate outbound packet unrelated to the other ones.

Splitting is done with the @command{nncp-file -chunked} command and
reassembling with the @ref{nncp-reass} command.
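
For example, splitting a file into 256 KiB chunks while queueing it, and
later reassembling it on the destination node, could look like this (the
node and file names are illustrative, and @option{-chunked} is assumed
here to take the chunk size in KiBs):

@example
$ nncp-file -chunked 256 huge.iso neighbour:
$ nncp-reass huge.iso.nncp.meta
@end example
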
Chunked @file{FILE} produces @file{FILE.nncp.meta},
@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, @dots{} files. All
@file{.nncp.chunkXXX} can be concatenated together to produce original
@file{FILE}.
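
For instance, assuming three chunks, the original file could be
recreated by hand (note that chunk numbers are not zero-padded, so with
ten or more chunks a naive shell glob would sort @file{chunk10} before
@file{chunk2}):

@example
$ cat FILE.nncp.chunk0 FILE.nncp.chunk1 FILE.nncp.chunk2 > FILE
@end example
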
The @file{.nncp.meta} file is an XDR-encoded structure:

@multitable @columnfractions 0.2 0.3 0.5
@headitem @tab XDR type @tab Value
@item Magic number @tab
8-byte, fixed length opaque data @tab
    @verb{|N N C P M 0x00 0x00 0x02|}
@item File size @tab
unsigned hyper integer @tab
Whole reassembled file's size
@item Chunk size @tab
    unsigned hyper integer @tab
    Size of each chunk (except for the last one, that could be smaller)
@item Checksums @tab
variable length array of 32 byte fixed length opaque data @tab
    @ref{MTH} checksum of each chunk
@end multitable
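
For clarity, here is a minimal Go sketch of a decoder for that meta
structure. It is an illustration under stated assumptions, not NNCP's
actual implementation: plain XDR encoding, with unsigned hyper integers
as 8-byte big-endian values and a 4-byte big-endian element count
prefixing the variable-length checksum array.

@verbatim
package chunked

import (
	"encoding/binary"
	"errors"
	"io"
)

// ChunkedMeta mirrors the XDR-encoded .nncp.meta structure
// described in the table above (an assumption drawn from that
// table, not from NNCP's sources).
type ChunkedMeta struct {
	Magic     [8]byte    // "NNCPM" 0x00 0x00 0x02
	FileSize  uint64     // whole reassembled file's size
	ChunkSize uint64     // size of each chunk but the last
	Checksums [][32]byte // MTH checksum of each chunk
}

var expectedMagic = [8]byte{'N', 'N', 'C', 'P', 'M', 0x00, 0x00, 0x02}

// ReadMeta decodes a .nncp.meta stream. XDR encodes unsigned
// hyper integers as 8-byte big-endian values and prefixes
// variable-length arrays with a 4-byte big-endian count.
func ReadMeta(r io.Reader) (*ChunkedMeta, error) {
	var m ChunkedMeta
	if _, err := io.ReadFull(r, m.Magic[:]); err != nil {
		return nil, err
	}
	if m.Magic != expectedMagic {
		return nil, errors.New("chunked: bad magic")
	}
	if err := binary.Read(r, binary.BigEndian, &m.FileSize); err != nil {
		return nil, err
	}
	if err := binary.Read(r, binary.BigEndian, &m.ChunkSize); err != nil {
		return nil, err
	}
	var count uint32
	if err := binary.Read(r, binary.BigEndian, &count); err != nil {
		return nil, err
	}
	m.Checksums = make([][32]byte, count)
	for i := range m.Checksums {
		if _, err := io.ReadFull(r, m.Checksums[i][:]); err != nil {
			return nil, err
		}
	}
	return &m, nil
}
@end verbatim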

@anchor{ChunkedZFS}
It is strongly advisable to reassemble incoming chunked files on a
@url{https://en.wikipedia.org/wiki/ZFS, ZFS} dataset with the
deduplication feature enabled. It can be more CPU- and memory-hungry,
but it will save your disk's I/O and free space from (temporary)
pollution. Pay attention that your chunks must be either equal to, or
divisible by, the dataset's @option{recordsize} value for deduplication
to work. ZFS's default @option{recordsize} is 128 KiB, so it is
advisable to chunk your files into 128, 256, 384, 512, etc KiB blocks.
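
A minimal sketch of preparing such a dataset (the pool and dataset
names are illustrative):

@example
# zfs create -o dedup=on -o recordsize=128K tank/nncp-reass
# zfs get dedup,recordsize tank/nncp-reass
@end example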