@node Chunked
@cindex chunked
@unnumbered Chunked files
Huge files can be transferred by dividing them into smaller chunks.
Each chunk is treated like a separate file, producing a separate
outbound packet unrelated to the others. That way you can transfer
files that do not fit on your intermediate
storage devices, and/or at different times, reassembling the whole packet
on the destination node.
Splitting is done with the @command{@ref{nncp-file} -chunked} command and
reassembling with the @command{@ref{nncp-reass}} command.
@vindex .nncp.meta
@vindex .nncp.chunk
Chunked @file{FILE} produces @file{FILE.nncp.meta},
@file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, @dots{} files. All
@file{.nncp.chunkXXX} files can be concatenated together to produce the
original @file{FILE}.
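For illustration, chunks can also be reassembled by hand with a short
script like the one below. The @code{reassemble} helper here is
hypothetical and, unlike @command{nncp-reass}, performs no checksum
verification:

```python
import itertools
from pathlib import Path

def reassemble(base: str, out: str) -> None:
    """Concatenate base.nncp.chunk0, base.nncp.chunk1, ... into out.

    Chunk numbers are plain decimal, not zero-padded, so a lexical
    glob would misorder chunk10 before chunk2; iterate numerically.
    """
    with open(out, "wb") as dst:
        for i in itertools.count():
            chunk = Path(f"{base}.nncp.chunk{i}")
            if not chunk.exists():
                break
            dst.write(chunk.read_bytes())
```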
@multitable @columnfractions 0.2 0.3 0.5
@headitem @tab XDR type @tab Value
@item Magic number @tab
8-byte, fixed length opaque data @tab
 @verb{|N N C P M 0x00 0x00 0x02|}
@item File size @tab
unsigned hyper integer @tab
Whole reassembled file's size
@item Chunk size @tab
unsigned hyper integer @tab
Size of each chunk (except for the last one, which could be smaller)
@item Checksums @tab
variable length array of 32 byte fixed length opaque data @tab
 @ref{MTH} checksum of each chunk
@end multitable
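Assuming standard XDR encoding (big-endian fixed-width integers, and a
4-byte element count preceding a variable-length array, per RFC 4506),
the metadata layout above can be decoded with a sketch like the
following; @code{parse_meta} is an illustrative helper, not NNCP's
actual implementation:

```python
import struct

MAGIC = b"NNCPM\x00\x00\x02"  # "N N C P M 0x00 0x00 0x02"

def parse_meta(data: bytes):
    """Decode a .nncp.meta blob into (file size, chunk size, checksums)."""
    if data[:8] != MAGIC:
        raise ValueError("bad magic number")
    # Two XDR unsigned hyper integers: 8-byte big-endian each.
    file_size, chunk_size = struct.unpack_from(">QQ", data, 8)
    # XDR variable-length array: 4-byte big-endian count, then
    # that many fixed-length 32-byte opaque checksums.
    (n,) = struct.unpack_from(">I", data, 24)
    checksums = [data[28 + 32 * i : 28 + 32 * (i + 1)] for i in range(n)]
    return file_size, chunk_size, checksums
```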
@cindex ZFS recordsize
@anchor{ChunkedZFS}
It is strongly advisable to reassemble incoming chunked files on
@url{https://en.wikipedia.org/wiki/ZFS, ZFS} dataset with deduplication