X-Git-Url: http://www.git.cypherpunks.ru/?p=nncp.git;a=blobdiff_plain;f=doc%2Fchunked.texi;h=b73eb9223937f9b5f6caac7e85ba3cfecb074f72;hp=67f425919d69f4fe24f6d24e21dccbac2dd986bf;hb=203dfe36da7adf2b3089e4fa4017a67409cbad70;hpb=152ecfdf11766df967a504efaef7b5ba77e6f5d7

diff --git a/doc/chunked.texi b/doc/chunked.texi
index 67f4259..b73eb92 100644
--- a/doc/chunked.texi
+++ b/doc/chunked.texi
@@ -1,4 +1,5 @@
 @node Chunked
+@cindex chunked
 @unnumbered Chunked files
 
 There is ability to transfer huge files with dividing them into smaller
@@ -13,6 +14,8 @@ on the destination node.
 Splitting is done with @ref{nncp-file, nncp-file -chunked} command and
 reassembling with @ref{nncp-reass} command.
 
+@vindex .nncp.meta
+@vindex .nncp.chunk
 Chunked @file{FILE} produces @file{FILE.nncp.meta},
 @file{FILE.nncp.chunk0}, @file{FILE.nncp.chunk1}, @dots{} files. All
 @file{.nncp.chunkXXX} can be concatenated together to produce original
@@ -44,6 +47,7 @@ size and their hash checksums.
 This is @ref{MTH} checksum of each chunk
 @end multitable
 
+@cindex ZFS recordsize
 @anchor{ChunkedZFS}
 It is strongly advisable to reassemble incoming chunked files on
 @url{https://en.wikipedia.org/wiki/ZFS, ZFS} dataset with deduplication
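The patched documentation states that all @file{.nncp.chunkXXX} pieces can simply be concatenated to reproduce the original @file{FILE}. Below is a minimal sketch of that property using coreutils split(1) as a stand-in for `nncp-file -chunked`; only the `FILE.nncp.chunkN` naming is taken from the documentation, while the file size, chunk size, and split invocation are illustrative:

```shell
# Demo of the "chunks concatenate back to the original" property.
# split(1) stands in for nncp-file -chunked here; only the
# FILE.nncp.chunkN naming mirrors the documented layout.
set -e

head -c 100000 /dev/urandom > FILE
# -d: numeric suffixes, -a 1: one-digit suffix => chunk0..chunk3
split -b 32768 -d -a 1 FILE FILE.nncp.chunk

# Reassemble in numeric chunk order (a plain glob would sort
# chunk10 between chunk1 and chunk2 if there were ten or more).
: > FILE.out
i=0
while [ -f "FILE.nncp.chunk$i" ]; do
    cat "FILE.nncp.chunk$i" >> FILE.out
    i=$((i + 1))
done

# Verify the reassembled file is byte-identical to the original.
cmp FILE FILE.out && echo OK
```

In real use `nncp-reass` does this reassembly (and checksum verification against @file{FILE.nncp.meta}); the loop above only illustrates why plain concatenation in chunk order is sufficient.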