nncp-exec
nncp-file
nncp-freq
+nncp-hash
nncp-log
nncp-pkt
nncp-reass
@node Chunked
@unnumbered Chunked files
-There is ability to transfer huge files with splitting them into smaller
+There is an ability to transfer huge files by dividing them into smaller
chunks. Each chunk is treated as a separate file, producing a separate
outbound packet unrelated to the other ones.
@headitem @tab XDR type @tab Value
@item Magic number @tab
8-byte, fixed length opaque data @tab
- @verb{|N N C P M 0x00 0x00 0x01|}
+ @verb{|N N C P M 0x00 0x00 0x02|}
@item File size @tab
unsigned hyper integer @tab
Whole reassembled file's size
Size of each chunk (except for the last one, which could be smaller)
@item Checksums @tab
variable length array of 32 byte fixed length opaque data @tab
- BLAKE2b-256 checksum of each chunk
+ @ref{MTH} checksum of each chunk
@end multitable
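The chunk arithmetic implied by the table above can be sketched in Go (a hypothetical helper for illustration, not part of NNCP itself):

```go
package main

import "fmt"

// chunkCount returns how many chunks a file of fileSize bytes
// produces when divided into chunkSize-byte chunks: every chunk
// is full-sized except possibly the last, smaller one.
func chunkCount(fileSize, chunkSize uint64) uint64 {
	return (fileSize + chunkSize - 1) / chunkSize
}

func main() {
	// A 300 KiB file with 128 KiB chunks: two full chunks
	// and one 44 KiB trailing chunk.
	fmt.Println(chunkCount(300*1024, 128*1024)) // prints 3
}
```

The `Checksums` array in the @file{.meta} packet then holds exactly that many entries, one per chunk.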
@anchor{ChunkedZFS}
Perform @ref{Spool, spool} directory integrity check. Read all files
that have Base32-encoded filenames and compare them with the recalculated
-BLAKE2b hash output of their contents.
+@ref{MTH} hash output of their contents.
The most useful mode of operation is with the @option{-nock} option,
which checks integrity of @file{.nock} files, renaming them to
ordinary ones. Temporary files are located in the directory set by
the @env{TMPDIR} environment
variable. Encryption is performed in AEAD mode with
@url{https://cr.yp.to/chacha.html, ChaCha20}-@url{https://en.wikipedia.org/wiki/Poly1305, Poly1305}
-algorithms. Data is splitted on 128 KiB blocks. Each block is encrypted
+algorithms. Data is divided into 128 KiB blocks. Each block is encrypted
with an increasing nonce counter. The file is deleted immediately after
creation, so even if the program crashes, disk space will be reclaimed,
with no need to clean it up later.
file request, then it will send a simple letter after successful file
queuing.
+@node nncp-hash
+@section nncp-hash
+
+@example
+$ nncp-hash [-file ...] [-seek X] [-debug] [-progress]
+@end example
+
+Calculate @ref{MTH} hash of either stdin, or @option{-file} if
+specified.
+
+With the @option{-seek} option you can optionally force seeking the
+file first, reading only the remaining part, and then prepending the
+unread portion of data. It is intended only for testing and debugging
+of MTH hasher capabilities.
+
+The @option{-debug} option shows all intermediate MTH hashes,
+and @option{-progress} will show a progress bar.
+
@node nncp-log
@section nncp-log
@item @code{github.com/gosuri/uilive} @tab MIT
@item @code{github.com/hjson/hjson-go} @tab MIT
@item @code{github.com/klauspost/compress} @tab BSD 3-Clause
+@item @code{github.com/klauspost/cpuid} @tab BSD 3-Clause
@item @code{go.cypherpunks.ru/balloon} @tab GNU LGPLv3
+@item @code{go.cypherpunks.ru/recfile} @tab GNU GPLv3
@item @code{golang.org/x/crypto} @tab BSD 3-Clause
@item @code{golang.org/x/net} @tab BSD 3-Clause
@item @code{golang.org/x/sys} @tab BSD 3-Clause
@item @code{golang.org/x/term} @tab BSD 3-Clause
+@item @code{lukechampine.com/blake3} @tab MIT
@end multitable
@multitable {XXXXX} {XXXX-XX-XX} {XXXX KiB} {link sign} {xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
@item read 32 bytes of blob AEAD encryption key
@item encrypt and authenticate blob using
@url{https://cr.yp.to/chacha.html, ChaCha20}-@url{https://en.wikipedia.org/wiki/Poly1305, Poly1305}.
- Blob is splitted on 128 KiB blocks. Each block is encrypted with
+ Blob is divided into 128 KiB blocks. Each block is encrypted with
increasing nonce counter. Eblob packet itself, with empty blob
field, is fed as an additional authenticated data
@end enumerate
* Spool directory: Spool.
* Log format: Log.
* Packet format: Packet.
+* Merkle Tree Hashing: MTH.
* Sync protocol: Sync.
* MultiCast Discovery: MCD.
* EBlob format: EBlob.
@include spool.texi
@include log.texi
@include pkt.texi
+@include mth.texi
@include sp.texi
@include mcd.texi
@include eblob.texi
--- /dev/null
+@node MTH
+@unnumbered Merkle Tree Hashing
+
+NNCP uses @url{https://github.com/BLAKE3-team/BLAKE3, BLAKE3} hash
+function in @url{https://en.wikipedia.org/wiki/Merkle_Tree, Merkle Tree}
+mode of operation for checksumming @ref{Encrypted, encrypted packets}
+and @ref{Chunked, chunked} files.
+
+Previously, ordinary BLAKE2b-256 was used, but it prevented partial
+calculation over parts of a file, so you had to fully read the whole
+file again after resuming its download.
+
+MTH divides data into 128 KiB blocks, hashes each of them independently
+and then calculates the Merkle tree root:
+
+@verbatim
+ node3
+ / \
+ / \
+ node2 leaf4
+ / \ \
+ / \ \
+ / \ \
+ / \ \
+ / \ \
+ node0 node1 leaf4
+ / \ / \ \
+ / \ / \ \
+leaf0 leaf1 leaf2 leaf3 leaf4
+ | | | | |
+block block block block block
+@end verbatim
+
+A leaf's value is the keyed BLAKE3-256 hash of the underlying block
+(128 KiB, except possibly for the last one). A node's value is the keyed
+BLAKE3-256 hash of its two children. The keys are
+@verb{|BLAKE3-256("NNCP MTH LEAF")|} and
+@verb{|BLAKE3-256("NNCP MTH NODE")|}.
+Keyed operation allows working with aligned data (128 KiB or 64 B
+boundaries), unlike the popular way of prepending @verb{|0x00|} or
+@verb{|0x01|} to the hashed data, and is more efficient with regard to
+BLAKE3's internal Merkle tree.
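The folding procedure shown in the diagram above can be illustrated with the following sketch. It uses HMAC-SHA-256 as a stand-in for keyed BLAKE3-256 (the real keys being derived from "NNCP MTH LEAF" and "NNCP MTH NODE"), so it demonstrates only the tree shape, not NNCP's actual hash values:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// hashNode stands in for keyed BLAKE3-256 with the "NNCP MTH NODE"
// key; HMAC-SHA-256 plays the role of the keyed hash here.
func hashNode(key, left, right []byte) []byte {
	mac := hmac.New(sha256.New, key)
	mac.Write(left)
	mac.Write(right)
	return mac.Sum(nil)
}

// merkleRoot folds leaf hashes pairwise up to the tree root.
// An unpaired hash at the end of a level is promoted to the next
// level unchanged, as leaf4 is in the diagram above.
func merkleRoot(key []byte, level [][]byte) []byte {
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i+1 < len(level); i += 2 {
			next = append(next, hashNode(key, level[i], level[i+1]))
		}
		if len(level)%2 == 1 {
			next = append(next, level[len(level)-1])
		}
		level = next
	}
	return level[0]
}

func main() {
	nodeKey := []byte("stand-in node key")
	leaves := [][]byte{{0}, {1}, {2}, {3}, {4}} // five leaf hashes
	fmt.Printf("%x\n", merkleRoot(nodeKey, leaves)[:8])
}
```

With five leaves the levels shrink 5 → 3 → 2 → 1, matching the node0/node1/leaf4, node2/leaf4, node3 shape of the diagram.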
@node Новости
@section Новости
+@node Релиз 7.0.0
+@subsection Релиз 7.0.0
+@itemize
+
+@item
+Хэширование с BLAKE3 на базе деревьев Меркле (Merkle Tree Hashing, MTH)
+используется вместо BLAKE2b. Из-за этого, обратно @strong{несовместимое}
+изменение формата шифрованных файлов (всего что находится в spool
+области) и формата @file{.meta} файла при chunked передаче.
+
+Текущая реализация далека от оптимальной: в ней нет распараллеливания
+вычислений и есть повышенное потребление памяти: около 512 KiB на
+каждый 1 GiB данных файла. Будущая оптимизация производительности и
+потребления памяти не должна привести к изменению формата пакетов. Но
+это всё равно в несколько раз быстрее BLAKE2b.
+
+@item
+Из-за использования MTH, докачиваемые в online режиме файлы потребуют
+чтения с диска только предшествующей части, а не полностью всего файла,
+как было прежде.
+
+@item
+Добавлена @command{nncp-hash} утилита для вычисления MTH хэша файла.
+
+@item
+MultiCast Discovery использует
+@verb{|ff02::4e4e:4350|} адрес вместо @verb{|ff02::1|}.
+
+@item
+Обновлены зависимые библиотеки.
+
+@end itemize
+
@node Релиз 6.6.0
@subsection Релиз 6.6.0
@itemize
See also this page @ref{Новости, in Russian}.
+@node Release 7.0.0
+@section Release 7.0.0
+@itemize
+
+@item
+Merkle Tree-based Hashing with BLAKE3 (MTH) is used instead of BLAKE2b.
+Because of that, there are backward @strong{incompatible} changes to the
+format of encrypted files (everything lying in the spool directory) and
+of @file{.meta} files of chunked transfer.
+
+The current implementation is far from optimal: it lacks parallelized
+calculations and has higher memory consumption: nearly 512 KiB for each
+1 GiB of the file's data. Future performance and memory optimizations
+should not lead to a packet format change. But it is still several times
+faster than BLAKE2b.
+
+@item
+Because of MTH, resumed online downloads require reading only the
+preceding part of the file, not the whole one as before.
+
+@item
+The @command{nncp-hash} utility appeared for calculating a file's MTH hash.
+
+@item
+MultiCast Discovery uses
+@verb{|ff02::4e4e:4350|} address instead of @verb{|ff02::1|}.
+
+@item
+Updated dependencies.
+
+@end itemize
+
@node Release 6.6.0
@section Release 6.6.0
@itemize
@headitem @tab XDR type @tab Value
@item Magic number @tab
8-byte, fixed length opaque data @tab
- @verb{|N N C P E 0x00 0x00 0x04|}
+ @verb{|N N C P E 0x00 0x00 0x05|}
@item Niceness @tab
unsigned integer @tab
1-255, packet @ref{Niceness, niceness} level
All following encryption is done in AEAD mode using
@url{https://cr.yp.to/chacha.html, ChaCha20}-@url{https://en.wikipedia.org/wiki/Poly1305, Poly1305}
-algorithms. Data is splitted on 128 KiB blocks. Each block is encrypted with
+algorithms. Data is divided into 128 KiB blocks. Each block is encrypted with
increasing nonce counter.
Authenticated and encrypted size come after the header:
@item takes remote node's exchange public key and performs
Diffie-Hellman computation on this remote static public key and
private ephemeral one
-@item derive the keys:
- @enumerate
- @item initialize @url{https://blake2.net/, BLAKE2Xb} XOF with
- derived ephemeral key and 96-byte output length
- @item feed @verb{|N N C P E 0x00 0x00 0x04|} magic number to XOF
- @item read 32-bytes of "size" AEAD encryption key
- @item read 32-bytes of payload AEAD encryption key
- @item optionally read 32-bytes pad generation key
- @end enumerate
+@item derives a 32-byte AEAD encryption key with the BLAKE3 key
+    derivation function. The source key is the derived ephemeral key and
+    the context is the @verb{|N N C P E 0x00 0x00 0x05|} magic number
@item encrypts size, appends its authenticated ciphertext to the header
-@item encrypts payload, appends its authenticated ciphertext
+@item encrypts each payload block, appending its authenticated ciphertext
@item possibly appends any kind of "junk" noise data to hide real
- payload's size from the adversary (generated using XOF with
- unlimited output length)
+ payload's size from the adversary (generated using BLAKE3 XOF, with
+ the key derived from the ephemeral one and context string of
+ @verb{|N N C P E 0x00 0x00 0x05 <SP> P A D|})
@end enumerate
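The derive-and-pad steps above can be sketched as follows. HMAC-SHA-256 here is only a stand-in for BLAKE3's derive-key mode and XOF, and the context strings are spelled out for readability (the real context is the raw 8-byte magic, plus " PAD" for the padding key):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// deriveKey stands in for BLAKE3's derive-key mode: the context
// string separates the payload key from the padding key even
// though both come from the same ephemeral source key.
func deriveKey(ephemeral []byte, context string) []byte {
	mac := hmac.New(sha256.New, ephemeral)
	mac.Write([]byte(context))
	return mac.Sum(nil)
}

// padBytes stands in for the BLAKE3 XOF: an unlimited keyed
// stream, generated here with HMAC over a block counter.
func padBytes(key []byte, n int) []byte {
	out := make([]byte, 0, n)
	var ctr [8]byte
	for i := uint64(0); len(out) < n; i++ {
		binary.BigEndian.PutUint64(ctr[:], i)
		mac := hmac.New(sha256.New, key)
		mac.Write(ctr[:])
		out = append(out, mac.Sum(nil)...)
	}
	return out[:n]
}

func main() {
	ephemeral := make([]byte, 32) // placeholder for the DH-derived ephemeral key
	payloadKey := deriveKey(ephemeral, "N N C P E 0x00 0x00 0x05")
	padKey := deriveKey(ephemeral, "N N C P E 0x00 0x00 0x05 <SP> P A D")
	fmt.Println(len(payloadKey), len(padKey), len(padBytes(padKey, 100))) // prints "32 32 100"
}
```

The point of the distinct contexts is domain separation: an adversary who learns the padding stream gains nothing about the payload encryption key.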
@item LYT64MWSNDK34CVYOO7TA6ZCJ3NWI2OUDBBMX2A4QWF34FIRY4DQ
is an example @ref{Encrypted, encrypted packet}. Its filename is Base32
-encoded BLAKE2b hash of the whole contents. It can be integrity checked
+encoded @ref{MTH} hash of the whole contents. It can be integrity checked
anytime.
@item LYT64MWSNDK34CVYOO7TA6ZCJ3NWI2OUDBBMX2A4QWF34FIRY4DQ.part
github.com/flynn/noise/vector* \
github.com/gorhill/cronexpr/APLv2 \
github.com/hjson/hjson-go/build_release.sh \
- github.com/klauspost/compress/snappy \
+ github.com/golang/snappy \
github.com/klauspost/compress/zstd/snappy.go \
golang.org/x/sys/plan9 \
golang.org/x/sys/windows
PORTNAME= nncp
-DISTVERSION= 6.5.0
+DISTVERSION= 7.0.0
CATEGORIES= net
MASTER_SITES= http://www.nncpgo.org/download/
bin/nncp-exec
bin/nncp-file
bin/nncp-freq
+bin/nncp-hash
bin/nncp-log
bin/nncp-pkt
bin/nncp-reass
onlineDeadline, maxOnlineTime time.Duration,
listOnly bool,
noCK bool,
- onlyPkts map[[32]byte]bool,
+ onlyPkts map[[MTHSize]byte]bool,
) (isGood bool) {
for _, addr := range addrs {
les := LEs{{"Node", node.Id}, {"Addr", addr}}
"errors"
"fmt"
"io"
- "log"
"os"
"path/filepath"
-
- "golang.org/x/crypto/blake2b"
)
const NoCKSuffix = ".nock"
-func Check(src io.Reader, checksum []byte, les LEs, showPrgrs bool) (bool, error) {
- hsh, err := blake2b.New256(nil)
- if err != nil {
- log.Fatalln(err)
- }
- if _, err = CopyProgressed(hsh, bufio.NewReader(src), "check", les, showPrgrs); err != nil {
+func Check(
+ src io.Reader,
+ size int64,
+ checksum []byte,
+ les LEs,
+ showPrgrs bool,
+) (bool, error) {
+ hsh := MTHNew(size, 0)
+	if _, err := CopyProgressed(hsh, bufio.NewReaderSize(src, MTHBlockSize), "check", les, showPrgrs); err != nil {
return false, err
}
	return bytes.Equal(hsh.Sum(nil), checksum), nil
ctx.LogE("checking", les, err, logMsg)
return true
}
- gut, err := Check(fd, job.HshValue[:], les, ctx.ShowPrgrs)
+ gut, err := Check(fd, job.Size, job.HshValue[:], les, ctx.ShowPrgrs)
fd.Close() // #nosec G104
if err != nil {
ctx.LogE("checking", les, err, logMsg)
return !(ctx.checkXxIsBad(nodeId, TRx) || ctx.checkXxIsBad(nodeId, TTx))
}
-func (ctx *Ctx) CheckNoCK(nodeId *NodeId, hshValue *[32]byte) (int64, error) {
+func (ctx *Ctx) CheckNoCK(nodeId *NodeId, hshValue *[MTHSize]byte, mth *MTH) (int64, error) {
dirToSync := filepath.Join(ctx.Spool, nodeId.String(), string(TRx))
pktName := Base32Codec.EncodeToString(hshValue[:])
pktPath := filepath.Join(dirToSync, pktName)
if err != nil {
return 0, err
}
- defer fd.Close()
size := fi.Size()
les := LEs{
{"XX", string(TRx)},
{"Pkt", pktName},
{"FullSize", size},
}
- gut, err := Check(fd, hshValue[:], les, ctx.ShowPrgrs)
+ var gut bool
+ if mth == nil {
+ gut, err = Check(fd, size, hshValue[:], les, ctx.ShowPrgrs)
+ } else {
+ mth.PktName = pktName
+		if _, err = mth.PrependFrom(bufio.NewReaderSize(fd, MTHBlockSize)); err != nil {
+ return 0, err
+ }
+		gut = bytes.Equal(mth.Sum(nil), hshValue[:])
+ }
	if err != nil {
		return 0, err
	}
	if !gut {
		return 0, errors.New("checksum mismatch")
	}
package nncp
var (
- MagicNNCPMv1 [8]byte = [8]byte{'N', 'N', 'C', 'P', 'M', 0, 0, 1}
+ MagicNNCPMv2 [8]byte = [8]byte{'N', 'N', 'C', 'P', 'M', 0, 0, 2}
ChunkedSuffixMeta = ".nncp.meta"
ChunkedSuffixPart = ".nncp.chunk"
Magic [8]byte
FileSize uint64
ChunkSize uint64
- Checksums [][32]byte
+ Checksums [][MTHSize]byte
}
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6"
- "golang.org/x/crypto/blake2b"
+ "go.cypherpunks.ru/nncp/v7"
)
const (
)
continue
}
- if pktEnc.Magic != nncp.MagicNNCPEv4 {
+ if pktEnc.Magic != nncp.MagicNNCPEv5 {
ctx.LogD(
"bundle-rx",
append(les, nncp.LE{K: "Err", V: "Bad packet magic number"}),
})
continue
}
- hsh, err := blake2b.New256(nil)
- if err != nil {
- log.Fatalln("Error during hasher creation:", err)
- }
+ hsh := nncp.MTHNew(entry.Size, 0)
if _, err = hsh.Write(pktEncBuf); err != nil {
log.Fatalln("Error during writing:", err)
}
}
if *doCheck {
if *dryRun {
- hsh, err := blake2b.New256(nil)
- if err != nil {
- log.Fatalln("Error during hasher creation:", err)
- }
+ hsh := nncp.MTHNew(entry.Size, 0)
if _, err = hsh.Write(pktEncBuf); err != nil {
log.Fatalln("Error during writing:", err)
}
"strings"
"time"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
close(autoTossFinish)
badCode = (<-autoTossBadCode) || badCode
}
+ nncp.SPCheckerWg.Wait()
if badCode {
os.Exit(1)
}
"sync"
"time"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
}
}
wg.Wait()
+ nncp.SPCheckerWg.Wait()
}
"os"
xdr "github.com/davecgh/go-xdr/xdr2"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
"golang.org/x/crypto/blake2b"
"golang.org/x/term"
)
"os"
"github.com/hjson/hjson-go"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"log"
"os"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"os"
"path/filepath"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
}
if *nock {
for job := range ctx.JobsNoCK(node.Id) {
- if _, err = ctx.CheckNoCK(node.Id, job.HshValue); err != nil {
+ if _, err = ctx.CheckNoCK(node.Id, job.HshValue, nil); err != nil {
pktName := nncp.Base32Codec.EncodeToString(job.HshValue[:])
log.Println(filepath.Join(
ctx.Spool,
"time"
"github.com/gorhill/cronexpr"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"time"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
"golang.org/x/net/netutil"
)
func performSP(
ctx *nncp.Ctx,
conn nncp.ConnDeadlined,
+ addr string,
nice uint8,
noCK bool,
nodeIdC chan *nncp.NodeId,
ctx.LogI(
"call-started",
nncp.LEs{{K: "Node", V: state.Node.Id}},
- func(les nncp.LEs) string { return "Connection with " + state.Node.Name },
+ func(les nncp.LEs) string {
+ return fmt.Sprintf("Connection with %s (%s)", state.Node.Name, addr)
+ },
)
nodeIdC <- state.Node.Id
state.Wait()
os.Stderr.Close() // #nosec G104
conn := &InetdConn{os.Stdin, os.Stdout}
nodeIdC := make(chan *nncp.NodeId)
- go performSP(ctx, conn, nice, *noCK, nodeIdC)
+ go performSP(ctx, conn, "PIPE", nice, *noCK, nodeIdC)
nodeId := <-nodeIdC
var autoTossFinish chan struct{}
var autoTossBadCode chan bool
)
go func(conn net.Conn) {
nodeIdC := make(chan *nncp.NodeId)
- go performSP(ctx, conn, nice, *noCK, nodeIdC)
+ go performSP(ctx, conn, conn.RemoteAddr().String(), nice, *noCK, nodeIdC)
nodeId := <-nodeIdC
var autoTossFinish chan struct{}
var autoTossBadCode chan bool
"log"
"os"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"os"
"strings"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"path/filepath"
"strings"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
--- /dev/null
+/*
+NNCP -- Node to Node copy, utilities for store-and-forward data exchange
+Copyright (C) 2016-2021 Sergey Matveev <stargrave@stargrave.org>
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation, version 3 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program. If not, see <http://www.gnu.org/licenses/>.
+*/
+
+// Calculate MTH hash of the file
+package main
+
+import (
+ "bufio"
+ "encoding/hex"
+ "flag"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "sync"
+
+ "go.cypherpunks.ru/nncp/v7"
+)
+
+func usage() {
+ fmt.Fprintf(os.Stderr, nncp.UsageHeader())
+ fmt.Fprintf(os.Stderr, "nncp-hash -- calculate MTH hash of the file\n\n")
+ fmt.Fprintf(os.Stderr, "Usage: %s [-file ...] [-seek X] [-debug] [-progress] [options]\nOptions:\n", os.Args[0])
+ flag.PrintDefaults()
+}
+
+func main() {
+ var (
+ fn = flag.String("file", "", "Read the file instead of stdin")
+ seek = flag.Uint64("seek", 0, "Seek the file, hash, rewind, hash remaining")
+ showPrgrs = flag.Bool("progress", false, "Progress showing")
+ debug = flag.Bool("debug", false, "Print MTH steps calculations")
+ version = flag.Bool("version", false, "Print version information")
+ warranty = flag.Bool("warranty", false, "Print warranty information")
+ )
+ log.SetFlags(log.Lshortfile)
+ flag.Usage = usage
+ flag.Parse()
+ if *warranty {
+ fmt.Println(nncp.Warranty)
+ return
+ }
+ if *version {
+ fmt.Println(nncp.VersionGet())
+ return
+ }
+
+ fd := os.Stdin
+ var err error
+ var size int64
+ if *fn == "" {
+ *showPrgrs = false
+ } else {
+ fd, err = os.Open(*fn)
+ if err != nil {
+ log.Fatalln(err)
+ }
+ fi, err := fd.Stat()
+ if err != nil {
+ log.Fatalln(err)
+ }
+ size = fi.Size()
+ }
+ mth := nncp.MTHNew(size, int64(*seek))
+ var debugger sync.WaitGroup
+ if *debug {
+ fmt.Println("Leaf BLAKE3 key:", hex.EncodeToString(nncp.MTHLeafKey[:]))
+ fmt.Println("Node BLAKE3 key:", hex.EncodeToString(nncp.MTHNodeKey[:]))
+ mth.Events = make(chan nncp.MTHEvent)
+ debugger.Add(1)
+ go func() {
+ for e := range mth.Events {
+ var t string
+ switch e.Type {
+ case nncp.MTHEventAppend:
+ t = "Add"
+ case nncp.MTHEventPrepend:
+ t = "Pre"
+ case nncp.MTHEventFold:
+ t = "Fold"
+ }
+ fmt.Printf(
+ "%s\t%03d\t%06d\t%s\n",
+ t, e.Level, e.Ctr, hex.EncodeToString(e.Hsh),
+ )
+ }
+ debugger.Done()
+ }()
+ }
+ if *seek != 0 {
+ if *fn == "" {
+ log.Fatalln("-file is required with -seek")
+ }
+ if _, err = fd.Seek(int64(*seek), io.SeekStart); err != nil {
+ log.Fatalln(err)
+ }
+ }
+ if _, err = nncp.CopyProgressed(
+ mth, bufio.NewReaderSize(fd, nncp.MTHBlockSize),
+ "hash", nncp.LEs{{K: "Pkt", V: *fn}, {K: "FullSize", V: size - int64(*seek)}},
+ *showPrgrs,
+ ); err != nil {
+ log.Fatalln(err)
+ }
+ if *seek != 0 {
+ if _, err = fd.Seek(0, io.SeekStart); err != nil {
+ log.Fatalln(err)
+ }
+ if *showPrgrs {
+ mth.PktName = *fn
+ }
+ if _, err = mth.PrependFrom(bufio.NewReaderSize(fd, nncp.MTHBlockSize)); err != nil {
+ log.Fatalln(err)
+ }
+ }
+ sum := mth.Sum(nil)
+ debugger.Wait()
+ fmt.Println(hex.EncodeToString(sum))
+}
"log"
"os"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
"go.cypherpunks.ru/recfile"
)
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/klauspost/compress/zstd"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
}
var pktEnc nncp.PktEnc
_, err = xdr.Unmarshal(bytes.NewReader(beginning), &pktEnc)
- if err == nil && pktEnc.Magic == nncp.MagicNNCPEv4 {
+ if err == nil && pktEnc.Magic == nncp.MagicNNCPEv5 {
if *dump {
ctx, err := nncp.CtxFromCmdline(*cfgPath, "", "", false, false, false, false)
if err != nil {
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6"
- "golang.org/x/crypto/blake2b"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
return false
}
fd.Close() // #nosec G104
- if metaPkt.Magic != nncp.MagicNNCPMv1 {
+ if metaPkt.Magic != nncp.MagicNNCPMv2 {
ctx.LogE("reass", les, nncp.BadMagic, logMsg)
return false
}
if err != nil {
log.Fatalln("Can not stat file:", err)
}
- hsh, err = blake2b.New256(nil)
- if err != nil {
- log.Fatalln(err)
- }
+ hsh = nncp.MTHNew(fi.Size(), 0)
if _, err = nncp.CopyProgressed(
hsh, bufio.NewReader(fd), "check",
nncp.LEs{{K: "Pkt", V: chunkPath}, {K: "FullSize", V: fi.Size()}},
"strings"
"time"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"sort"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"os"
"time"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"path/filepath"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
continue
}
pktEnc, pktEncRaw, err := ctx.HdrRead(fd)
- if err != nil || pktEnc.Magic != nncp.MagicNNCPEv4 {
+ if err != nil || pktEnc.Magic != nncp.MagicNNCPEv5 {
ctx.LogD("xfer-rx-not-packet", les, func(les nncp.LEs) string {
return logMsg(les) + ": is not a packet"
})
-module go.cypherpunks.ru/nncp/v6
+module go.cypherpunks.ru/nncp/v7
require (
github.com/davecgh/go-xdr v0.0.0-20161123171359-e6a2ba005892
github.com/dustin/go-humanize v1.0.0
- github.com/flynn/noise v0.0.0-20180327030543-2492fe189ae6
+ github.com/flynn/noise v1.0.0
github.com/gorhill/cronexpr v0.0.0-20180427100037-88b0669f7d75
github.com/hjson/hjson-go v3.1.0+incompatible
- github.com/klauspost/compress v1.11.7
- github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
+ github.com/klauspost/compress v1.13.1
go.cypherpunks.ru/balloon v1.1.1
go.cypherpunks.ru/recfile v0.4.3
- golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83
- golang.org/x/net v0.0.0-20210220033124-5f55cee0dc0d
- golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43
- golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d
- gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b // indirect
+ golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e
+ golang.org/x/net v0.0.0-20210614182718-04defd469f4e
+ golang.org/x/sys v0.0.0-20210616094352-59db8d763f22
+ golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b
+ lukechampine.com/blake3 v1.1.5
)
go 1.12
github.com/davecgh/go-xdr v0.0.0-20161123171359-e6a2ba005892/go.mod h1:CTDl0pzVzE5DEzZhPfvhY/9sPFMQIxaJ9VAMs9AagrE=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
-github.com/flynn/noise v0.0.0-20180327030543-2492fe189ae6 h1:u/UEqS66A5ckRmS4yNpjmVH56sVtS/RfclBAYocb4as=
-github.com/flynn/noise v0.0.0-20180327030543-2492fe189ae6/go.mod h1:1i71OnUq3iUe1ma7Lr6yG6/rjvM3emb6yoL7xLFzcVQ=
+github.com/flynn/noise v1.0.0 h1:DlTHqmzmvcEiKj+4RYo/imoswx/4r6iBlCMfVtrMXpQ=
+github.com/flynn/noise v1.0.0/go.mod h1:xbMo+0i6+IGbYdJhF31t2eR1BIU0CYc12+BNAKwUTag=
+github.com/golang/snappy v0.0.3 h1:fHPg5GQYlCeLIPB9BZqMVR5nR9A+IM5zcgeTdjMYmLA=
+github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/gorhill/cronexpr v0.0.0-20180427100037-88b0669f7d75 h1:f0n1xnMSmBLzVfsMMvriDyA75NB/oBgILX2GcHXIQzY=
github.com/gorhill/cronexpr v0.0.0-20180427100037-88b0669f7d75/go.mod h1:g2644b03hfBX9Ov0ZBDgXXens4rxSxmqFBbhvKv2yVA=
github.com/hjson/hjson-go v3.1.0+incompatible h1:DY/9yE8ey8Zv22bY+mHV1uk2yRy0h8tKhZ77hEdi0Aw=
github.com/hjson/hjson-go v3.1.0+incompatible/go.mod h1:qsetwF8NlsTsOTwZTApNlTCerV+b2GjYRRcIk4JMFio=
-github.com/klauspost/compress v1.11.7 h1:0hzRabrMN4tSTvMfnL3SCv1ZGeAP23ynzodBgaHeMeg=
-github.com/klauspost/compress v1.11.7/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
+github.com/klauspost/compress v1.13.1 h1:wXr2uRxZTJXHLly6qhJabee5JqIhTRoLBhDOA74hDEQ=
+github.com/klauspost/compress v1.13.1/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
+github.com/klauspost/cpuid v1.3.1 h1:5JNjFYYQrZeKRJ0734q51WCEEn2huer72Dc7K+R/b6s=
+github.com/klauspost/cpuid v1.3.1/go.mod h1:bYW4mA6ZgKPob1/Dlai2LviZJO7KGI3uoWLd42rAQw4=
+github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
+github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
-github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
-github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
go.cypherpunks.ru/balloon v1.1.1 h1:ypHM1DRf/XuCrp9pDkTHg00CqZX/Np/APb//iHvDJTA=
go.cypherpunks.ru/balloon v1.1.1/go.mod h1:k4s4ozrIrhpBjj78Z7LX8ZHxMQ+XE7DZUWl8gP2ojCo=
go.cypherpunks.ru/recfile v0.4.3 h1:ephokihmV//p0ob6gx2FWXvm28/NBDbWTOJPUNahxO8=
go.cypherpunks.ru/recfile v0.4.3/go.mod h1:sR+KajB+vzofL3SFVFwKt3Fke0FaCcN1g3YPNAhU3qI=
-golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
-golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83 h1:/ZScEX8SfEmUGRHs0gxpqteO5nfNW6axyZbBdw9A12g=
-golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
-golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
-golang.org/x/net v0.0.0-20210220033124-5f55cee0dc0d h1:1aflnvSoWWLI2k/dMUAl5lvU1YO4Mb4hz0gh+1rjcxU=
-golang.org/x/net v0.0.0-20210220033124-5f55cee0dc0d/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
-golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
+golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e h1:gsTQYXdTw2Gq7RBsWvlQ91b+aEQ6bXFUngBGuR8sPpI=
+golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210614182718-04defd469f4e h1:XpT3nA5TvE525Ne3hInMh6+GETgn27Zfm9dxsThnX2Q=
+golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43 h1:SgQ6LNaYJU0JIuEHv9+s6EbhSCwYeAf5Yvj6lpYlqAE=
-golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
+golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1 h1:SrN+KX8Art/Sf4HNj6Zcz06G7VEz+7w9tdXTPOZ7+l4=
+golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210616094352-59db8d763f22 h1:RqytpXGR1iVNX7psjB3ff8y7sNFinVFvkx1c8SjBkio=
+golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d h1:SZxvLBoTP5yHO3Frd4z4vrF+DBX9vMVanchswa69toE=
-golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b h1:9zKuko04nR4gjZ4+DNjHqRlAJqbJETHwiNKDqTfOjfE=
+golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
-gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b h1:QRR6H1YWRnHb4Y/HeNFCTJLFVxaq6wH4YuVdsUOr75U=
-gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
+lukechampine.com/blake3 v1.1.5 h1:hsACfxWvLdGmjYbWGrumQIphOvO+ZruZehWtgd2fxoM=
+lukechampine.com/blake3 v1.1.5/go.mod h1:hE8RpzdO8ttZ7446CXEwDP1eu2V4z7stv0Urj1El20g=
PktEnc *PktEnc
Path string
Size int64
- HshValue *[32]byte
+ HshValue *[MTHSize]byte
}
func (ctx *Ctx) HdrRead(fd *os.File) (*PktEnc, []byte, error) {
}
pktEnc, pktEncRaw, err := ctx.HdrRead(fd)
fd.Close()
- if err != nil || pktEnc.Magic != MagicNNCPEv4 {
+ if err != nil || pktEnc.Magic != MagicNNCPEv5 {
continue
}
ctx.LogD("job", LEs{
PktEnc: pktEnc,
Path: pth,
Size: fi.Size(),
- HshValue: new([32]byte),
+ HshValue: new([MTHSize]byte),
}
copy(job.HshValue[:], hshValue)
jobs <- job
--- /dev/null
+/*
+NNCP -- Node to Node copy, utilities for store-and-forward data exchange
+Copyright (C) 2016-2021 Sergey Matveev <stargrave@stargrave.org>
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation, version 3 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program. If not, see <http://www.gnu.org/licenses/>.
+*/
+
+package nncp
+
+import (
+ "bytes"
+ "errors"
+ "io"
+
+ "lukechampine.com/blake3"
+)
+
+const (
+ MTHBlockSize = 128 * 1024
+ MTHSize = 32
+)
+
+var (
+ MTHLeafKey = blake3.Sum256([]byte("NNCP MTH LEAF"))
+ MTHNodeKey = blake3.Sum256([]byte("NNCP MTH NODE"))
+)
+
+type MTHEventType uint8
+
+const (
+	MTHEventAppend MTHEventType = iota
+	MTHEventPrepend
+	MTHEventFold
+)
+
+type MTHEvent struct {
+ Type MTHEventType
+ Level int
+ Ctr int
+ Hsh []byte
+}
+
+type MTH struct {
+ size int64
+ PrependSize int64
+ skip int64
+ skipped bool
+ hasher *blake3.Hasher
+ hashes [][MTHSize]byte
+ buf *bytes.Buffer
+ finished bool
+ Events chan MTHEvent
+ PktName string
+}
+
+func MTHNew(size, offset int64) *MTH {
+ mth := MTH{
+ hasher: blake3.New(MTHSize, MTHLeafKey[:]),
+ buf: bytes.NewBuffer(make([]byte, 0, 2*MTHBlockSize)),
+ }
+ if size == 0 {
+ return &mth
+ }
+ prepends := int(offset / MTHBlockSize)
+ skip := MTHBlockSize - (offset - int64(prepends)*MTHBlockSize)
+ if skip == MTHBlockSize {
+ skip = 0
+ } else if skip > 0 {
+ prepends++
+ }
+ prependSize := int64(prepends * MTHBlockSize)
+ if prependSize > size {
+ prependSize = size
+ }
+ if offset+skip > size {
+ skip = size - offset
+ }
+ mth.size = size
+ mth.PrependSize = prependSize
+ mth.skip = skip
+ mth.hashes = make([][MTHSize]byte, prepends, 1+size/MTHBlockSize)
+ return &mth
+}
+
+func (mth *MTH) Reset() { panic("not implemented") }
+
+func (mth *MTH) Size() int { return MTHSize }
+
+func (mth *MTH) BlockSize() int { return MTHBlockSize }
+
+func (mth *MTH) Write(data []byte) (int, error) {
+ if mth.finished {
+ return 0, errors.New("already Sum()ed")
+ }
+ n, err := mth.buf.Write(data)
+ if err != nil {
+ return n, err
+ }
+ if mth.skip > 0 && int64(mth.buf.Len()) >= mth.skip {
+ mth.buf.Next(int(mth.skip))
+ mth.skip = 0
+ }
+ for mth.buf.Len() >= MTHBlockSize {
+ if _, err = mth.hasher.Write(mth.buf.Next(MTHBlockSize)); err != nil {
+ return n, err
+ }
+ h := new([MTHSize]byte)
+ mth.hasher.Sum(h[:0])
+ mth.hasher.Reset()
+ mth.hashes = append(mth.hashes, *h)
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{
+ MTHEventAppend,
+ 0, len(mth.hashes) - 1,
+ mth.hashes[len(mth.hashes)-1][:],
+ }
+ }
+ }
+ return n, err
+}
+
+func (mth *MTH) PrependFrom(r io.Reader) (int, error) {
+ if mth.finished {
+ return 0, errors.New("already Sum()ed")
+ }
+ var err error
+ buf := make([]byte, MTHBlockSize)
+ var i, n, read int
+ fullsize := mth.PrependSize
+ les := LEs{{"Pkt", mth.PktName}, {"FullSize", fullsize}, {"Size", 0}}
+ for mth.PrependSize >= MTHBlockSize {
+ n, err = io.ReadFull(r, buf)
+ read += n
+ mth.PrependSize -= MTHBlockSize
+ if err != nil {
+ return read, err
+ }
+ if _, err = mth.hasher.Write(buf); err != nil {
+ panic(err)
+ }
+ mth.hasher.Sum(mth.hashes[i][:0])
+ mth.hasher.Reset()
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{MTHEventPrepend, 0, i, mth.hashes[i][:]}
+ }
+ if mth.PktName != "" {
+ les[len(les)-1].V = int64(read)
+ Progress("check", les)
+ }
+ i++
+ }
+ if mth.PrependSize > 0 {
+ n, err = io.ReadFull(r, buf[:mth.PrependSize])
+ read += n
+ if err != nil {
+ return read, err
+ }
+ if _, err = mth.hasher.Write(buf[:mth.PrependSize]); err != nil {
+ panic(err)
+ }
+ mth.hasher.Sum(mth.hashes[i][:0])
+ mth.hasher.Reset()
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{MTHEventPrepend, 0, i, mth.hashes[i][:]}
+ }
+ if mth.PktName != "" {
+ les[len(les)-1].V = fullsize
+ Progress("check", les)
+ }
+ }
+ return read, nil
+}
+
+func (mth *MTH) Sum(b []byte) []byte {
+ if mth.finished {
+ return append(b, mth.hashes[0][:]...)
+ }
+ if mth.buf.Len() > 0 {
+ chunk := mth.buf.Next(MTHBlockSize)
+ if _, err := mth.hasher.Write(chunk); err != nil {
+ panic(err)
+ }
+ h := new([MTHSize]byte)
+ mth.hasher.Sum(h[:0])
+ mth.hasher.Reset()
+ mth.hashes = append(mth.hashes, *h)
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{
+ MTHEventAppend,
+ 0, len(mth.hashes) - 1,
+ mth.hashes[len(mth.hashes)-1][:],
+ }
+ }
+ }
+ switch len(mth.hashes) {
+ case 0:
+ h := new([MTHSize]byte)
+ if _, err := mth.hasher.Write(nil); err != nil {
+ panic(err)
+ }
+ mth.hasher.Sum(h[:0])
+ mth.hasher.Reset()
+ mth.hashes = append(mth.hashes, *h)
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{MTHEventAppend, 0, 0, mth.hashes[0][:]}
+ }
+ fallthrough
+ case 1:
+ mth.hashes = append(mth.hashes, mth.hashes[0])
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{MTHEventAppend, 0, 1, mth.hashes[1][:]}
+ }
+ }
+ mth.hasher = blake3.New(MTHSize, MTHNodeKey[:])
+ level := 1
+ for len(mth.hashes) != 1 {
+ hashesUp := make([][MTHSize]byte, 0, 1+len(mth.hashes)/2)
+ pairs := (len(mth.hashes) / 2) * 2
+ for i := 0; i < pairs; i += 2 {
+ if _, err := mth.hasher.Write(mth.hashes[i][:]); err != nil {
+ panic(err)
+ }
+ if _, err := mth.hasher.Write(mth.hashes[i+1][:]); err != nil {
+ panic(err)
+ }
+ h := new([MTHSize]byte)
+ mth.hasher.Sum(h[:0])
+ mth.hasher.Reset()
+ hashesUp = append(hashesUp, *h)
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{
+ MTHEventFold,
+ level, len(hashesUp) - 1,
+ hashesUp[len(hashesUp)-1][:],
+ }
+ }
+ }
+ if len(mth.hashes)%2 == 1 {
+ hashesUp = append(hashesUp, mth.hashes[len(mth.hashes)-1])
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{
+ MTHEventAppend,
+ level, len(hashesUp) - 1,
+ hashesUp[len(hashesUp)-1][:],
+ }
+ }
+ }
+ mth.hashes = hashesUp
+ level++
+ }
+ mth.finished = true
+ if mth.Events != nil {
+ close(mth.Events)
+ }
+ return append(b, mth.hashes[0][:]...)
+}
--- /dev/null
+/*
+NNCP -- Node to Node copy, utilities for store-and-forward data exchange
+Copyright (C) 2016-2021 Sergey Matveev <stargrave@stargrave.org>
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation, version 3 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program. If not, see <http://www.gnu.org/licenses/>.
+*/
+
+package nncp
+
+import (
+ "bytes"
+ "io"
+ "testing"
+ "testing/quick"
+
+ "lukechampine.com/blake3"
+)
+
+func TestMTHSymmetric(t *testing.T) {
+ xof := blake3.New(32, nil).XOF()
+ f := func(size uint32, offset uint32) bool {
+ size %= 2 * 1024 * 1024
+ data := make([]byte, int(size), int(size)+1)
+ if _, err := io.ReadFull(xof, data); err != nil {
+ panic(err)
+ }
+ if size == 0 {
+ offset = 0
+ } else {
+ offset %= size
+ }
+
+ mth := MTHNew(int64(size), 0)
+ if _, err := io.Copy(mth, bytes.NewReader(data)); err != nil {
+ panic(err)
+ }
+ hsh0 := mth.Sum(nil)
+
+ mth = MTHNew(int64(size), int64(offset))
+ if _, err := io.Copy(mth, bytes.NewReader(data[int(offset):])); err != nil {
+ panic(err)
+ }
+ if _, err := mth.PrependFrom(bytes.NewReader(data)); err != nil {
+ panic(err)
+ }
+ if !bytes.Equal(hsh0, mth.Sum(nil)) {
+ return false
+ }
+
+ mth = MTHNew(0, 0)
+ mth.Write(data)
+ if !bytes.Equal(hsh0, mth.Sum(nil)) {
+ return false
+ }
+
+ data = append(data, 0)
+ mth = MTHNew(int64(size)+1, 0)
+ if _, err := io.Copy(mth, bytes.NewReader(data)); err != nil {
+ panic(err)
+ }
+ hsh00 := mth.Sum(nil)
+ if bytes.Equal(hsh0, hsh00) {
+ return false
+ }
+
+ mth = MTHNew(int64(size)+1, int64(offset))
+ if _, err := io.Copy(mth, bytes.NewReader(data[int(offset):])); err != nil {
+ panic(err)
+ }
+ if _, err := mth.PrependFrom(bytes.NewReader(data)); err != nil {
+ panic(err)
+ }
+ if !bytes.Equal(hsh00, mth.Sum(nil)) {
+ return false
+ }
+
+ mth = MTHNew(0, 0)
+ mth.Write(data)
+ if !bytes.Equal(hsh00, mth.Sum(nil)) {
+ return false
+ }
+
+ return true
+ }
+ if err := quick.Check(f, nil); err != nil {
+ t.Error(err)
+ }
+}
+
+func TestMTHNull(t *testing.T) {
+ mth := MTHNew(0, 0)
+ if _, err := mth.Write(nil); err != nil {
+ t.Error(err)
+ }
+ mth.Sum(nil)
+}
const Base32Encoded32Len = 52
var (
- Version string = "6.6.0"
+ Version string = "7.0.0"
Base32Codec *base32.Encoding = base32.StdEncoding.WithPadding(base32.NoPadding)
)
"io"
xdr "github.com/davecgh/go-xdr/xdr2"
- "golang.org/x/crypto/blake2b"
"golang.org/x/crypto/chacha20poly1305"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
"golang.org/x/crypto/nacl/box"
"golang.org/x/crypto/poly1305"
+ "lukechampine.com/blake3"
)
type PktType uint8
const (
EncBlkSize = 128 * (1 << 10)
- KDFXOFSize = chacha20poly1305.KeySize * 2
PktTypeFile PktType = iota
PktTypeFreq PktType = iota
var (
MagicNNCPPv3 [8]byte = [8]byte{'N', 'N', 'C', 'P', 'P', 0, 0, 3}
- MagicNNCPEv4 [8]byte = [8]byte{'N', 'N', 'C', 'P', 'E', 0, 0, 4}
+ MagicNNCPEv5 [8]byte = [8]byte{'N', 'N', 'C', 'P', 'E', 0, 0, 5}
BadMagic error = errors.New("Unknown magic number")
BadPktType error = errors.New("Unknown packet type")
panic(err)
}
pktEnc := PktEnc{
- Magic: MagicNNCPEv4,
+ Magic: MagicNNCPEv5,
Sender: dummyId,
Recipient: dummyId,
}
return nil, err
}
tbs := PktTbs{
- Magic: MagicNNCPEv4,
+ Magic: MagicNNCPEv5,
Nice: nice,
Sender: our.Id,
Recipient: their.Id,
signature := new([ed25519.SignatureSize]byte)
copy(signature[:], ed25519.Sign(our.SignPrv, tbsBuf.Bytes()))
pktEnc := PktEnc{
- Magic: MagicNNCPEv4,
+ Magic: MagicNNCPEv5,
Nice: nice,
Sender: our.Id,
Recipient: their.Id,
}
sharedKey := new([32]byte)
curve25519.ScalarMult(sharedKey, prvEph, their.ExchPub)
- kdf, err := blake2b.NewXOF(KDFXOFSize, sharedKey[:])
- if err != nil {
- return nil, err
- }
- if _, err = kdf.Write(MagicNNCPEv4[:]); err != nil {
- return nil, err
- }
key := make([]byte, chacha20poly1305.KeySize)
- if _, err = io.ReadFull(kdf, key); err != nil {
- return nil, err
- }
+ blake3.DeriveKey(key, string(MagicNNCPEv5[:]), sharedKey[:])
aead, err := chacha20poly1305.New(key)
if err != nil {
return nil, err
return nil, io.ErrUnexpectedEOF
}
if padSize > 0 {
- if _, err = io.ReadFull(kdf, key); err != nil {
- return nil, err
- }
- kdf, err = blake2b.NewXOF(blake2b.OutputLengthUnknown, key)
- if err != nil {
- return nil, err
- }
- if _, err = io.CopyN(out, kdf, padSize); err != nil {
+ blake3.DeriveKey(key, string(MagicNNCPEv5[:])+" PAD", sharedKey[:])
+ xof := blake3.New(32, key).XOF()
+ if _, err = io.CopyN(out, xof, padSize); err != nil {
return nil, err
}
}
func TbsVerify(our *NodeOur, their *Node, pktEnc *PktEnc) (bool, error) {
tbs := PktTbs{
- Magic: MagicNNCPEv4,
+ Magic: MagicNNCPEv5,
Nice: pktEnc.Nice,
Sender: their.Id,
Recipient: our.Id,
if err != nil {
return nil, 0, err
}
- if pktEnc.Magic != MagicNNCPEv4 {
+ if pktEnc.Magic != MagicNNCPEv5 {
return nil, 0, BadMagic
}
their, known := nodes[*pktEnc.Sender]
}
sharedKey := new([32]byte)
curve25519.ScalarMult(sharedKey, our.ExchPrv, &pktEnc.ExchPub)
- kdf, err := blake2b.NewXOF(KDFXOFSize, sharedKey[:])
- if err != nil {
- return their, 0, err
- }
- if _, err = kdf.Write(MagicNNCPEv4[:]); err != nil {
- return their, 0, err
- }
key := make([]byte, chacha20poly1305.KeySize)
- if _, err = io.ReadFull(kdf, key); err != nil {
- return their, 0, err
- }
+ blake3.DeriveKey(key, string(MagicNNCPEv5[:]), sharedKey[:])
aead, err := chacha20poly1305.New(key)
if err != nil {
return their, 0, err
"time"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6/uilive"
+ "go.cypherpunks.ru/nncp/v7/uilive"
)
func init() {
"crypto/subtle"
"errors"
"fmt"
- "hash"
"io"
"os"
"path/filepath"
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/dustin/go-humanize"
"github.com/flynn/noise"
- "golang.org/x/crypto/blake2b"
)
const (
SPHeadOverhead = 4
)
-type SPCheckerQueues struct {
- appeared chan *[32]byte
- checked chan *[32]byte
+type MTHAndOffset struct {
+ mth *MTH
+ offset uint64
+}
+
+type SPCheckerTask struct {
+ nodeId *NodeId
+ hsh *[MTHSize]byte
+ mth *MTH
+ done chan []byte
}
var (
DefaultDeadline = 10 * time.Second
PingTimeout = time.Minute
- spCheckers = make(map[NodeId]*SPCheckerQueues)
- SPCheckersWg sync.WaitGroup
+ spCheckerTasks chan SPCheckerTask
+ SPCheckerWg sync.WaitGroup
+ spCheckerOnce sync.Once
)
type FdAndFullSize struct {
fullSize int64
}
-type HasherAndOffset struct {
- h hash.Hash
- offset uint64
-}
-
type SPType uint8
const (
type SPInfo struct {
Nice uint8
Size uint64
- Hash *[32]byte
+ Hash *[MTHSize]byte
}
type SPFreq struct {
- Hash *[32]byte
+ Hash *[MTHSize]byte
Offset uint64
}
type SPFile struct {
- Hash *[32]byte
+ Hash *[MTHSize]byte
Offset uint64
Payload []byte
}
type SPDone struct {
- Hash *[32]byte
+ Hash *[MTHSize]byte
}
type SPRaw struct {
copy(SPPingMarshalized, buf.Bytes())
buf.Reset()
- spInfo := SPInfo{Nice: 123, Size: 123, Hash: new([32]byte)}
+ spInfo := SPInfo{Nice: 123, Size: 123, Hash: new([MTHSize]byte)}
if _, err := xdr.Marshal(&buf, spInfo); err != nil {
panic(err)
}
SPInfoOverhead = buf.Len()
buf.Reset()
- spFreq := SPFreq{Hash: new([32]byte), Offset: 123}
+ spFreq := SPFreq{Hash: new([MTHSize]byte), Offset: 123}
if _, err := xdr.Marshal(&buf, spFreq); err != nil {
panic(err)
}
SPFreqOverhead = buf.Len()
buf.Reset()
- spFile := SPFile{Hash: new([32]byte), Offset: 123}
+ spFile := SPFile{Hash: new([MTHSize]byte), Offset: 123}
if _, err := xdr.Marshal(&buf, spFile); err != nil {
panic(err)
}
SPFileOverhead = buf.Len()
+ spCheckerTasks = make(chan SPCheckerTask)
}
func MarshalSP(typ SPType, sp interface{}) []byte {
csTheir *noise.CipherState
payloads chan []byte
pings chan struct{}
- infosTheir map[[32]byte]*SPInfo
- infosOurSeen map[[32]byte]uint8
+ infosTheir map[[MTHSize]byte]*SPInfo
+ infosOurSeen map[[MTHSize]byte]uint8
queueTheir []*FreqWithNice
wg sync.WaitGroup
RxBytes int64
txRate int
isDead chan struct{}
listOnly bool
- onlyPkts map[[32]byte]bool
+ onlyPkts map[[MTHSize]byte]bool
writeSPBuf bytes.Buffer
fds map[string]FdAndFullSize
fdsLock sync.RWMutex
- fileHashers map[string]*HasherAndOffset
- checkerQueues SPCheckerQueues
+ fileHashers map[string]*MTHAndOffset
progressBars map[string]struct{}
sync.RWMutex
}
state.Ctx.UnlockDir(state.txLock)
}
-func SPChecker(ctx *Ctx, nodeId *NodeId, appeared, checked chan *[32]byte) {
- for hshValue := range appeared {
- pktName := Base32Codec.EncodeToString(hshValue[:])
- les := LEs{
- {"XX", string(TRx)},
- {"Node", nodeId},
- {"Pkt", pktName},
- }
- SPCheckersWg.Add(1)
- ctx.LogD("sp-checker", les, func(les LEs) string {
- return fmt.Sprintf("Checksumming %s/rx/%s", ctx.NodeName(nodeId), pktName)
- })
- size, err := ctx.CheckNoCK(nodeId, hshValue)
- les = append(les, LE{"Size", size})
- if err != nil {
- ctx.LogE("sp-checker", les, err, func(les LEs) string {
- return fmt.Sprintf(
- "Checksumming %s/rx/%s (%s)", ctx.NodeName(nodeId), pktName,
- humanize.IBytes(uint64(size)),
- )
- })
- continue
- }
- ctx.LogI("sp-checker-done", les, func(les LEs) string {
- return fmt.Sprintf(
- "Packet %s is retreived (%s)",
- pktName, humanize.IBytes(uint64(size)),
- )
- })
- SPCheckersWg.Done()
- go func(hsh *[32]byte) { checked <- hsh }(hshValue)
- }
-}
-
func (state *SPState) WriteSP(dst io.Writer, payload []byte, ping bool) error {
state.writeSPBuf.Reset()
n, err := xdr.Marshal(&state.writeSPBuf, SPRaw{
return sp.Payload, nil
}
-func (ctx *Ctx) infosOur(nodeId *NodeId, nice uint8, seen *map[[32]byte]uint8) [][]byte {
+func (ctx *Ctx) infosOur(nodeId *NodeId, nice uint8, seen *map[[MTHSize]byte]uint8) [][]byte {
var infos []*SPInfo
var totalSize int64
for job := range ctx.Jobs(nodeId, TTx) {
state.hs = hs
state.payloads = make(chan []byte)
state.pings = make(chan struct{})
- state.infosTheir = make(map[[32]byte]*SPInfo)
- state.infosOurSeen = make(map[[32]byte]uint8)
+ state.infosTheir = make(map[[MTHSize]byte]*SPInfo)
+ state.infosOurSeen = make(map[[MTHSize]byte]uint8)
state.progressBars = make(map[string]struct{})
state.started = started
state.rxLock = rxLock
state.hs = hs
state.payloads = make(chan []byte)
state.pings = make(chan struct{})
- state.infosOurSeen = make(map[[32]byte]uint8)
- state.infosTheir = make(map[[32]byte]*SPInfo)
+ state.infosOurSeen = make(map[[MTHSize]byte]uint8)
+ state.infosTheir = make(map[[MTHSize]byte]*SPInfo)
state.progressBars = make(map[string]struct{})
state.started = started
state.xxOnly = xxOnly
) error {
les := LEs{{"Node", state.Node.Id}, {"Nice", int(state.Nice)}}
state.fds = make(map[string]FdAndFullSize)
- state.fileHashers = make(map[string]*HasherAndOffset)
+ state.fileHashers = make(map[string]*MTHAndOffset)
state.isDead = make(chan struct{})
if state.maxOnlineTime > 0 {
state.mustFinishAt = state.started.Add(state.maxOnlineTime)
}
-
- // Checker
if !state.NoCK {
- queues := spCheckers[*state.Node.Id]
- if queues == nil {
- queues = &SPCheckerQueues{
- appeared: make(chan *[32]byte),
- checked: make(chan *[32]byte),
- }
- spCheckers[*state.Node.Id] = queues
- go SPChecker(state.Ctx, state.Node.Id, queues.appeared, queues.checked)
- }
- state.checkerQueues = *queues
- go func() {
- for job := range state.Ctx.JobsNoCK(state.Node.Id) {
- if job.PktEnc.Nice <= state.Nice {
- state.checkerQueues.appeared <- job.HshValue
- }
- }
- }()
- state.wg.Add(1)
- go func() {
- defer state.wg.Done()
- for {
- select {
- case <-state.isDead:
- return
- case hsh := <-state.checkerQueues.checked:
- state.payloads <- MarshalSP(SPTypeDone, SPDone{hsh})
- }
- }
- }()
+ spCheckerOnce.Do(func() { go SPChecker(state.Ctx) })
}
// Remaining handshake payload sending
}
state.Ctx.LogD("sp-sending", append(les, LE{"Size", int64(len(payload))}), logMsg)
conn.SetWriteDeadline(time.Now().Add(DefaultDeadline)) // #nosec G104
- if err := state.WriteSP(conn, state.csOur.Encrypt(nil, nil, payload), ping); err != nil {
+ ct, err := state.csOur.Encrypt(nil, nil, payload)
+ if err != nil {
+ state.Ctx.LogE("sp-encrypting", les, err, logMsg)
+ return
+ }
+ if err := state.WriteSP(conn, ct, ping); err != nil {
state.Ctx.LogE("sp-sending", les, err, logMsg)
return
}
close(state.payloads)
close(state.pings)
state.Duration = time.Now().Sub(state.started)
- SPCheckersWg.Wait()
state.dirUnlock()
state.RxSpeed = state.RxBytes
state.TxSpeed = state.TxBytes
pktName, humanize.IBytes(uint64(len(file.Payload))),
)
}
+ state.RLock()
+ infoTheir, ok := state.infosTheir[*file.Hash]
+ state.RUnlock()
+ if !ok {
+ state.Ctx.LogE("sp-file-open", lesp, errors.New("unknown file"), func(les LEs) string {
+ return logMsg(les) + ": unknown file"
+ })
+ continue
+ }
+ fullsize := int64(infoTheir.Size)
+ lesp = append(lesp, LE{"FullSize", fullsize})
dirToSync := filepath.Join(
state.Ctx.Spool,
state.Node.Id.String(),
state.fdsLock.RLock()
fdAndFullSize, exists := state.fds[filePathPart]
state.fdsLock.RUnlock()
+ hasherAndOffset := state.fileHashers[filePath]
var fd *os.File
if exists {
fd = fdAndFullSize.fd
state.fdsLock.Lock()
state.fds[filePathPart] = FdAndFullSize{fd: fd}
state.fdsLock.Unlock()
- if file.Offset == 0 {
- h, err := blake2b.New256(nil)
- if err != nil {
- panic(err)
+ if !state.NoCK {
+ hasherAndOffset = &MTHAndOffset{
+ mth: MTHNew(fullsize, int64(file.Offset)),
+ offset: file.Offset,
}
- state.fileHashers[filePath] = &HasherAndOffset{h: h}
+ state.fileHashers[filePath] = hasherAndOffset
}
}
state.Ctx.LogD(
state.closeFd(filePathPart)
return nil, err
}
- hasherAndOffset, hasherExists := state.fileHashers[filePath]
- if hasherExists {
+ if hasherAndOffset != nil {
if hasherAndOffset.offset == file.Offset {
- if _, err = hasherAndOffset.h.Write(file.Payload); err != nil {
+ if _, err = hasherAndOffset.mth.Write(file.Payload); err != nil {
panic(err)
}
hasherAndOffset.offset += uint64(len(file.Payload))
} else {
- state.Ctx.LogD(
- "sp-file-offset-differs", lesp,
+ state.Ctx.LogE(
+ "sp-file-offset-differs", lesp, errors.New("offset differs"),
func(les LEs) string {
- return logMsg(les) + ": offset differs, deleting hasher"
+ return logMsg(les) + ": deleting hasher"
},
)
delete(state.fileHashers, filePath)
- hasherExists = false
+ hasherAndOffset = nil
}
}
ourSize := int64(file.Offset + uint64(len(file.Payload)))
- lesp[len(lesp)-1].V = ourSize
- fullsize := int64(0)
- state.RLock()
- infoTheir, ok := state.infosTheir[*file.Hash]
- state.RUnlock()
- if ok {
- fullsize = int64(infoTheir.Size)
- }
- lesp = append(lesp, LE{"FullSize", fullsize})
+ lesp[len(lesp)-2].V = ourSize
if state.Ctx.ShowPrgrs {
state.progressBars[pktName] = struct{}{}
Progress("Rx", lesp)
state.closeFd(filePathPart)
continue
}
- if hasherExists {
- if bytes.Compare(hasherAndOffset.h.Sum(nil), file.Hash[:]) != 0 {
- state.Ctx.LogE(
- "sp-file-bad-checksum", lesp,
- errors.New("checksum mismatch"),
- logMsg,
- )
- continue
- }
- if err = os.Rename(filePathPart, filePath); err != nil {
- state.Ctx.LogE("sp-file-rename", lesp, err, func(les LEs) string {
- return logMsg(les) + ": renaming"
- })
- continue
- }
- if err = DirSync(dirToSync); err != nil {
- state.Ctx.LogE("sp-file-dirsync", lesp, err, func(les LEs) string {
- return logMsg(les) + ": dirsyncing"
- })
- continue
- }
- state.Ctx.LogI("sp-file-done", lesp, func(les LEs) string {
- return logMsg(les) + ": done"
- })
- state.wg.Add(1)
- go func() {
- state.payloads <- MarshalSP(SPTypeDone, SPDone{file.Hash})
- state.wg.Done()
- }()
- state.Lock()
- delete(state.infosTheir, *file.Hash)
- state.Unlock()
- if !state.Ctx.HdrUsage {
- state.closeFd(filePathPart)
- continue
- }
- if _, err = fd.Seek(0, io.SeekStart); err != nil {
- state.Ctx.LogE("sp-file-seek", lesp, err, func(les LEs) string {
- return logMsg(les) + ": seeking"
+ if hasherAndOffset != nil {
+ delete(state.fileHashers, filePath)
+ if hasherAndOffset.mth.PrependSize == 0 {
+ if !bytes.Equal(hasherAndOffset.mth.Sum(nil), file.Hash[:]) {
+ state.Ctx.LogE(
+ "sp-file-bad-checksum", lesp,
+ errors.New("checksum mismatch"),
+ logMsg,
+ )
+ state.closeFd(filePathPart)
+ continue
+ }
+ if err = os.Rename(filePathPart, filePath); err != nil {
+ state.Ctx.LogE("sp-file-rename", lesp, err, func(les LEs) string {
+ return logMsg(les) + ": renaming"
+ })
+ state.closeFd(filePathPart)
+ continue
+ }
+ if err = DirSync(dirToSync); err != nil {
+ state.Ctx.LogE("sp-file-dirsync", lesp, err, func(les LEs) string {
+ return logMsg(les) + ": dirsyncing"
+ })
+ state.closeFd(filePathPart)
+ continue
+ }
+ state.Ctx.LogI("sp-file-done", lesp, func(les LEs) string {
+ return logMsg(les) + ": done"
})
+ state.wg.Add(1)
+ go func() {
+ state.payloads <- MarshalSP(SPTypeDone, SPDone{file.Hash})
+ state.wg.Done()
+ }()
+ state.Lock()
+ delete(state.infosTheir, *file.Hash)
+ state.Unlock()
+ if !state.Ctx.HdrUsage {
+ continue
+ }
+ if _, err = fd.Seek(0, io.SeekStart); err != nil {
+ state.Ctx.LogE("sp-file-seek", lesp, err, func(les LEs) string {
+ return logMsg(les) + ": seeking"
+ })
+ state.closeFd(filePathPart)
+ continue
+ }
+ _, pktEncRaw, err := state.Ctx.HdrRead(fd)
state.closeFd(filePathPart)
+ if err != nil {
+ state.Ctx.LogE("sp-file-hdr-read", lesp, err, func(les LEs) string {
+ return logMsg(les) + ": HdrReading"
+ })
+ continue
+ }
+ state.Ctx.HdrWrite(pktEncRaw, filePath)
continue
}
- _, pktEncRaw, err := state.Ctx.HdrRead(fd)
- state.closeFd(filePathPart)
- if err != nil {
- state.Ctx.LogE("sp-file-hdr-read", lesp, err, func(les LEs) string {
- return logMsg(les) + ": HdrReading"
- })
- continue
- }
- state.Ctx.HdrWrite(pktEncRaw, filePath)
- continue
}
state.closeFd(filePathPart)
if err = os.Rename(filePathPart, filePath+NoCKSuffix); err != nil {
state.Lock()
delete(state.infosTheir, *file.Hash)
state.Unlock()
- if !state.NoCK {
- state.checkerQueues.appeared <- file.Hash
+ if hasherAndOffset != nil {
+ go func() {
+ spCheckerTasks <- SPCheckerTask{
+ nodeId: state.Node.Id,
+ hsh: file.Hash,
+ mth: hasherAndOffset.mth,
+ done: state.payloads,
+ }
+ }()
}
case SPTypeDone:
}
return payloadsSplit(replies), nil
}
+
+func SPChecker(ctx *Ctx) {
+ for t := range spCheckerTasks {
+ pktName := Base32Codec.EncodeToString(t.hsh[:])
+ les := LEs{
+ {"XX", string(TRx)},
+ {"Node", t.nodeId},
+ {"Pkt", pktName},
+ }
+ SPCheckerWg.Add(1)
+ ctx.LogD("sp-checker", les, func(les LEs) string {
+ return fmt.Sprintf("Checksumming %s/rx/%s", ctx.NodeName(t.nodeId), pktName)
+ })
+ size, err := ctx.CheckNoCK(t.nodeId, t.hsh, t.mth)
+ les = append(les, LE{"Size", size})
+ if err != nil {
+ ctx.LogE("sp-checker", les, err, func(les LEs) string {
+ return fmt.Sprintf(
+ "Checksumming %s/rx/%s (%s)", ctx.NodeName(t.nodeId), pktName,
+ humanize.IBytes(uint64(size)),
+ )
+ })
+ SPCheckerWg.Done()
+ continue
+ }
+ ctx.LogI("sp-checker-done", les, func(les LEs) string {
+ return fmt.Sprintf(
+ "Packet %s is retrieved (%s)",
+ pktName, humanize.IBytes(uint64(size)),
+ )
+ })
+ SPCheckerWg.Done()
+ go func(t SPCheckerTask) {
+ defer func() { recover() }()
+ t.done <- MarshalSP(SPTypeDone, SPDone{t.hsh})
+ }(t)
+ }
+}
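The checker rewrite above replaces the per-node goroutine pairs with a single lazily started worker: tasks arrive on one shared channel, the worker is spawned at most once via `sync.Once`, and each reply is sent from a throwaway goroutine that recovers if the session's payload channel was already closed. A reduced sketch of that pattern (all names here are illustrative, not NNCP's):

```go
package main

import (
	"fmt"
	"sync"
)

// task mirrors the SPCheckerTask idea: some work plus a channel the result
// is delivered on, which may already be closed when the worker answers.
type task struct {
	n    int
	done chan int
}

var (
	tasks      = make(chan task)
	workerOnce sync.Once // same role as spCheckerOnce
)

// worker is started at most once, lazily, and serves all tasks in order.
func worker() {
	for t := range tasks {
		res := t.n * t.n // stand-in for the checksum work
		go func(t task, res int) {
			// Sending on a closed channel panics; the checker swallows
			// that with recover(), as SPChecker does for state.payloads.
			defer func() { recover() }()
			t.done <- res
		}(t, res)
	}
}

func main() {
	workerOnce.Do(func() { go worker() })
	done := make(chan int)
	tasks <- task{n: 7, done: done}
	fmt.Println(<-done)
}
```

The unbuffered `tasks` channel also serializes the expensive re-checksumming, so concurrent sessions cannot saturate the disk with parallel scans.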
"path/filepath"
"strconv"
"time"
-
- "golang.org/x/crypto/blake2b"
)
func TempFile(dir, prefix string) (*os.File, error) {
if err != nil {
return nil, err
}
- hsh, err := blake2b.New256(nil)
- if err != nil {
- return nil, err
- }
+ hsh := MTHNew(0, 0)
return &TmpFileWHash{
W: bufio.NewWriter(io.MultiWriter(hsh, tmp)),
Fd: tmp,
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/dustin/go-humanize"
"github.com/klauspost/compress/zstd"
- "golang.org/x/crypto/blake2b"
"golang.org/x/crypto/poly1305"
)
if noTrns {
goto Closing
}
- dst := new([blake2b.Size256]byte)
+ dst := new([MTHSize]byte)
copy(dst[:], pkt.Path[:int(pkt.PathLen)])
nodeId := NodeId(*dst)
node, known := ctx.Neigh[nodeId]
"testing/quick"
xdr "github.com/davecgh/go-xdr/xdr2"
- "golang.org/x/crypto/blake2b"
)
var (
ctx.Neigh[*nodeOur.Id] = nodeOur.Their()
incomingPath := filepath.Join(spool, "incoming")
for _, fileData := range files {
- checksum := blake2b.Sum256(fileData)
- fileName := Base32Codec.EncodeToString(checksum[:])
+ hasher := MTHNew(0, 0)
+ hasher.Write(fileData)
+ fileName := Base32Codec.EncodeToString(hasher.Sum(nil))
src := filepath.Join(spool, fileName)
if err := ioutil.WriteFile(src, fileData, os.FileMode(0600)); err != nil {
panic(err)
return false
}
for _, fileData := range files {
- checksum := blake2b.Sum256(fileData)
- fileName := Base32Codec.EncodeToString(checksum[:])
+ hasher := MTHNew(0, 0)
+ hasher.Write(fileData)
+ fileName := Base32Codec.EncodeToString(hasher.Sum(nil))
data, err := ioutil.ReadFile(filepath.Join(incomingPath, fileName))
if err != nil {
panic(err)
pktTrans := Pkt{
Magic: MagicNNCPPv3,
Type: PktTypeTrns,
- PathLen: blake2b.Size256,
+ PathLen: MTHSize,
}
copy(pktTrans.Path[:], nodeOur.Id[:])
var dst bytes.Buffer
t.Error(err)
return false
}
- checksum := blake2b.Sum256(dst.Bytes())
+ hasher := MTHNew(0, 0)
+ hasher.Write(dst.Bytes())
if err := ioutil.WriteFile(
- filepath.Join(rxPath, Base32Codec.EncodeToString(checksum[:])),
+ filepath.Join(rxPath, Base32Codec.EncodeToString(hasher.Sum(nil))),
dst.Bytes(),
os.FileMode(0600),
); err != nil {
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/dustin/go-humanize"
"github.com/klauspost/compress/zstd"
- "golang.org/x/crypto/blake2b"
"golang.org/x/crypto/chacha20poly1305"
)
leftSize := fileSize
metaPkt := ChunkedMeta{
- Magic: MagicNNCPMv1,
+ Magic: MagicNNCPMv2,
FileSize: uint64(fileSize),
ChunkSize: uint64(chunkSize),
- Checksums: make([][32]byte, 0, (fileSize/chunkSize)+1),
+ Checksums: make([][MTHSize]byte, 0, (fileSize/chunkSize)+1),
}
for i := int64(0); i < (fileSize/chunkSize)+1; i++ {
- hsh := new([32]byte)
+ hsh := new([MTHSize]byte)
metaPkt.Checksums = append(metaPkt.Checksums, *hsh)
}
var sizeToSend int64
if err != nil {
return err
}
- hsh, err = blake2b.New256(nil)
- if err != nil {
- return err
- }
+ hsh = MTHNew(0, 0)
_, err = ctx.Tx(
node,
pkt,
"testing/quick"
xdr "github.com/davecgh/go-xdr/xdr2"
- "golang.org/x/crypto/blake2b"
)
func TestTx(t *testing.T) {
if pkt.Type != PktTypeTrns {
return false
}
- if bytes.Compare(pkt.Path[:blake2b.Size256], vias[i+1][:]) != 0 {
+ if !bytes.Equal(pkt.Path[:MTHSize], vias[i+1][:]) {
return false
}
}