nncp-exec
nncp-file
nncp-freq
+nncp-hash
nncp-log
nncp-pkt
nncp-reass
@node Chunked
@unnumbered Chunked files
-There is ability to transfer huge files with splitting them into smaller
+It is possible to transfer huge files by dividing them into smaller
chunks. Each chunk is treated like a separate file, producing a separate
outbound packet unrelated to the other ones.
@headitem @tab XDR type @tab Value
@item Magic number @tab
8-byte, fixed length opaque data @tab
- @verb{|N N C P M 0x00 0x00 0x01|}
+ @verb{|N N C P M 0x00 0x00 0x02|}
@item File size @tab
unsigned hyper integer @tab
Whole reassembled file's size
@item Chunk size @tab
    unsigned hyper integer @tab
    Size of each chunk (except for the last one, which could be smaller)
@item Checksums @tab
variable length array of 32 byte fixed length opaque data @tab
- BLAKE2b-256 checksum of each chunk
+ @ref{MTH} checksum of each chunk
@end multitable
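The fields above relate arithmetically: the checksums array holds one entry per chunk, and every chunk is exactly Chunk size long except possibly the last. A minimal sketch of that arithmetic (the helper names are hypothetical, not part of NNCP):

```go
package main

import "fmt"

// chunkCount returns how many chunks a file of fileSize bytes
// occupies when split into chunkSize-byte chunks (ceiling division).
func chunkCount(fileSize, chunkSize uint64) uint64 {
	return (fileSize + chunkSize - 1) / chunkSize
}

// lastChunkSize returns the size of the final chunk, which is the
// only one allowed to be smaller than chunkSize.
func lastChunkSize(fileSize, chunkSize uint64) uint64 {
	if r := fileSize % chunkSize; r != 0 {
		return r
	}
	return chunkSize
}

func main() {
	const MiB = 1 << 20
	fmt.Println(chunkCount(5*MiB+100, 2*MiB))    // 3 chunks
	fmt.Println(lastChunkSize(5*MiB+100, 2*MiB)) // 1 MiB + 100 bytes
}
```

The checksums array in the @file{.meta} packet therefore has exactly `chunkCount` entries.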
@anchor{ChunkedZFS}
Perform @ref{Spool, spool} directory integrity check. Read all files
that have Base32-encoded filenames and compare them with the recalculated
-BLAKE2b hash output of their contents.
+@ref{MTH} hash output of their contents.
The most useful mode of operation is with @option{-nock} option, that
checks integrity of @file{.nock} files, renaming them to ordinary
temporary file location directory with @env{TMPDIR} environment
variable. Encryption is performed in AEAD mode with
@url{https://cr.yp.to/chacha.html, ChaCha20}-@url{https://en.wikipedia.org/wiki/Poly1305, Poly1305}
-algorithms. Data is splitted on 128 KiB blocks. Each block is encrypted
+algorithms. Data is divided into 128 KiB blocks. Each block is encrypted
with an increasing nonce counter. The file is deleted immediately after
creation, so even if the program crashes, the disk space will be
reclaimed with no need to clean it up later.
file request, then it will send a simple letter after successful file
queuing.
+@node nncp-hash
+@section nncp-hash
+
+@example
+$ nncp-hash [-file ...] [-seek X] [-debug] [-progress]
+@end example
+
+Calculate @ref{MTH} hash of either stdin, or @option{-file} if
+specified.
+
+With the @option{-seek} option you can optionally force seeking the
+file first: only the remaining part is hashed, and the unread preceding
+portion of data is prepended afterwards. It is intended only for
+testing and debugging of the MTH hasher capabilities.
+
+The @option{-debug} option shows all intermediate MTH hashes,
+and @option{-progress} will show a progress bar.
+
@node nncp-log
@section nncp-log
@item @code{github.com/gosuri/uilive} @tab MIT
@item @code{github.com/hjson/hjson-go} @tab MIT
@item @code{github.com/klauspost/compress} @tab BSD 3-Clause
+@item @code{github.com/klauspost/cpuid} @tab BSD 3-Clause
@item @code{go.cypherpunks.ru/balloon} @tab GNU LGPLv3
+@item @code{go.cypherpunks.ru/recfile} @tab GNU GPLv3
@item @code{golang.org/x/crypto} @tab BSD 3-Clause
@item @code{golang.org/x/net} @tab BSD 3-Clause
@item @code{golang.org/x/sys} @tab BSD 3-Clause
@item @code{golang.org/x/term} @tab BSD 3-Clause
+@item @code{lukechampine.com/blake3} @tab MIT
@end multitable
@multitable {XXXXX} {XXXX-XX-XX} {XXXX KiB} {link sign} {xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
@headitem Version @tab Date @tab Size @tab Tarball @tab SHA256 checksum
+@item @ref{Release 6.6.0, 6.6.0} @tab 2021-06-26 @tab 1041 KiB
+@tab @url{download/nncp-6.6.0.tar.xz, link} @url{download/nncp-6.6.0.tar.xz.sig, sign}
+@tab @code{73DB666F A5C30282 770516B2 F39F1240 74117B45 A9F4B484 0361861A 183577F1}
+
@item @ref{Release 6.5.0, 6.5.0} @tab 2021-05-30 @tab 1041 KiB
@tab @url{download/nncp-6.5.0.tar.xz, link} @url{download/nncp-6.5.0.tar.xz.sig, sign}
@tab @code{241D2AA7 27275CCF 86F06797 1AA8B3B8 D625C85C 4279DFDE 560216E3 38670B9A}
@node EBlob
@unnumbered EBlob format
-Eblob is an encrypted blob (binary large object, in the terms of
+EBlob is an encrypted blob (binary large object, in the terms of
databases), holding any kind of symmetrically encrypted data with the
passphrase used to derive the key. It is used to secure configuration
files, holding valuable private keys, allowing them to be transferred
(@url{https://password-hashing.net/, Password Hashing Competition}
winner).
-Eblob is an @url{https://tools.ietf.org/html/rfc4506, XDR}-encoded structure:
+EBlob is an @url{https://tools.ietf.org/html/rfc4506, XDR}-encoded structure:
@verbatim
+-------+------------------+------+
@end multitable
@enumerate
-@item generate the main key using @code{balloon(BLAKE2b-256, S, T, P,
-salt, password)}
-@item initialize @url{https://blake2.net/, BLAKE2Xb} XOF with generated
-main key and 32-byte output length
-@item feed @verb{|N N C P B 0x00 0x00 0x03|} magic number to XOF
-@item read 32-bytes of blob AEAD encryption key
+@item generate the key using @code{balloon(BLAKE2b-256, S, T, P, salt, password)}
@item encrypt and authenticate blob using
@url{https://cr.yp.to/chacha.html, ChaCha20}-@url{https://en.wikipedia.org/wiki/Poly1305, Poly1305}.
- Blob is splitted on 128 KiB blocks. Each block is encrypted with
- increasing nonce counter. Eblob packet itself, with empty blob
- field, is fed as an additional authenticated data
+ EBlob packet itself, with empty blob field, is fed as additional authenticated data
@end enumerate
* Spool directory: Spool.
* Log format: Log.
* Packet format: Packet.
+* Merkle Tree Hashing: MTH.
* Sync protocol: Sync.
* MultiCast Discovery: MCD.
* EBlob format: EBlob.
@include spool.texi
@include log.texi
@include pkt.texi
+@include mth.texi
@include sp.texi
@include mcd.texi
@include eblob.texi
@itemize
@item
- @command{nncp-daemon} sends multicast messages about its presence
- from time to time.
- See @ref{CfgMCDSend, mcd-send} configuration option.
+ @ref{nncp-daemon} sends multicast messages about its presence from
+ time to time. See @ref{CfgMCDSend, mcd-send} configuration option.
@item
- When @command{nncp-caller} sees them, it adds them as the most
+ When @ref{nncp-caller} sees them, it adds them as the most
preferred addresses to already known ones. If MCD address
announcement was not refreshed after two minutes -- it is removed.
See @ref{CfgMCDListen, mcd-listen} and
@end verbatim
Magic number is @verb{|N N C P D 0x00 0x00 0x01|} and sender is 32-byte
-identifier of the node. It is sent as UDP packet on IPv6 @verb{|ff02::1|}
-multicast address (all hosts in the network) and hard-coded @strong{5400}
-port. Operating system will use IPv6 link-local address as a source one,
-with the port taken from @command{nncp-daemon}'s @option{-bind} option.
-That way, IP packet itself will carry the link-scope reachable address
-of the daemon.
+identifier of the node. It is sent as a UDP packet to the IPv6
+@strong{@verb{|ff02::4e4e:4350|}} (hexadecimal ASCII @verb{|NNCP|})
+multicast address and @strong{5400} port. The operating system will use
+an IPv6 link-local address as the source one, with the port taken from
+@command{nncp-daemon}'s @option{-bind} option. That way, the IP packet
+itself will carry the link-scope reachable address of the daemon.
--- /dev/null
+@node MTH
+@unnumbered Merkle Tree Hashing
+
+NNCP uses @url{https://github.com/BLAKE3-team/BLAKE3, BLAKE3} hash
+function in @url{https://en.wikipedia.org/wiki/Merkle_Tree, Merkle Tree}
+mode of operation for checksumming @ref{Encrypted, encrypted packets}
+and @ref{Chunked, chunked} files.
+
+Previously plain BLAKE2b-256 was used, but it prevented partial
+calculation over file parts, so after a resumed download the whole
+file had to be fully read again.
+
+MTH divides data into 128 KiB blocks, hashes each of them independently
+and then calculates the Merkle tree root:
+
+@verbatim
+ node3
+ / \
+ / \
+ node2 leaf4
+ / \ \
+ / \ \
+ / \ \
+ / \ \
+ / \ \
+ node0 node1 leaf4
+ / \ / \ \
+ / \ / \ \
+leaf0 leaf1 leaf2 leaf3 leaf4
+ | | | | |
+block block block block block
+@end verbatim
+
+Leaf's value is the keyed BLAKE3-256 hash of the underlying block
+(128 KiB, except possibly the last one). Node's value is the keyed
+BLAKE3-256 hash of the two underlying leafs. Keys are
+@verb{|BLAKE3-256("NNCP MTH LEAF")|} and
+@verb{|BLAKE3-256("NNCP MTH NODE")|}.
+Keyed operation allows working with aligned data (128 KiB or 64 B
+boundaries), unlike the popular way of prepending @verb{|0x00|} and
+@verb{|0x01|} to the hashed data, and is more efficient with respect
+to BLAKE3's internal Merkle tree.
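The folding scheme above can be sketched as follows. This is only an illustration of the tree shape, not of the real digests: a prefix-keyed SHA-256 from the standard library stands in for keyed BLAKE3-256 (the real keys are the BLAKE3-256 hashes of the context strings shown above):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const blockSize = 128 * 1024

// keyedHash is a hedged stand-in for keyed BLAKE3-256; NNCP's real
// keys are BLAKE3-256("NNCP MTH LEAF") and BLAKE3-256("NNCP MTH NODE").
func keyedHash(key string, data []byte) []byte {
	h := sha256.New()
	h.Write([]byte(key))
	h.Write(data)
	return h.Sum(nil)
}

// mthRoot hashes each 128 KiB block into a leaf, then folds pairs of
// hashes level by level. When a level holds an odd number of hashes,
// the last one is carried up unchanged (as leaf4 in the figure above).
func mthRoot(data []byte) []byte {
	var level [][]byte
	for off := 0; off < len(data); off += blockSize {
		end := off + blockSize
		if end > len(data) {
			end = len(data)
		}
		level = append(level, keyedHash("LEAF", data[off:end]))
	}
	if len(level) == 0 {
		level = append(level, keyedHash("LEAF", nil))
	}
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				next = append(next, level[i]) // odd one out: carry up
				continue
			}
			pair := make([]byte, 0, 64)
			pair = append(pair, level[i]...)
			pair = append(pair, level[i+1]...)
			next = append(next, keyedHash("NODE", pair))
		}
		level = next
	}
	return level[0]
}

func main() {
	data := make([]byte, 5*blockSize) // five blocks, as in the figure
	fmt.Printf("%x\n", mthRoot(data))
}
```

With five blocks the fold proceeds exactly as drawn: (leaf0,leaf1) and (leaf2,leaf3) become node0 and node1, leaf4 is carried up twice, and node3 is the root.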
@node Новости
@section Новости
+@node Релиз 7.0.0
+@subsection Релиз 7.0.0
+@itemize
+
+@item
+Хэширование с BLAKE3 на базе деревьев Меркле (Merkle Tree Hashing, MTH)
+используется вместо BLAKE2b. Из-за этого, обратно @strong{несовместимое}
+изменение формата шифрованных файлов (всего что находится в spool
+области) и формата @file{.meta} файла при chunked передаче.
+
+Текущая реализация далека от оптимальной: в ней нет распараллеливания
+вычислений и имеет повышенное потребление памяти: около 512 KiB на
+каждый 1 GiB данных файла. Будущая оптимизация производительности и
+потребления памяти не должна привести к изменению формата пакетов. Но
+это всё равно в несколько раз быстрее BLAKE2b.
+
+@item
+Из-за использования MTH, докачиваемые в online режиме файлы потребуют
+чтения с диска только предшествующей части, а не полностью всего файла,
+как было прежде.
+
+@item
+Добавлена @command{nncp-hash} утилита для вычисления MTH хэша файла.
+
+@item
+В шифрованных пакетах BLAKE2 KDF и XOF функции заменены на BLAKE3. Ещё
+уменьшая количество примитивов. А также заголовок шифрованного файла
+теперь является ассоциированными данными при шифровании.
+
+@item
+MultiCast Discovery использует
+@verb{|ff02::4e4e:4350|} адрес вместо @verb{|ff02::1|}.
+
+@item
+@command{nncp-cfgenc} ошибочно трижды спрашивал парольную фразу при шифровании.
+
+@item
+@command{nncp-stat} выводит сводку о частично скачанных пакетах.
+
+@item
+Обновлены зависимые библиотеки.
+
+@end itemize
+
@node Релиз 6.6.0
@subsection Релиз 6.6.0
@itemize
See also this page @ref{Новости, in Russian}.
+@node Release 7.0.0
+@section Release 7.0.0
+@itemize
+
+@item
+Merkle Tree-based Hashing with BLAKE3 (MTH) is used instead of BLAKE2b.
+Because of that, there are backward @strong{incompatible} changes to
+encrypted files (everything lying in the spool directory) and to the
+@file{.meta} files of chunked transfer.
+
+The current implementation is far from optimal: it lacks parallelized
+calculations and has higher memory consumption, nearly 512 KiB for each
+1 GiB of file data. Future performance and memory optimizations should
+not lead to a packet format change. But it is still several times
+faster than BLAKE2b.
+
+@item
+Because of MTH, resumed online downloads require reading only the
+preceding part of the file, not the whole one as before.
+
+@item
+The @command{nncp-hash} utility appeared for calculating a file's MTH hash.
+
+@item
+BLAKE2 KDF and XOF functions are replaced with BLAKE3 in encrypted
+packets, lowering the number of used primitives. Also, the encrypted
+packet's header is now used as associated data during encryption.
+
+@item
+MultiCast Discovery uses
+@verb{|ff02::4e4e:4350|} address instead of @verb{|ff02::1|}.
+
+@item
+@command{nncp-cfgenc} mistakenly asked passphrase three times during encryption.
+
+@item
+@command{nncp-stat} prints a summary of partly downloaded packets.
+
+@item
+Updated dependencies.
+
+@end itemize
+
@node Release 6.6.0
@section Release 6.6.0
@itemize
@headitem @tab XDR type @tab Value
@item Magic number @tab
8-byte, fixed length opaque data @tab
- @verb{|N N C P E 0x00 0x00 0x04|}
+ @verb{|N N C P E 0x00 0x00 0x05|}
@item Niceness @tab
unsigned integer @tab
1-255, packet @ref{Niceness, niceness} level
Ephemeral curve25519 public key
@item Signature @tab
64-byte, fixed length opaque data @tab
- ed25519 signature for that packet's header
+ ed25519 signature over all previous fields of the packet's header
@end multitable
-Signature is calculated over all previous fields.
-
All following encryption is done in AEAD mode using
@url{https://cr.yp.to/chacha.html, ChaCha20}-@url{https://en.wikipedia.org/wiki/Poly1305, Poly1305}
-algorithms. Data is splitted on 128 KiB blocks. Each block is encrypted with
-increasing nonce counter.
-
-Authenticated and encrypted size come after the header:
-
-@multitable @columnfractions 0.2 0.3 0.5
-@headitem @tab XDR type @tab Value
-@item Size @tab
- unsigned hyper integer @tab
- Payload size.
-@end multitable
+algorithms. Authenticated data is the BLAKE3-256 hash of the unsigned
+portion of the header (the same data used in the signature). Size is an
+XDR-encoded unsigned hyper integer, carrying the payload size, encrypted
+with the zero nonce as a single AEAD block (with the tag), independently
+from the following blocks.
-Then comes the actual payload.
+Payload with possible padding is divided into 128 KiB blocks. They are
+encrypted with the same authenticated data and an increasing big-endian
+64-bit nonce, starting at 1.
Each node has static @strong{exchange} and @strong{signature} keypairs.
When node A wants to send an encrypted packet to node B, it:
@item takes remote node's exchange public key and performs
Diffie-Hellman computation on this remote static public key and
private ephemeral one
-@item derive the keys:
- @enumerate
- @item initialize @url{https://blake2.net/, BLAKE2Xb} XOF with
- derived ephemeral key and 96-byte output length
- @item feed @verb{|N N C P E 0x00 0x00 0x04|} magic number to XOF
- @item read 32-bytes of "size" AEAD encryption key
- @item read 32-bytes of payload AEAD encryption key
- @item optionally read 32-bytes pad generation key
- @end enumerate
+@item derives a 32-byte AEAD encryption key with the BLAKE3 key
+  derivation function. Source key is the derived ephemeral key. Context
+  is the @verb{|N N C P E 0x00 0x00 0x05|} magic number
+@item calculates the authenticated data: it is the BLAKE3-256 hash of
+  the unsigned header (the same one used for signing)
@item encrypts size, appends its authenticated ciphertext to the header
-@item encrypts payload, appends its authenticated ciphertext
+  (with authenticated data, nonce=0)
+@item encrypts each payload block, appending its authenticated ciphertext
+  (with authenticated data, nonce starting at 1, increasing with each block)
@item possibly appends any kind of "junk" noise data to hide real
- payload's size from the adversary (generated using XOF with
- unlimited output length)
+ payload's size from the adversary (generated using BLAKE3 XOF, with
+ the key derived from the ephemeral one and context string of
+ @verb{|N N C P E 0x00 0x00 0x05 <SP> P A D|})
@end enumerate
@item LYT64MWSNDK34CVYOO7TA6ZCJ3NWI2OUDBBMX2A4QWF34FIRY4DQ
is an example @ref{Encrypted, encrypted packet}. Its filename is Base32
-encoded BLAKE2b hash of the whole contents. It can be integrity checked
+encoded @ref{MTH} hash of the whole contents. It can be integrity checked
anytime.
@item LYT64MWSNDK34CVYOO7TA6ZCJ3NWI2OUDBBMX2A4QWF34FIRY4DQ.part
github.com/flynn/noise/vector* \
github.com/gorhill/cronexpr/APLv2 \
github.com/hjson/hjson-go/build_release.sh \
- github.com/klauspost/compress/snappy \
+ github.com/golang/snappy \
github.com/klauspost/compress/zstd/snappy.go \
golang.org/x/sys/plan9 \
golang.org/x/sys/windows
PORTNAME= nncp
-DISTVERSION= 6.5.0
+DISTVERSION= 7.0.0
CATEGORIES= net
MASTER_SITES= http://www.nncpgo.org/download/
bin/nncp-exec
bin/nncp-file
bin/nncp-freq
+bin/nncp-hash
bin/nncp-log
bin/nncp-pkt
bin/nncp-reass
onlineDeadline, maxOnlineTime time.Duration,
listOnly bool,
noCK bool,
- onlyPkts map[[32]byte]bool,
+ onlyPkts map[[MTHSize]byte]bool,
) (isGood bool) {
for _, addr := range addrs {
les := LEs{{"Node", node.Id}, {"Addr", addr}}
node.Name,
int(state.Duration.Hours()),
int(state.Duration.Minutes()),
- int(state.Duration.Seconds()),
+		int(state.Duration.Seconds())%60,
humanize.IBytes(uint64(state.RxBytes)),
humanize.IBytes(uint64(state.RxSpeed)),
humanize.IBytes(uint64(state.TxBytes)),
func CfgParse(data []byte) (*Ctx, error) {
var err error
- if bytes.Compare(data[:8], MagicNNCPBv3[:]) == 0 {
+ if bytes.Compare(data[:8], MagicNNCPBv3.B[:]) == 0 {
os.Stderr.WriteString("Passphrase:") // #nosec G104
password, err := term.ReadPassword(0)
if err != nil {
if err != nil {
return nil, err
}
+ } else if bytes.Compare(data[:8], MagicNNCPBv2.B[:]) == 0 {
+ log.Fatalln(MagicNNCPBv2.TooOld())
+ } else if bytes.Compare(data[:8], MagicNNCPBv1.B[:]) == 0 {
+ log.Fatalln(MagicNNCPBv1.TooOld())
}
var cfgGeneral map[string]interface{}
if err = hjson.Unmarshal(data, &cfgGeneral); err != nil {
"errors"
"fmt"
"io"
- "log"
"os"
"path/filepath"
-
- "golang.org/x/crypto/blake2b"
)
const NoCKSuffix = ".nock"
-func Check(src io.Reader, checksum []byte, les LEs, showPrgrs bool) (bool, error) {
- hsh, err := blake2b.New256(nil)
- if err != nil {
- log.Fatalln(err)
- }
- if _, err = CopyProgressed(hsh, bufio.NewReader(src), "check", les, showPrgrs); err != nil {
+func Check(
+ src io.Reader,
+ size int64,
+ checksum []byte,
+ les LEs,
+ showPrgrs bool,
+) (bool, error) {
+ hsh := MTHNew(size, 0)
+	if _, err := CopyProgressed(hsh, bufio.NewReaderSize(src, MTHBlockSize), "check", les, showPrgrs); err != nil {
return false, err
}
return bytes.Compare(hsh.Sum(nil), checksum) == 0, nil
ctx.LogE("checking", les, err, logMsg)
return true
}
- gut, err := Check(fd, job.HshValue[:], les, ctx.ShowPrgrs)
+ gut, err := Check(fd, job.Size, job.HshValue[:], les, ctx.ShowPrgrs)
fd.Close() // #nosec G104
if err != nil {
ctx.LogE("checking", les, err, logMsg)
return !(ctx.checkXxIsBad(nodeId, TRx) || ctx.checkXxIsBad(nodeId, TTx))
}
-func (ctx *Ctx) CheckNoCK(nodeId *NodeId, hshValue *[32]byte) (int64, error) {
+func (ctx *Ctx) CheckNoCK(nodeId *NodeId, hshValue *[MTHSize]byte, mth *MTH) (int64, error) {
dirToSync := filepath.Join(ctx.Spool, nodeId.String(), string(TRx))
pktName := Base32Codec.EncodeToString(hshValue[:])
pktPath := filepath.Join(dirToSync, pktName)
if err != nil {
return 0, err
}
- defer fd.Close()
size := fi.Size()
les := LEs{
{"XX", string(TRx)},
{"Pkt", pktName},
{"FullSize", size},
}
- gut, err := Check(fd, hshValue[:], les, ctx.ShowPrgrs)
+ var gut bool
+ if mth == nil {
+ gut, err = Check(fd, size, hshValue[:], les, ctx.ShowPrgrs)
+ } else {
+ mth.PktName = pktName
+		if _, err = mth.PrependFrom(bufio.NewReaderSize(fd, MTHBlockSize)); err != nil {
+ return 0, err
+ }
+ if bytes.Compare(mth.Sum(nil), hshValue[:]) == 0 {
+ gut = true
+ }
+ }
if err != nil || !gut {
return 0, errors.New("checksum mismatch")
}
package nncp
var (
- MagicNNCPMv1 [8]byte = [8]byte{'N', 'N', 'C', 'P', 'M', 0, 0, 1}
-
ChunkedSuffixMeta = ".nncp.meta"
ChunkedSuffixPart = ".nncp.chunk"
)
Magic [8]byte
FileSize uint64
ChunkSize uint64
- Checksums [][32]byte
+ Checksums [][MTHSize]byte
}
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6"
- "golang.org/x/crypto/blake2b"
+ "go.cypherpunks.ru/nncp/v7"
)
const (
)
continue
}
- if pktEnc.Magic != nncp.MagicNNCPEv4 {
+ switch pktEnc.Magic {
+ case nncp.MagicNNCPEv1.B:
+ err = nncp.MagicNNCPEv1.TooOld()
+ case nncp.MagicNNCPEv2.B:
+ err = nncp.MagicNNCPEv2.TooOld()
+ case nncp.MagicNNCPEv3.B:
+ err = nncp.MagicNNCPEv3.TooOld()
+ case nncp.MagicNNCPEv4.B:
+ err = nncp.MagicNNCPEv4.TooOld()
+ case nncp.MagicNNCPEv5.B:
+ default:
+ err = errors.New("Bad packet magic number")
+ }
+ if err != nil {
ctx.LogD(
"bundle-rx",
- append(les, nncp.LE{K: "Err", V: "Bad packet magic number"}),
+ append(les, nncp.LE{K: "Err", V: err.Error()}),
logMsg,
)
continue
})
continue
}
- hsh, err := blake2b.New256(nil)
- if err != nil {
- log.Fatalln("Error during hasher creation:", err)
- }
+ hsh := nncp.MTHNew(entry.Size, 0)
if _, err = hsh.Write(pktEncBuf); err != nil {
log.Fatalln("Error during writing:", err)
}
}
if *doCheck {
if *dryRun {
- hsh, err := blake2b.New256(nil)
- if err != nil {
- log.Fatalln("Error during hasher creation:", err)
- }
+ hsh := nncp.MTHNew(entry.Size, 0)
if _, err = hsh.Write(pktEncBuf); err != nil {
log.Fatalln("Error during writing:", err)
}
"strings"
"time"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
close(autoTossFinish)
badCode = (<-autoTossBadCode) || badCode
}
+ nncp.SPCheckerWg.Wait()
if badCode {
os.Exit(1)
}
"sync"
"time"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
}
}
wg.Wait()
+ nncp.SPCheckerWg.Wait()
}
"os"
xdr "github.com/davecgh/go-xdr/xdr2"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
"golang.org/x/crypto/blake2b"
"golang.org/x/term"
)
if _, err := xdr.Unmarshal(bytes.NewReader(data), &eblob); err != nil {
log.Fatalln(err)
}
- if eblob.Magic != nncp.MagicNNCPBv3 {
+ switch eblob.Magic {
+ case nncp.MagicNNCPBv1.B:
+ log.Fatalln(nncp.MagicNNCPBv1.TooOld())
+ case nncp.MagicNNCPBv2.B:
+ log.Fatalln(nncp.MagicNNCPBv2.TooOld())
+ case nncp.MagicNNCPBv3.B:
+ default:
log.Fatalln(errors.New("Unknown eblob type"))
}
fmt.Println("Strengthening function: Balloon with BLAKE2b-256")
}
os.Stderr.WriteString("Passphrase:") // #nosec G104
- password, err := term.ReadPassword(0)
+ password1, err := term.ReadPassword(0)
if err != nil {
log.Fatalln(err)
}
- os.Stderr.WriteString("\n") // #nosec G104
-
if *decrypt {
- cfgRaw, err := nncp.DeEBlob(data, password)
+ cfgRaw, err := nncp.DeEBlob(data, password1)
if err != nil {
log.Fatalln(err)
}
os.Stdout.Write(cfgRaw) // #nosec G104
return
}
-
- password1, err := term.ReadPassword(0)
- if err != nil {
- log.Fatalln(err)
- }
- os.Stderr.WriteString("\n") // #nosec G104
- os.Stderr.WriteString("Repeat passphrase:") // #nosec G104
+ os.Stderr.WriteString("\nRepeat passphrase:") // #nosec G104
password2, err := term.ReadPassword(0)
if err != nil {
log.Fatalln(err)
"os"
"github.com/hjson/hjson-go"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"log"
"os"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"os"
"path/filepath"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
}
if *nock {
for job := range ctx.JobsNoCK(node.Id) {
- if _, err = ctx.CheckNoCK(node.Id, job.HshValue); err != nil {
+ if _, err = ctx.CheckNoCK(node.Id, job.HshValue, nil); err != nil {
pktName := nncp.Base32Codec.EncodeToString(job.HshValue[:])
log.Println(filepath.Join(
ctx.Spool,
"time"
"github.com/gorhill/cronexpr"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"time"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
"golang.org/x/net/netutil"
)
func performSP(
ctx *nncp.Ctx,
conn nncp.ConnDeadlined,
+ addr string,
nice uint8,
noCK bool,
nodeIdC chan *nncp.NodeId,
ctx.LogI(
"call-started",
nncp.LEs{{K: "Node", V: state.Node.Id}},
- func(les nncp.LEs) string { return "Connection with " + state.Node.Name },
+ func(les nncp.LEs) string {
+ return fmt.Sprintf("Connection with %s (%s)", state.Node.Name, addr)
+ },
)
nodeIdC <- state.Node.Id
state.Wait()
state.Node.Name,
int(state.Duration.Hours()),
int(state.Duration.Minutes()),
- int(state.Duration.Seconds()),
+		int(state.Duration.Seconds())%60,
humanize.IBytes(uint64(state.RxBytes)),
humanize.IBytes(uint64(state.RxSpeed)),
humanize.IBytes(uint64(state.TxBytes)),
os.Stderr.Close() // #nosec G104
conn := &InetdConn{os.Stdin, os.Stdout}
nodeIdC := make(chan *nncp.NodeId)
- go performSP(ctx, conn, nice, *noCK, nodeIdC)
+ go performSP(ctx, conn, "PIPE", nice, *noCK, nodeIdC)
nodeId := <-nodeIdC
var autoTossFinish chan struct{}
var autoTossBadCode chan bool
)
go func(conn net.Conn) {
nodeIdC := make(chan *nncp.NodeId)
- go performSP(ctx, conn, nice, *noCK, nodeIdC)
+ go performSP(ctx, conn, conn.RemoteAddr().String(), nice, *noCK, nodeIdC)
nodeId := <-nodeIdC
var autoTossFinish chan struct{}
var autoTossBadCode chan bool
"log"
"os"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"os"
"strings"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"path/filepath"
"strings"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
--- /dev/null
+/*
+NNCP -- Node to Node copy, utilities for store-and-forward data exchange
+Copyright (C) 2016-2021 Sergey Matveev <stargrave@stargrave.org>
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation, version 3 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program. If not, see <http://www.gnu.org/licenses/>.
+*/
+
+// Calculate MTH hash of the file
+package main
+
+import (
+ "bufio"
+ "encoding/hex"
+ "flag"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "sync"
+
+ "go.cypherpunks.ru/nncp/v7"
+)
+
+func usage() {
+ fmt.Fprintf(os.Stderr, nncp.UsageHeader())
+ fmt.Fprintf(os.Stderr, "nncp-hash -- calculate MTH hash of the file\n\n")
+ fmt.Fprintf(os.Stderr, "Usage: %s [-file ...] [-seek X] [-debug] [-progress] [options]\nOptions:\n", os.Args[0])
+ flag.PrintDefaults()
+}
+
+func main() {
+ var (
+ fn = flag.String("file", "", "Read the file instead of stdin")
+ seek = flag.Uint64("seek", 0, "Seek the file, hash, rewind, hash remaining")
+ showPrgrs = flag.Bool("progress", false, "Progress showing")
+ debug = flag.Bool("debug", false, "Print MTH steps calculations")
+ version = flag.Bool("version", false, "Print version information")
+ warranty = flag.Bool("warranty", false, "Print warranty information")
+ )
+ log.SetFlags(log.Lshortfile)
+ flag.Usage = usage
+ flag.Parse()
+ if *warranty {
+ fmt.Println(nncp.Warranty)
+ return
+ }
+ if *version {
+ fmt.Println(nncp.VersionGet())
+ return
+ }
+
+ fd := os.Stdin
+ var err error
+ var size int64
+ if *fn == "" {
+ *showPrgrs = false
+ } else {
+ fd, err = os.Open(*fn)
+ if err != nil {
+ log.Fatalln(err)
+ }
+ fi, err := fd.Stat()
+ if err != nil {
+ log.Fatalln(err)
+ }
+ size = fi.Size()
+ }
+ mth := nncp.MTHNew(size, int64(*seek))
+ var debugger sync.WaitGroup
+ if *debug {
+ fmt.Println("Leaf BLAKE3 key:", hex.EncodeToString(nncp.MTHLeafKey[:]))
+ fmt.Println("Node BLAKE3 key:", hex.EncodeToString(nncp.MTHNodeKey[:]))
+ mth.Events = make(chan nncp.MTHEvent)
+ debugger.Add(1)
+ go func() {
+ for e := range mth.Events {
+ var t string
+ switch e.Type {
+ case nncp.MTHEventAppend:
+ t = "Add"
+ case nncp.MTHEventPrepend:
+ t = "Pre"
+ case nncp.MTHEventFold:
+ t = "Fold"
+ }
+ fmt.Printf(
+ "%s\t%03d\t%06d\t%s\n",
+ t, e.Level, e.Ctr, hex.EncodeToString(e.Hsh),
+ )
+ }
+ debugger.Done()
+ }()
+ }
+ if *seek != 0 {
+ if *fn == "" {
+ log.Fatalln("-file is required with -seek")
+ }
+ if _, err = fd.Seek(int64(*seek), io.SeekStart); err != nil {
+ log.Fatalln(err)
+ }
+ }
+ if _, err = nncp.CopyProgressed(
+ mth, bufio.NewReaderSize(fd, nncp.MTHBlockSize),
+ "hash", nncp.LEs{{K: "Pkt", V: *fn}, {K: "FullSize", V: size - int64(*seek)}},
+ *showPrgrs,
+ ); err != nil {
+ log.Fatalln(err)
+ }
+ if *seek != 0 {
+ if _, err = fd.Seek(0, io.SeekStart); err != nil {
+ log.Fatalln(err)
+ }
+ if *showPrgrs {
+ mth.PktName = *fn
+ }
+ if _, err = mth.PrependFrom(bufio.NewReaderSize(fd, nncp.MTHBlockSize)); err != nil {
+ log.Fatalln(err)
+ }
+ }
+ sum := mth.Sum(nil)
+ debugger.Wait()
+ fmt.Println(hex.EncodeToString(sum))
+}
"log"
"os"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
"go.cypherpunks.ru/recfile"
)
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/klauspost/compress/zstd"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
fmt.Fprintln(os.Stderr, "Packet is read from stdin.")
}
+func doPlain(pkt nncp.Pkt, dump, decompress bool) {
+ if dump {
+ bufW := bufio.NewWriter(os.Stdout)
+ var r io.Reader
+ r = bufio.NewReader(os.Stdin)
+ if decompress {
+ decompressor, err := zstd.NewReader(r)
+ if err != nil {
+ log.Fatalln(err)
+ }
+ r = decompressor
+ }
+ if _, err := io.Copy(bufW, r); err != nil {
+ log.Fatalln(err)
+ }
+ if err := bufW.Flush(); err != nil {
+ log.Fatalln(err)
+ }
+ return
+ }
+ payloadType := "unknown"
+ switch pkt.Type {
+ case nncp.PktTypeFile:
+ payloadType = "file"
+ case nncp.PktTypeFreq:
+ payloadType = "file request"
+ case nncp.PktTypeExec:
+ payloadType = "exec compressed"
+ case nncp.PktTypeExecFat:
+ payloadType = "exec uncompressed"
+ case nncp.PktTypeTrns:
+ payloadType = "transitional"
+ }
+ var path string
+ switch pkt.Type {
+ case nncp.PktTypeExec, nncp.PktTypeExecFat:
+ path = string(bytes.Replace(
+ pkt.Path[:pkt.PathLen],
+ []byte{0},
+ []byte(" "),
+ -1,
+ ))
+ case nncp.PktTypeTrns:
+ path = nncp.Base32Codec.EncodeToString(pkt.Path[:pkt.PathLen])
+ default:
+ path = string(pkt.Path[:pkt.PathLen])
+ }
+ fmt.Printf(
+ "Packet type: plain\nPayload type: %s\nNiceness: %s (%d)\nPath: %s\n",
+ payloadType, nncp.NicenessFmt(pkt.Nice), pkt.Nice, path,
+ )
+ return
+}
+
+func doEncrypted(pktEnc nncp.PktEnc, dump bool, cfgPath string, beginning []byte) {
+ if !dump {
+ fmt.Printf(
+ "Packet type: encrypted\nNiceness: %s (%d)\nSender: %s\nRecipient: %s\n",
+ nncp.NicenessFmt(pktEnc.Nice), pktEnc.Nice, pktEnc.Sender, pktEnc.Recipient,
+ )
+ return
+ }
+ ctx, err := nncp.CtxFromCmdline(cfgPath, "", "", false, false, false, false)
+ if err != nil {
+ log.Fatalln("Error during initialization:", err)
+ }
+ if ctx.Self == nil {
+ log.Fatalln("Config lacks private keys")
+ }
+ bufW := bufio.NewWriter(os.Stdout)
+ if _, _, err = nncp.PktEncRead(
+ ctx.Self,
+ ctx.Neigh,
+ io.MultiReader(
+ bytes.NewReader(beginning),
+ bufio.NewReader(os.Stdin),
+ ),
+ bufW,
+ ); err != nil {
+ log.Fatalln(err)
+ }
+ if err = bufW.Flush(); err != nil {
+ log.Fatalln(err)
+ }
+}
+
func main() {
var (
overheads = flag.Bool("overheads", false, "Print packet overheads")
return
}
- var err error
beginning := make([]byte, nncp.PktOverhead)
- if _, err = io.ReadFull(os.Stdin, beginning); err != nil {
+ if _, err := io.ReadFull(os.Stdin, beginning); err != nil {
log.Fatalln("Not enough data to read")
}
var pkt nncp.Pkt
- _, err = xdr.Unmarshal(bytes.NewReader(beginning), &pkt)
- if err == nil && pkt.Magic == nncp.MagicNNCPPv3 {
- if *dump {
- bufW := bufio.NewWriter(os.Stdout)
- var r io.Reader
- r = bufio.NewReader(os.Stdin)
- if *decompress {
- decompressor, err := zstd.NewReader(r)
- if err != nil {
- log.Fatalln(err)
- }
- r = decompressor
- }
- if _, err = io.Copy(bufW, r); err != nil {
- log.Fatalln(err)
- }
- if err = bufW.Flush(); err != nil {
- log.Fatalln(err)
- }
+ if _, err := xdr.Unmarshal(bytes.NewReader(beginning), &pkt); err == nil {
+ switch pkt.Magic {
+ case nncp.MagicNNCPPv1.B:
+ log.Fatalln(nncp.MagicNNCPPv1.TooOld())
+ case nncp.MagicNNCPPv2.B:
+ log.Fatalln(nncp.MagicNNCPPv2.TooOld())
+ case nncp.MagicNNCPPv3.B:
+ doPlain(pkt, *dump, *decompress)
return
}
- payloadType := "unknown"
- switch pkt.Type {
- case nncp.PktTypeFile:
- payloadType = "file"
- case nncp.PktTypeFreq:
- payloadType = "file request"
- case nncp.PktTypeExec:
- payloadType = "exec"
- case nncp.PktTypeExecFat:
- payloadType = "exec uncompressed"
- case nncp.PktTypeTrns:
- payloadType = "transitional"
- }
- var path string
- switch pkt.Type {
- case nncp.PktTypeExec, nncp.PktTypeExecFat:
- path = string(bytes.Replace(
- pkt.Path[:pkt.PathLen],
- []byte{0},
- []byte(" "),
- -1,
- ))
- case nncp.PktTypeTrns:
- path = nncp.Base32Codec.EncodeToString(pkt.Path[:pkt.PathLen])
- default:
- path = string(pkt.Path[:pkt.PathLen])
- }
- fmt.Printf(
- "Packet type: plain\nPayload type: %s\nNiceness: %s (%d)\nPath: %s\n",
- payloadType, nncp.NicenessFmt(pkt.Nice), pkt.Nice, path,
- )
- return
}
var pktEnc nncp.PktEnc
- _, err = xdr.Unmarshal(bytes.NewReader(beginning), &pktEnc)
- if err == nil && pktEnc.Magic == nncp.MagicNNCPEv4 {
- if *dump {
- ctx, err := nncp.CtxFromCmdline(*cfgPath, "", "", false, false, false, false)
- if err != nil {
- log.Fatalln("Error during initialization:", err)
- }
- if ctx.Self == nil {
- log.Fatalln("Config lacks private keys")
- }
- bufW := bufio.NewWriter(os.Stdout)
- if _, _, err = nncp.PktEncRead(
- ctx.Self,
- ctx.Neigh,
- io.MultiReader(
- bytes.NewReader(beginning),
- bufio.NewReader(os.Stdin),
- ),
- bufW,
- ); err != nil {
- log.Fatalln(err)
- }
- if err = bufW.Flush(); err != nil {
- log.Fatalln(err)
- }
+ if _, err := xdr.Unmarshal(bytes.NewReader(beginning), &pktEnc); err == nil {
+ switch pktEnc.Magic {
+ case nncp.MagicNNCPEv1.B:
+ log.Fatalln(nncp.MagicNNCPEv1.TooOld())
+ case nncp.MagicNNCPEv2.B:
+ log.Fatalln(nncp.MagicNNCPEv2.TooOld())
+ case nncp.MagicNNCPEv3.B:
+ log.Fatalln(nncp.MagicNNCPEv3.TooOld())
+ case nncp.MagicNNCPEv4.B:
+ log.Fatalln(nncp.MagicNNCPEv4.TooOld())
+ case nncp.MagicNNCPEv5.B:
+ doEncrypted(pktEnc, *dump, *cfgPath, beginning)
return
}
- fmt.Printf(
- "Packet type: encrypted\nNiceness: %s (%d)\nSender: %s\nRecipient: %s\n",
- nncp.NicenessFmt(pktEnc.Nice), pktEnc.Nice, pktEnc.Sender, pktEnc.Recipient,
- )
- return
}
log.Fatalln("Unable to determine packet type")
}
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6"
- "golang.org/x/crypto/blake2b"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
return false
}
fd.Close() // #nosec G104
- if metaPkt.Magic != nncp.MagicNNCPMv1 {
+ if metaPkt.Magic == nncp.MagicNNCPMv1.B {
+ ctx.LogE("reass", les, nncp.MagicNNCPMv1.TooOld(), logMsg)
+ return false
+ }
+ if metaPkt.Magic != nncp.MagicNNCPMv2.B {
ctx.LogE("reass", les, nncp.BadMagic, logMsg)
return false
}
if err != nil {
log.Fatalln("Can not stat file:", err)
}
- hsh, err = blake2b.New256(nil)
- if err != nil {
- log.Fatalln(err)
- }
+ hsh = nncp.MTHNew(fi.Size(), 0)
if _, err = nncp.CopyProgressed(
hsh, bufio.NewReader(fd), "check",
nncp.LEs{{K: "Pkt", V: chunkPath}, {K: "FullSize", V: fi.Size()}},
"strings"
"time"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"sort"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
flag.PrintDefaults()
}
-func jobPrint(xx nncp.TRxTx, job nncp.Job) {
+func jobPrint(xx nncp.TRxTx, job nncp.Job, suffix string) {
fmt.Printf(
- "\t%s %s %s (nice: %s)\n",
+ "\t%s %s%s %s (nice: %s)\n",
string(xx),
- nncp.Base32Codec.EncodeToString(job.HshValue[:]),
+ nncp.Base32Codec.EncodeToString(job.HshValue[:]), suffix,
humanize.IBytes(uint64(job.Size)),
nncp.NicenessFmt(job.PktEnc.Nice),
)
rxBytes := make(map[uint8]int64)
noCKNums := make(map[uint8]int)
noCKBytes := make(map[uint8]int64)
+ partNums := 0
+ partBytes := int64(0)
for job := range ctx.Jobs(node.Id, nncp.TRx) {
if *showPkt {
- jobPrint(nncp.TRx, job)
+ jobPrint(nncp.TRx, job, "")
}
rxNums[job.PktEnc.Nice] = rxNums[job.PktEnc.Nice] + 1
rxBytes[job.PktEnc.Nice] = rxBytes[job.PktEnc.Nice] + job.Size
}
for job := range ctx.JobsNoCK(node.Id) {
if *showPkt {
- jobPrint(nncp.TRx, job)
+ jobPrint(nncp.TRx, job, ".nock")
}
noCKNums[job.PktEnc.Nice] = noCKNums[job.PktEnc.Nice] + 1
noCKBytes[job.PktEnc.Nice] = noCKBytes[job.PktEnc.Nice] + job.Size
}
+ for job := range ctx.JobsPart(node.Id) {
+ if *showPkt {
+ fmt.Printf(
+ "\t%s %s.part %s\n",
+ string(nncp.TRx),
+ nncp.Base32Codec.EncodeToString(job.HshValue[:]),
+ humanize.IBytes(uint64(job.Size)),
+ )
+ }
+ partNums++
+ partBytes += job.Size
+ }
txNums := make(map[uint8]int)
txBytes := make(map[uint8]int64)
for job := range ctx.Jobs(node.Id, nncp.TTx) {
if *showPkt {
- jobPrint(nncp.TTx, job)
+ jobPrint(nncp.TTx, job, "")
}
txNums[job.PktEnc.Nice] = txNums[job.PktEnc.Nice] + 1
txBytes[job.PktEnc.Nice] = txBytes[job.PktEnc.Nice] + job.Size
}
var nice uint8
+ if partNums > 0 {
+ fmt.Printf(
+ "\tpart: % 10s, % 3d pkts\n",
+ humanize.IBytes(uint64(partBytes)), partNums,
+ )
+ }
for nice = 1; nice > 0; nice++ {
rxNum, rxExists := rxNums[nice]
txNum, txExists := txNums[nice]
"os"
"time"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
"path/filepath"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6"
+ "go.cypherpunks.ru/nncp/v7"
)
func usage() {
continue
}
pktEnc, pktEncRaw, err := ctx.HdrRead(fd)
- if err != nil || pktEnc.Magic != nncp.MagicNNCPEv4 {
- ctx.LogD("xfer-rx-not-packet", les, func(les nncp.LEs) string {
- return logMsg(les) + ": is not a packet"
- })
+ if err == nil {
+ switch pktEnc.Magic {
+ case nncp.MagicNNCPEv1.B:
+ err = nncp.MagicNNCPEv1.TooOld()
+ case nncp.MagicNNCPEv2.B:
+ err = nncp.MagicNNCPEv2.TooOld()
+ case nncp.MagicNNCPEv3.B:
+ err = nncp.MagicNNCPEv3.TooOld()
+ case nncp.MagicNNCPEv4.B:
+ err = nncp.MagicNNCPEv4.TooOld()
+ case nncp.MagicNNCPEv5.B:
+ default:
+ err = errors.New("is not an encrypted packet")
+ }
+ }
+ if err != nil {
+ ctx.LogD(
+ "xfer-rx-not-packet",
+ append(les, nncp.LE{K: "Err", V: err}),
+ func(les nncp.LEs) string {
+ return logMsg(les) + ": not valid packet: " + err.Error()
+ },
+ )
fd.Close() // #nosec G104
continue
}
DefaultP = 2
)
-var (
- MagicNNCPBv3 [8]byte = [8]byte{'N', 'N', 'C', 'P', 'B', 0, 0, 3}
-)
-
type EBlob struct {
Magic [8]byte
SCost uint32
return nil, err
}
eblob := EBlob{
- Magic: MagicNNCPBv3,
+ Magic: MagicNNCPBv3.B,
SCost: uint32(sCost),
TCost: uint32(tCost),
PCost: uint32(pCost),
if _, err = xdr.Unmarshal(bytes.NewReader(eblobRaw), &eblob); err != nil {
return nil, err
}
- if eblob.Magic != MagicNNCPBv3 {
- return nil, BadMagic
+ switch eblob.Magic {
+ case MagicNNCPBv1.B:
+ err = MagicNNCPBv1.TooOld()
+ case MagicNNCPBv2.B:
+ err = MagicNNCPBv2.TooOld()
+ case MagicNNCPBv3.B:
+ default:
+ err = BadMagic
+ }
+ if err != nil {
+ return nil, err
}
key := balloon.H(
blake256,
-module go.cypherpunks.ru/nncp/v6
+module go.cypherpunks.ru/nncp/v7
require (
github.com/davecgh/go-xdr v0.0.0-20161123171359-e6a2ba005892
github.com/dustin/go-humanize v1.0.0
- github.com/flynn/noise v0.0.0-20180327030543-2492fe189ae6
+ github.com/flynn/noise v1.0.0
github.com/gorhill/cronexpr v0.0.0-20180427100037-88b0669f7d75
github.com/hjson/hjson-go v3.1.0+incompatible
- github.com/klauspost/compress v1.11.7
- github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
+ github.com/klauspost/compress v1.13.1
go.cypherpunks.ru/balloon v1.1.1
go.cypherpunks.ru/recfile v0.4.3
- golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83
- golang.org/x/net v0.0.0-20210220033124-5f55cee0dc0d
- golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43
- golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d
- gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b // indirect
+ golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e
+ golang.org/x/net v0.0.0-20210614182718-04defd469f4e
+ golang.org/x/sys v0.0.0-20210616094352-59db8d763f22
+ golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b
+ lukechampine.com/blake3 v1.1.5
)
go 1.12
github.com/davecgh/go-xdr v0.0.0-20161123171359-e6a2ba005892/go.mod h1:CTDl0pzVzE5DEzZhPfvhY/9sPFMQIxaJ9VAMs9AagrE=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
-github.com/flynn/noise v0.0.0-20180327030543-2492fe189ae6 h1:u/UEqS66A5ckRmS4yNpjmVH56sVtS/RfclBAYocb4as=
-github.com/flynn/noise v0.0.0-20180327030543-2492fe189ae6/go.mod h1:1i71OnUq3iUe1ma7Lr6yG6/rjvM3emb6yoL7xLFzcVQ=
+github.com/flynn/noise v1.0.0 h1:DlTHqmzmvcEiKj+4RYo/imoswx/4r6iBlCMfVtrMXpQ=
+github.com/flynn/noise v1.0.0/go.mod h1:xbMo+0i6+IGbYdJhF31t2eR1BIU0CYc12+BNAKwUTag=
+github.com/golang/snappy v0.0.3 h1:fHPg5GQYlCeLIPB9BZqMVR5nR9A+IM5zcgeTdjMYmLA=
+github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/gorhill/cronexpr v0.0.0-20180427100037-88b0669f7d75 h1:f0n1xnMSmBLzVfsMMvriDyA75NB/oBgILX2GcHXIQzY=
github.com/gorhill/cronexpr v0.0.0-20180427100037-88b0669f7d75/go.mod h1:g2644b03hfBX9Ov0ZBDgXXens4rxSxmqFBbhvKv2yVA=
github.com/hjson/hjson-go v3.1.0+incompatible h1:DY/9yE8ey8Zv22bY+mHV1uk2yRy0h8tKhZ77hEdi0Aw=
github.com/hjson/hjson-go v3.1.0+incompatible/go.mod h1:qsetwF8NlsTsOTwZTApNlTCerV+b2GjYRRcIk4JMFio=
-github.com/klauspost/compress v1.11.7 h1:0hzRabrMN4tSTvMfnL3SCv1ZGeAP23ynzodBgaHeMeg=
-github.com/klauspost/compress v1.11.7/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
+github.com/klauspost/compress v1.13.1 h1:wXr2uRxZTJXHLly6qhJabee5JqIhTRoLBhDOA74hDEQ=
+github.com/klauspost/compress v1.13.1/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
+github.com/klauspost/cpuid v1.3.1 h1:5JNjFYYQrZeKRJ0734q51WCEEn2huer72Dc7K+R/b6s=
+github.com/klauspost/cpuid v1.3.1/go.mod h1:bYW4mA6ZgKPob1/Dlai2LviZJO7KGI3uoWLd42rAQw4=
+github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
+github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
-github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
-github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
go.cypherpunks.ru/balloon v1.1.1 h1:ypHM1DRf/XuCrp9pDkTHg00CqZX/Np/APb//iHvDJTA=
go.cypherpunks.ru/balloon v1.1.1/go.mod h1:k4s4ozrIrhpBjj78Z7LX8ZHxMQ+XE7DZUWl8gP2ojCo=
go.cypherpunks.ru/recfile v0.4.3 h1:ephokihmV//p0ob6gx2FWXvm28/NBDbWTOJPUNahxO8=
go.cypherpunks.ru/recfile v0.4.3/go.mod h1:sR+KajB+vzofL3SFVFwKt3Fke0FaCcN1g3YPNAhU3qI=
-golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
-golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83 h1:/ZScEX8SfEmUGRHs0gxpqteO5nfNW6axyZbBdw9A12g=
-golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
-golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
-golang.org/x/net v0.0.0-20210220033124-5f55cee0dc0d h1:1aflnvSoWWLI2k/dMUAl5lvU1YO4Mb4hz0gh+1rjcxU=
-golang.org/x/net v0.0.0-20210220033124-5f55cee0dc0d/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
-golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
+golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e h1:gsTQYXdTw2Gq7RBsWvlQ91b+aEQ6bXFUngBGuR8sPpI=
+golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210614182718-04defd469f4e h1:XpT3nA5TvE525Ne3hInMh6+GETgn27Zfm9dxsThnX2Q=
+golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43 h1:SgQ6LNaYJU0JIuEHv9+s6EbhSCwYeAf5Yvj6lpYlqAE=
-golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
+golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1 h1:SrN+KX8Art/Sf4HNj6Zcz06G7VEz+7w9tdXTPOZ7+l4=
+golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210616094352-59db8d763f22 h1:RqytpXGR1iVNX7psjB3ff8y7sNFinVFvkx1c8SjBkio=
+golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d h1:SZxvLBoTP5yHO3Frd4z4vrF+DBX9vMVanchswa69toE=
-golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b h1:9zKuko04nR4gjZ4+DNjHqRlAJqbJETHwiNKDqTfOjfE=
+golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
-gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b h1:QRR6H1YWRnHb4Y/HeNFCTJLFVxaq6wH4YuVdsUOr75U=
-gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
+lukechampine.com/blake3 v1.1.5 h1:hsACfxWvLdGmjYbWGrumQIphOvO+ZruZehWtgd2fxoM=
+lukechampine.com/blake3 v1.1.5/go.mod h1:hE8RpzdO8ttZ7446CXEwDP1eu2V4z7stv0Urj1El20g=
PktEnc *PktEnc
Path string
Size int64
- HshValue *[32]byte
+ HshValue *[MTHSize]byte
}
func (ctx *Ctx) HdrRead(fd *os.File) (*PktEnc, []byte, error) {
return err
}
-func (ctx *Ctx) jobsFind(nodeId *NodeId, xx TRxTx, nock bool) chan Job {
+func (ctx *Ctx) jobsFind(nodeId *NodeId, xx TRxTx, nock, part bool) chan Job {
rxPath := filepath.Join(ctx.Spool, nodeId.String(), string(xx))
jobs := make(chan Job, 16)
go func() {
hshValue, err = Base32Codec.DecodeString(
strings.TrimSuffix(name, NoCKSuffix),
)
+ } else if part {
+ if !strings.HasSuffix(name, PartSuffix) ||
+ len(name) != Base32Encoded32Len+len(PartSuffix) {
+ continue
+ }
+ hshValue, err = Base32Codec.DecodeString(
+ strings.TrimSuffix(name, PartSuffix),
+ )
} else {
if len(name) != Base32Encoded32Len {
continue
pth := filepath.Join(rxPath, name)
hdrExists := true
var fd *os.File
- if nock {
+ if nock || part {
fd, err = os.Open(pth)
} else {
fd, err = os.Open(pth + HdrSuffix)
if err != nil {
continue
}
+ if part {
+ job := Job{
+ Path: pth,
+ Size: fi.Size(),
+ HshValue: new([MTHSize]byte),
+ }
+ copy(job.HshValue[:], hshValue)
+ jobs <- job
+ continue
+ }
pktEnc, pktEncRaw, err := ctx.HdrRead(fd)
fd.Close()
- if err != nil || pktEnc.Magic != MagicNNCPEv4 {
+ if err != nil {
+ continue
+ }
+ switch pktEnc.Magic {
+ case MagicNNCPEv1.B:
+ err = MagicNNCPEv1.TooOld()
+ case MagicNNCPEv2.B:
+ err = MagicNNCPEv2.TooOld()
+ case MagicNNCPEv3.B:
+ err = MagicNNCPEv3.TooOld()
+ case MagicNNCPEv4.B:
+ err = MagicNNCPEv4.TooOld()
+ case MagicNNCPEv5.B:
+ default:
+ err = BadMagic
+ }
+ if err != nil {
+ ctx.LogE("job", LEs{
+ {"XX", string(xx)},
+ {"Name", name},
+ {"Size", fi.Size()},
+ }, err, func(les LEs) string {
+ return fmt.Sprintf(
+ "Job %s/%s size: %s",
+ string(xx), name,
+ humanize.IBytes(uint64(fi.Size())),
+ )
+ })
continue
}
ctx.LogD("job", LEs{
PktEnc: pktEnc,
Path: pth,
Size: fi.Size(),
- HshValue: new([32]byte),
+ HshValue: new([MTHSize]byte),
}
copy(job.HshValue[:], hshValue)
jobs <- job
}
func (ctx *Ctx) Jobs(nodeId *NodeId, xx TRxTx) chan Job {
- return ctx.jobsFind(nodeId, xx, false)
+ return ctx.jobsFind(nodeId, xx, false, false)
}
func (ctx *Ctx) JobsNoCK(nodeId *NodeId) chan Job {
- return ctx.jobsFind(nodeId, TRx, true)
+ return ctx.jobsFind(nodeId, TRx, true, false)
+}
+
+func (ctx *Ctx) JobsPart(nodeId *NodeId) chan Job {
+ return ctx.jobsFind(nodeId, TRx, false, true)
}
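The new `jobsFind` dispatch above keys off the spool filename suffix: plain Base32 names are fully checked packets, `.nock` names await checksumming, and `.part` names are partial downloads. A self-contained sketch of that suffix dispatch (the suffix constants are local stand-ins for the package's `NoCKSuffix`/`PartSuffix`):

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical local copies of the spool suffixes used above.
const (
	noCKSuffix = ".nock"
	partSuffix = ".part"
)

// classify mirrors jobsFind's filename dispatch: which queue a
// spool entry belongs to, and the Base32 hash part of its name.
func classify(name string) (queue, hash string) {
	switch {
	case strings.HasSuffix(name, partSuffix):
		return "part", strings.TrimSuffix(name, partSuffix)
	case strings.HasSuffix(name, noCKSuffix):
		return "nock", strings.TrimSuffix(name, noCKSuffix)
	default:
		return "checked", name
	}
}

func main() {
	for _, n := range []string{"ABCD", "ABCD.nock", "ABCD.part"} {
		q, h := classify(n)
		fmt.Println(n, "->", q, h)
	}
}
```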
--- /dev/null
+/*
+NNCP -- Node to Node copy, utilities for store-and-forward data exchange
+Copyright (C) 2016-2021 Sergey Matveev <stargrave@stargrave.org>
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation, version 3 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program. If not, see <http://www.gnu.org/licenses/>.
+*/
+
+package nncp
+
+import "fmt"
+
+type Magic struct {
+ B [8]byte
+ Name string
+ Till string
+}
+
+var (
+ MagicNNCPBv1 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'B', 0, 0, 1},
+ Name: "NNCPBv1 (EBlob v1)", Till: "1.0",
+ }
+ MagicNNCPBv2 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'B', 0, 0, 2},
+ Name: "NNCPBv2 (EBlob v2)", Till: "3.4",
+ }
+ MagicNNCPBv3 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'B', 0, 0, 3},
+ Name: "NNCPBv3 (EBlob v3)", Till: "now",
+ }
+ MagicNNCPDv1 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'D', 0, 0, 1},
+ Name: "NNCPDv1 (multicast discovery v1)", Till: "now",
+ }
+ MagicNNCPEv1 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'E', 0, 0, 1},
+ Name: "NNCPEv1 (encrypted packet v1)", Till: "0.12",
+ }
+ MagicNNCPEv2 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'E', 0, 0, 2},
+ Name: "NNCPEv2 (encrypted packet v2)", Till: "1.0",
+ }
+ MagicNNCPEv3 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'E', 0, 0, 3},
+ Name: "NNCPEv3 (encrypted packet v3)", Till: "3.4",
+ }
+ MagicNNCPEv4 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'E', 0, 0, 4},
+ Name: "NNCPEv4 (encrypted packet v4)", Till: "6.6.0",
+ }
+ MagicNNCPEv5 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'E', 0, 0, 5},
+ Name: "NNCPEv5 (encrypted packet v5)", Till: "now",
+ }
+ MagicNNCPSv1 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'S', 0, 0, 1},
+ Name: "NNCPSv1 (sync protocol v1)", Till: "now",
+ }
+ MagicNNCPMv1 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'M', 0, 0, 1},
+ Name: "NNCPMv1 (chunked .meta v1)", Till: "6.6.0",
+ }
+ MagicNNCPMv2 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'M', 0, 0, 2},
+ Name: "NNCPMv2 (chunked .meta v2)", Till: "now",
+ }
+ MagicNNCPPv1 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'P', 0, 0, 1},
+ Name: "NNCPPv1 (plain packet v1)", Till: "2.0",
+ }
+ MagicNNCPPv2 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'P', 0, 0, 2},
+ Name: "NNCPPv2 (plain packet v2)", Till: "4.1",
+ }
+ MagicNNCPPv3 = Magic{
+ B: [8]byte{'N', 'N', 'C', 'P', 'P', 0, 0, 3},
+ Name: "NNCPPv3 (plain packet v3)", Till: "now",
+ }
+)
+
+func (m *Magic) TooOld() error {
+ return fmt.Errorf("%s format is unsupported (used till %s)", m.Name, m.Till)
+}
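The `Magic` table above pairs each 8-byte on-disk prefix with a human-readable name and the last release that wrote it, so readers can report `TooOld()` instead of a bare bad-magic error. A minimal sketch of that classification (the two magic values are copied locally so the example is self-contained):

```go
package main

import (
	"bytes"
	"fmt"
)

// Local stand-ins for two of the magic values defined above.
var (
	magicEv4 = [8]byte{'N', 'N', 'C', 'P', 'E', 0, 0, 4}
	magicEv5 = [8]byte{'N', 'N', 'C', 'P', 'E', 0, 0, 5}
)

// classifyMagic reports how a reader could react to the first
// 8 bytes of a packet: current format, obsolete format, or unknown.
func classifyMagic(hdr []byte) string {
	switch {
	case bytes.Equal(hdr, magicEv5[:]):
		return "current encrypted packet (v5)"
	case bytes.Equal(hdr, magicEv4[:]):
		return "obsolete encrypted packet (v4, used till 6.6.0)"
	default:
		return "unknown magic"
	}
}

func main() {
	fmt.Println(classifyMagic(magicEv5[:]))
	fmt.Println(classifyMagic([]byte("NNCPE\x00\x00\x04")))
}
```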
}
var (
- MagicNNCPDv1 [8]byte = [8]byte{'N', 'N', 'C', 'P', 'D', 0, 0, 1}
-
- mcdIP = net.ParseIP("ff02::1")
+ mcdIP = net.ParseIP("ff02::4e4e:4350")
mcdAddrLifetime = 2 * time.Minute
mcdPktSize int
})
continue
}
- if mcd.Magic != MagicNNCPDv1 {
+ if mcd.Magic != MagicNNCPDv1.B {
ctx.LogD("mcd", les, func(les LEs) string {
return fmt.Sprintf(
"MCD Rx %s/%d: unexpected magic: %s",
return err
}
var buf bytes.Buffer
- mcd := MCD{Magic: MagicNNCPDv1, Sender: ctx.Self.Id}
+ mcd := MCD{Magic: MagicNNCPDv1.B, Sender: ctx.Self.Id}
if _, err := xdr.Marshal(&buf, mcd); err != nil {
panic(err)
}
--- /dev/null
+/*
+NNCP -- Node to Node copy, utilities for store-and-forward data exchange
+Copyright (C) 2016-2021 Sergey Matveev <stargrave@stargrave.org>
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation, version 3 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program. If not, see <http://www.gnu.org/licenses/>.
+*/
+
+package nncp
+
+import (
+ "bytes"
+ "errors"
+ "io"
+
+ "lukechampine.com/blake3"
+)
+
+const (
+ MTHBlockSize = 128 * 1024
+ MTHSize = 32
+)
+
+var (
+ MTHLeafKey = blake3.Sum256([]byte("NNCP MTH LEAF"))
+ MTHNodeKey = blake3.Sum256([]byte("NNCP MTH NODE"))
+)
+
+type MTHEventType uint8
+
+const (
+ MTHEventAppend MTHEventType = iota
+ MTHEventPrepend MTHEventType = iota
+ MTHEventFold MTHEventType = iota
+)
+
+type MTHEvent struct {
+ Type MTHEventType
+ Level int
+ Ctr int
+ Hsh []byte
+}
+
+type MTH struct {
+ size int64
+ PrependSize int64
+ skip int64
+ skipped bool
+ hasher *blake3.Hasher
+ hashes [][MTHSize]byte
+ buf *bytes.Buffer
+ finished bool
+ Events chan MTHEvent
+ PktName string
+}
+
+func MTHNew(size, offset int64) *MTH {
+ mth := MTH{
+ hasher: blake3.New(MTHSize, MTHLeafKey[:]),
+ buf: bytes.NewBuffer(make([]byte, 0, 2*MTHBlockSize)),
+ }
+ if size == 0 {
+ return &mth
+ }
+ prepends := int(offset / MTHBlockSize)
+ skip := MTHBlockSize - (offset - int64(prepends)*MTHBlockSize)
+ if skip == MTHBlockSize {
+ skip = 0
+ } else if skip > 0 {
+ prepends++
+ }
+ prependSize := int64(prepends * MTHBlockSize)
+ if prependSize > size {
+ prependSize = size
+ }
+ if offset+skip > size {
+ skip = size - offset
+ }
+ mth.size = size
+ mth.PrependSize = prependSize
+ mth.skip = skip
+ mth.hashes = make([][MTHSize]byte, prepends, 1+size/MTHBlockSize)
+ return &mth
+}
+
+func (mth *MTH) Reset() { panic("not implemented") }
+
+func (mth *MTH) Size() int { return MTHSize }
+
+func (mth *MTH) BlockSize() int { return MTHBlockSize }
+
+func (mth *MTH) Write(data []byte) (int, error) {
+ if mth.finished {
+ return 0, errors.New("already Sum()ed")
+ }
+ n, err := mth.buf.Write(data)
+ if err != nil {
+ return n, err
+ }
+ if mth.skip > 0 && int64(mth.buf.Len()) >= mth.skip {
+ mth.buf.Next(int(mth.skip))
+ mth.skip = 0
+ }
+ for mth.buf.Len() >= MTHBlockSize {
+ if _, err = mth.hasher.Write(mth.buf.Next(MTHBlockSize)); err != nil {
+ return n, err
+ }
+ h := new([MTHSize]byte)
+ mth.hasher.Sum(h[:0])
+ mth.hasher.Reset()
+ mth.hashes = append(mth.hashes, *h)
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{
+ MTHEventAppend,
+ 0, len(mth.hashes) - 1,
+ mth.hashes[len(mth.hashes)-1][:],
+ }
+ }
+ }
+ return n, err
+}
+
+func (mth *MTH) PrependFrom(r io.Reader) (int, error) {
+ if mth.finished {
+ return 0, errors.New("already Sum()ed")
+ }
+ var err error
+ buf := make([]byte, MTHBlockSize)
+ var i, n, read int
+ fullsize := mth.PrependSize
+ les := LEs{{"Pkt", mth.PktName}, {"FullSize", fullsize}, {"Size", 0}}
+ for mth.PrependSize >= MTHBlockSize {
+ n, err = io.ReadFull(r, buf)
+ read += n
+ mth.PrependSize -= MTHBlockSize
+ if err != nil {
+ return read, err
+ }
+ if _, err = mth.hasher.Write(buf); err != nil {
+ panic(err)
+ }
+ mth.hasher.Sum(mth.hashes[i][:0])
+ mth.hasher.Reset()
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{MTHEventPrepend, 0, i, mth.hashes[i][:]}
+ }
+ if mth.PktName != "" {
+ les[len(les)-1].V = int64(read)
+ Progress("check", les)
+ }
+ i++
+ }
+ if mth.PrependSize > 0 {
+ n, err = io.ReadFull(r, buf[:mth.PrependSize])
+ read += n
+ if err != nil {
+ return read, err
+ }
+ if _, err = mth.hasher.Write(buf[:mth.PrependSize]); err != nil {
+ panic(err)
+ }
+ mth.hasher.Sum(mth.hashes[i][:0])
+ mth.hasher.Reset()
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{MTHEventPrepend, 0, i, mth.hashes[i][:]}
+ }
+ if mth.PktName != "" {
+ les[len(les)-1].V = fullsize
+ Progress("check", les)
+ }
+ }
+ return read, nil
+}
+
+func (mth *MTH) Sum(b []byte) []byte {
+ if mth.finished {
+ return append(b, mth.hashes[0][:]...)
+ }
+ if mth.buf.Len() > 0 {
+ b := mth.buf.Next(MTHBlockSize)
+ if _, err := mth.hasher.Write(b); err != nil {
+ panic(err)
+ }
+ h := new([MTHSize]byte)
+ mth.hasher.Sum(h[:0])
+ mth.hasher.Reset()
+ mth.hashes = append(mth.hashes, *h)
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{
+ MTHEventAppend,
+ 0, len(mth.hashes) - 1,
+ mth.hashes[len(mth.hashes)-1][:],
+ }
+ }
+ }
+ switch len(mth.hashes) {
+ case 0:
+ h := new([MTHSize]byte)
+ if _, err := mth.hasher.Write(nil); err != nil {
+ panic(err)
+ }
+ mth.hasher.Sum(h[:0])
+ mth.hasher.Reset()
+ mth.hashes = append(mth.hashes, *h)
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{MTHEventAppend, 0, 0, mth.hashes[0][:]}
+ }
+ fallthrough
+ case 1:
+ mth.hashes = append(mth.hashes, mth.hashes[0])
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{MTHEventAppend, 0, 1, mth.hashes[1][:]}
+ }
+ }
+ mth.hasher = blake3.New(MTHSize, MTHNodeKey[:])
+ level := 1
+ for len(mth.hashes) != 1 {
+ hashesUp := make([][MTHSize]byte, 0, 1+len(mth.hashes)/2)
+ pairs := (len(mth.hashes) / 2) * 2
+ for i := 0; i < pairs; i += 2 {
+ if _, err := mth.hasher.Write(mth.hashes[i][:]); err != nil {
+ panic(err)
+ }
+ if _, err := mth.hasher.Write(mth.hashes[i+1][:]); err != nil {
+ panic(err)
+ }
+ h := new([MTHSize]byte)
+ mth.hasher.Sum(h[:0])
+ mth.hasher.Reset()
+ hashesUp = append(hashesUp, *h)
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{
+ MTHEventFold,
+ level, len(hashesUp) - 1,
+ hashesUp[len(hashesUp)-1][:],
+ }
+ }
+ }
+ if len(mth.hashes)%2 == 1 {
+ hashesUp = append(hashesUp, mth.hashes[len(mth.hashes)-1])
+ if mth.Events != nil {
+ mth.Events <- MTHEvent{
+ MTHEventAppend,
+ level, len(hashesUp) - 1,
+ hashesUp[len(hashesUp)-1][:],
+ }
+ }
+ }
+ mth.hashes = hashesUp
+ level++
+ }
+ mth.finished = true
+ if mth.Events != nil {
+ close(mth.Events)
+ }
+ return append(b, mth.hashes[0][:]...)
+}
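`Sum` above folds the 128 KiB leaf hashes pairwise level by level, carrying an odd trailing hash up unchanged and duplicating the leaf of a single-block file. A stdlib sketch of the same fold, with SHA-256 standing in for the keyed BLAKE3 leaf/node hashes (an assumption to avoid the external blake3 module) and a tiny block size for demonstration:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const blockSize = 4 // tiny block size for demonstration (NNCP uses 128 KiB)

// merkle folds leaf hashes pairwise until one root remains,
// carrying an odd trailing hash up unchanged, as mth.Sum does.
func merkle(data []byte) [32]byte {
	var level [][32]byte
	for off := 0; off < len(data) || len(level) == 0; off += blockSize {
		end := off + blockSize
		if end > len(data) {
			end = len(data)
		}
		level = append(level, sha256.Sum256(data[off:end]))
	}
	if len(level) == 1 { // single leaf: duplicate it, as in Sum()'s case 1
		level = append(level, level[0])
	}
	for len(level) > 1 {
		var up [][32]byte
		for i := 0; i+1 < len(level); i += 2 {
			h := sha256.New()
			h.Write(level[i][:])
			h.Write(level[i+1][:])
			var sum [32]byte
			h.Sum(sum[:0])
			up = append(up, sum)
		}
		if len(level)%2 == 1 { // odd trailing hash is carried up unchanged
			up = append(up, level[len(level)-1])
		}
		level = up
	}
	return level[0]
}

func main() {
	fmt.Printf("%x\n", merkle([]byte("0123456789")))
}
```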
--- /dev/null
+/*
+NNCP -- Node to Node copy, utilities for store-and-forward data exchange
+Copyright (C) 2016-2021 Sergey Matveev <stargrave@stargrave.org>
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation, version 3 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program. If not, see <http://www.gnu.org/licenses/>.
+*/
+
+package nncp
+
+import (
+ "bytes"
+ "io"
+ "testing"
+ "testing/quick"
+
+ "lukechampine.com/blake3"
+)
+
+func TestMTHSymmetric(t *testing.T) {
+ xof := blake3.New(32, nil).XOF()
+ f := func(size uint32, offset uint32) bool {
+ size %= 2 * 1024 * 1024
+ data := make([]byte, int(size), int(size)+1)
+ if _, err := io.ReadFull(xof, data); err != nil {
+ panic(err)
+ }
+ if size == 0 {
+ return true
+ }
+ offset %= size
+
+ mth := MTHNew(int64(size), 0)
+ if _, err := io.Copy(mth, bytes.NewReader(data)); err != nil {
+ panic(err)
+ }
+ hsh0 := mth.Sum(nil)
+
+ mth = MTHNew(int64(size), int64(offset))
+ if _, err := io.Copy(mth, bytes.NewReader(data[int(offset):])); err != nil {
+ panic(err)
+ }
+ if _, err := mth.PrependFrom(bytes.NewReader(data)); err != nil {
+ panic(err)
+ }
+ if bytes.Compare(hsh0, mth.Sum(nil)) != 0 {
+ return false
+ }
+
+ mth = MTHNew(0, 0)
+ mth.Write(data)
+ if bytes.Compare(hsh0, mth.Sum(nil)) != 0 {
+ return false
+ }
+
+ data = append(data, 0)
+ mth = MTHNew(int64(size)+1, 0)
+ if _, err := io.Copy(mth, bytes.NewReader(data)); err != nil {
+ panic(err)
+ }
+ hsh00 := mth.Sum(nil)
+ if bytes.Compare(hsh0, hsh00) == 0 {
+ return false
+ }
+
+ mth = MTHNew(int64(size)+1, int64(offset))
+ if _, err := io.Copy(mth, bytes.NewReader(data[int(offset):])); err != nil {
+ panic(err)
+ }
+ if _, err := mth.PrependFrom(bytes.NewReader(data)); err != nil {
+ panic(err)
+ }
+ if bytes.Compare(hsh00, mth.Sum(nil)) != 0 {
+ return false
+ }
+
+ mth = MTHNew(0, 0)
+ mth.Write(data)
+ if bytes.Compare(hsh00, mth.Sum(nil)) != 0 {
+ return false
+ }
+
+ return true
+ }
+ if err := quick.Check(f, nil); err != nil {
+ t.Error(err)
+ }
+}
+
+func TestMTHNull(t *testing.T) {
+ mth := MTHNew(0, 0)
+ if _, err := mth.Write(nil); err != nil {
+ t.Error(err)
+ }
+ mth.Sum(nil)
+}
const Base32Encoded32Len = 52
var (
- Version string = "6.6.0"
+ Version string = "7.0.0"
Base32Codec *base32.Encoding = base32.StdEncoding.WithPadding(base32.NoPadding)
)
"io"
xdr "github.com/davecgh/go-xdr/xdr2"
- "golang.org/x/crypto/blake2b"
"golang.org/x/crypto/chacha20poly1305"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
"golang.org/x/crypto/nacl/box"
"golang.org/x/crypto/poly1305"
+ "lukechampine.com/blake3"
)
type PktType uint8
const (
EncBlkSize = 128 * (1 << 10)
- KDFXOFSize = chacha20poly1305.KeySize * 2
PktTypeFile PktType = iota
PktTypeFreq PktType = iota
)
var (
- MagicNNCPPv3 [8]byte = [8]byte{'N', 'N', 'C', 'P', 'P', 0, 0, 3}
- MagicNNCPEv4 [8]byte = [8]byte{'N', 'N', 'C', 'P', 'E', 0, 0, 4}
- BadMagic error = errors.New("Unknown magic number")
- BadPktType error = errors.New("Unknown packet type")
+ BadMagic error = errors.New("Unknown magic number")
+ BadPktType error = errors.New("Unknown packet type")
PktOverhead int64
PktEncOverhead int64
panic(err)
}
pktEnc := PktEnc{
- Magic: MagicNNCPEv4,
+ Magic: MagicNNCPEv5.B,
Sender: dummyId,
Recipient: dummyId,
}
return nil, errors.New("Too long path")
}
pkt := Pkt{
- Magic: MagicNNCPPv3,
+ Magic: MagicNNCPPv3.B,
Type: typ,
Nice: nice,
PathLen: uint8(len(path)),
return &pkt, nil
}
+func ctrIncr(b []byte) {
+ for i := len(b) - 1; i >= 0; i-- {
+ b[i]++
+ if b[i] != 0 {
+ return
+ }
+ }
+ panic("counter overflow")
+}
+
func aeadProcess(
aead cipher.AEAD,
- nonce []byte,
+ nonce, ad []byte,
doEncrypt bool,
r io.Reader,
w io.Writer,
) (int, error) {
- var blkCtr uint64
ciphCtr := nonce[len(nonce)-8:]
buf := make([]byte, EncBlkSize+aead.Overhead())
var toRead []byte
}
}
readBytes += n
- blkCtr++
- binary.BigEndian.PutUint64(ciphCtr, blkCtr)
+ ctrIncr(ciphCtr)
if doEncrypt {
- toWrite = aead.Seal(buf[:0], nonce, buf[:n], nil)
+ toWrite = aead.Seal(buf[:0], nonce, buf[:n], ad)
} else {
- toWrite, err = aead.Open(buf[:0], nonce, buf[:n], nil)
+ toWrite, err = aead.Open(buf[:0], nonce, buf[:n], ad)
if err != nil {
return readBytes, err
}
return nil, err
}
tbs := PktTbs{
- Magic: MagicNNCPEv4,
+ Magic: MagicNNCPEv5.B,
Nice: nice,
Sender: our.Id,
Recipient: their.Id,
signature := new([ed25519.SignatureSize]byte)
copy(signature[:], ed25519.Sign(our.SignPrv, tbsBuf.Bytes()))
pktEnc := PktEnc{
- Magic: MagicNNCPEv4,
+ Magic: MagicNNCPEv5.B,
Nice: nice,
Sender: our.Id,
Recipient: their.Id,
ExchPub: *pubEph,
Sign: *signature,
}
+ ad := blake3.Sum256(tbsBuf.Bytes())
tbsBuf.Reset()
if _, err = xdr.Marshal(&tbsBuf, &pktEnc); err != nil {
return nil, err
}
sharedKey := new([32]byte)
curve25519.ScalarMult(sharedKey, prvEph, their.ExchPub)
- kdf, err := blake2b.NewXOF(KDFXOFSize, sharedKey[:])
- if err != nil {
- return nil, err
- }
- if _, err = kdf.Write(MagicNNCPEv4[:]); err != nil {
- return nil, err
- }
key := make([]byte, chacha20poly1305.KeySize)
- if _, err = io.ReadFull(kdf, key); err != nil {
- return nil, err
- }
+ blake3.DeriveKey(key, string(MagicNNCPEv5.B[:]), sharedKey[:])
aead, err := chacha20poly1305.New(key)
if err != nil {
return nil, err
fullSize := pktBuf.Len() + int(size)
sizeBuf := make([]byte, 8+aead.Overhead())
binary.BigEndian.PutUint64(sizeBuf, uint64(sizeWithTags(int64(fullSize))))
- if _, err = out.Write(aead.Seal(sizeBuf[:0], nonce, sizeBuf[:8], nil)); err != nil {
+ if _, err = out.Write(aead.Seal(sizeBuf[:0], nonce, sizeBuf[:8], ad[:])); err != nil {
return nil, err
}
lr := io.LimitedReader{R: data, N: size}
mr := io.MultiReader(&pktBuf, &lr)
- written, err := aeadProcess(aead, nonce, true, mr, out)
+ written, err := aeadProcess(aead, nonce, ad[:], true, mr, out)
if err != nil {
return nil, err
}
return nil, io.ErrUnexpectedEOF
}
if padSize > 0 {
- if _, err = io.ReadFull(kdf, key); err != nil {
- return nil, err
- }
- kdf, err = blake2b.NewXOF(blake2b.OutputLengthUnknown, key)
- if err != nil {
- return nil, err
- }
- if _, err = io.CopyN(out, kdf, padSize); err != nil {
+ blake3.DeriveKey(key, string(MagicNNCPEv5.B[:])+" PAD", sharedKey[:])
+ xof := blake3.New(32, key).XOF()
+ if _, err = io.CopyN(out, xof, padSize); err != nil {
return nil, err
}
}
return pktEncRaw, nil
}
-func TbsVerify(our *NodeOur, their *Node, pktEnc *PktEnc) (bool, error) {
+func TbsVerify(our *NodeOur, their *Node, pktEnc *PktEnc) ([]byte, bool, error) {
tbs := PktTbs{
- Magic: MagicNNCPEv4,
+ Magic: MagicNNCPEv5.B,
Nice: pktEnc.Nice,
Sender: their.Id,
Recipient: our.Id,
}
var tbsBuf bytes.Buffer
if _, err := xdr.Marshal(&tbsBuf, &tbs); err != nil {
- return false, err
+ return nil, false, err
}
- return ed25519.Verify(their.SignPub, tbsBuf.Bytes(), pktEnc.Sign[:]), nil
+ return tbsBuf.Bytes(), ed25519.Verify(their.SignPub, tbsBuf.Bytes(), pktEnc.Sign[:]), nil
}
func PktEncRead(
if err != nil {
return nil, 0, err
}
- if pktEnc.Magic != MagicNNCPEv4 {
- return nil, 0, BadMagic
+ switch pktEnc.Magic {
+ case MagicNNCPEv1.B:
+ err = MagicNNCPEv1.TooOld()
+ case MagicNNCPEv2.B:
+ err = MagicNNCPEv2.TooOld()
+ case MagicNNCPEv3.B:
+ err = MagicNNCPEv3.TooOld()
+ case MagicNNCPEv4.B:
+ err = MagicNNCPEv4.TooOld()
+ case MagicNNCPEv5.B:
+ default:
+ err = BadMagic
+ }
+ if err != nil {
+ return nil, 0, err
}
their, known := nodes[*pktEnc.Sender]
if !known {
if *pktEnc.Recipient != *our.Id {
return nil, 0, errors.New("Invalid recipient")
}
- verified, err := TbsVerify(our, their, &pktEnc)
+ tbsRaw, verified, err := TbsVerify(our, their, &pktEnc)
if err != nil {
return nil, 0, err
}
if !verified {
return their, 0, errors.New("Invalid signature")
}
+ ad := blake3.Sum256(tbsRaw)
sharedKey := new([32]byte)
curve25519.ScalarMult(sharedKey, our.ExchPrv, &pktEnc.ExchPub)
- kdf, err := blake2b.NewXOF(KDFXOFSize, sharedKey[:])
- if err != nil {
- return their, 0, err
- }
- if _, err = kdf.Write(MagicNNCPEv4[:]); err != nil {
- return their, 0, err
- }
key := make([]byte, chacha20poly1305.KeySize)
- if _, err = io.ReadFull(kdf, key); err != nil {
- return their, 0, err
- }
+ blake3.DeriveKey(key, string(MagicNNCPEv5.B[:]), sharedKey[:])
aead, err := chacha20poly1305.New(key)
if err != nil {
return their, 0, err
if _, err = io.ReadFull(data, sizeBuf); err != nil {
return their, 0, err
}
- sizeBuf, err = aead.Open(sizeBuf[:0], nonce, sizeBuf, nil)
+ sizeBuf, err = aead.Open(sizeBuf[:0], nonce, sizeBuf, ad[:])
if err != nil {
return their, 0, err
}
size := int64(binary.BigEndian.Uint64(sizeBuf))
lr := io.LimitedReader{R: data, N: size}
- written, err := aeadProcess(aead, nonce, false, &lr, out)
+ written, err := aeadProcess(aead, nonce, ad[:], false, &lr, out)
if err != nil {
return their, int64(written), err
}
"time"
"github.com/dustin/go-humanize"
- "go.cypherpunks.ru/nncp/v6/uilive"
+ "go.cypherpunks.ru/nncp/v7/uilive"
)
func init() {
"crypto/subtle"
"errors"
"fmt"
- "hash"
"io"
"os"
"path/filepath"
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/dustin/go-humanize"
"github.com/flynn/noise"
- "golang.org/x/crypto/blake2b"
)
const (
SPHeadOverhead = 4
)
-type SPCheckerQueues struct {
- appeared chan *[32]byte
- checked chan *[32]byte
+type MTHAndOffset struct {
+ mth *MTH
+ offset uint64
}
-var (
- MagicNNCPLv1 [8]byte = [8]byte{'N', 'N', 'C', 'P', 'S', 0, 0, 1}
+type SPCheckerTask struct {
+ nodeId *NodeId
+ hsh *[MTHSize]byte
+ mth *MTH
+ done chan []byte
+}
+var (
SPInfoOverhead int
SPFreqOverhead int
SPFileOverhead int
DefaultDeadline = 10 * time.Second
PingTimeout = time.Minute
- spCheckers = make(map[NodeId]*SPCheckerQueues)
- SPCheckersWg sync.WaitGroup
+ spCheckerTasks chan SPCheckerTask
+ SPCheckerWg sync.WaitGroup
+ spCheckerOnce sync.Once
)
type FdAndFullSize struct {
fullSize int64
}
-type HasherAndOffset struct {
- h hash.Hash
- offset uint64
-}
-
type SPType uint8
const (
type SPInfo struct {
Nice uint8
Size uint64
- Hash *[32]byte
+ Hash *[MTHSize]byte
}
type SPFreq struct {
- Hash *[32]byte
+ Hash *[MTHSize]byte
Offset uint64
}
type SPFile struct {
- Hash *[32]byte
+ Hash *[MTHSize]byte
Offset uint64
Payload []byte
}
type SPDone struct {
- Hash *[32]byte
+ Hash *[MTHSize]byte
}
type SPRaw struct {
copy(SPPingMarshalized, buf.Bytes())
buf.Reset()
- spInfo := SPInfo{Nice: 123, Size: 123, Hash: new([32]byte)}
+ spInfo := SPInfo{Nice: 123, Size: 123, Hash: new([MTHSize]byte)}
if _, err := xdr.Marshal(&buf, spInfo); err != nil {
panic(err)
}
SPInfoOverhead = buf.Len()
buf.Reset()
- spFreq := SPFreq{Hash: new([32]byte), Offset: 123}
+ spFreq := SPFreq{Hash: new([MTHSize]byte), Offset: 123}
if _, err := xdr.Marshal(&buf, spFreq); err != nil {
panic(err)
}
SPFreqOverhead = buf.Len()
buf.Reset()
- spFile := SPFile{Hash: new([32]byte), Offset: 123}
+ spFile := SPFile{Hash: new([MTHSize]byte), Offset: 123}
if _, err := xdr.Marshal(&buf, spFile); err != nil {
panic(err)
}
SPFileOverhead = buf.Len()
+ spCheckerTasks = make(chan SPCheckerTask)
}
func MarshalSP(typ SPType, sp interface{}) []byte {
csTheir *noise.CipherState
payloads chan []byte
pings chan struct{}
- infosTheir map[[32]byte]*SPInfo
- infosOurSeen map[[32]byte]uint8
+ infosTheir map[[MTHSize]byte]*SPInfo
+ infosOurSeen map[[MTHSize]byte]uint8
queueTheir []*FreqWithNice
wg sync.WaitGroup
RxBytes int64
txRate int
isDead chan struct{}
listOnly bool
- onlyPkts map[[32]byte]bool
+ onlyPkts map[[MTHSize]byte]bool
writeSPBuf bytes.Buffer
fds map[string]FdAndFullSize
fdsLock sync.RWMutex
- fileHashers map[string]*HasherAndOffset
- checkerQueues SPCheckerQueues
+ fileHashers map[string]*MTHAndOffset
progressBars map[string]struct{}
sync.RWMutex
}
state.Ctx.UnlockDir(state.txLock)
}
-func SPChecker(ctx *Ctx, nodeId *NodeId, appeared, checked chan *[32]byte) {
- for hshValue := range appeared {
- pktName := Base32Codec.EncodeToString(hshValue[:])
- les := LEs{
- {"XX", string(TRx)},
- {"Node", nodeId},
- {"Pkt", pktName},
- }
- SPCheckersWg.Add(1)
- ctx.LogD("sp-checker", les, func(les LEs) string {
- return fmt.Sprintf("Checksumming %s/rx/%s", ctx.NodeName(nodeId), pktName)
- })
- size, err := ctx.CheckNoCK(nodeId, hshValue)
- les = append(les, LE{"Size", size})
- if err != nil {
- ctx.LogE("sp-checker", les, err, func(les LEs) string {
- return fmt.Sprintf(
- "Checksumming %s/rx/%s (%s)", ctx.NodeName(nodeId), pktName,
- humanize.IBytes(uint64(size)),
- )
- })
- continue
- }
- ctx.LogI("sp-checker-done", les, func(les LEs) string {
- return fmt.Sprintf(
- "Packet %s is retreived (%s)",
- pktName, humanize.IBytes(uint64(size)),
- )
- })
- SPCheckersWg.Done()
- go func(hsh *[32]byte) { checked <- hsh }(hshValue)
- }
-}
-
func (state *SPState) WriteSP(dst io.Writer, payload []byte, ping bool) error {
state.writeSPBuf.Reset()
n, err := xdr.Marshal(&state.writeSPBuf, SPRaw{
- Magic: MagicNNCPLv1,
+ Magic: MagicNNCPSv1.B,
Payload: payload,
})
if err != nil {
}
state.RxLastSeen = time.Now()
state.RxBytes += int64(n)
- if sp.Magic != MagicNNCPLv1 {
+ if sp.Magic != MagicNNCPSv1.B {
return nil, BadMagic
}
return sp.Payload, nil
}
-func (ctx *Ctx) infosOur(nodeId *NodeId, nice uint8, seen *map[[32]byte]uint8) [][]byte {
+func (ctx *Ctx) infosOur(nodeId *NodeId, nice uint8, seen *map[[MTHSize]byte]uint8) [][]byte {
var infos []*SPInfo
var totalSize int64
for job := range ctx.Jobs(nodeId, TTx) {
state.hs = hs
state.payloads = make(chan []byte)
state.pings = make(chan struct{})
- state.infosTheir = make(map[[32]byte]*SPInfo)
- state.infosOurSeen = make(map[[32]byte]uint8)
+ state.infosTheir = make(map[[MTHSize]byte]*SPInfo)
+ state.infosOurSeen = make(map[[MTHSize]byte]uint8)
state.progressBars = make(map[string]struct{})
state.started = started
state.rxLock = rxLock
state.hs = hs
state.payloads = make(chan []byte)
state.pings = make(chan struct{})
- state.infosOurSeen = make(map[[32]byte]uint8)
- state.infosTheir = make(map[[32]byte]*SPInfo)
+ state.infosOurSeen = make(map[[MTHSize]byte]uint8)
+ state.infosTheir = make(map[[MTHSize]byte]*SPInfo)
state.progressBars = make(map[string]struct{})
state.started = started
state.xxOnly = xxOnly
) error {
les := LEs{{"Node", state.Node.Id}, {"Nice", int(state.Nice)}}
state.fds = make(map[string]FdAndFullSize)
- state.fileHashers = make(map[string]*HasherAndOffset)
+ state.fileHashers = make(map[string]*MTHAndOffset)
state.isDead = make(chan struct{})
if state.maxOnlineTime > 0 {
state.mustFinishAt = state.started.Add(state.maxOnlineTime)
}
-
- // Checker
if !state.NoCK {
- queues := spCheckers[*state.Node.Id]
- if queues == nil {
- queues = &SPCheckerQueues{
- appeared: make(chan *[32]byte),
- checked: make(chan *[32]byte),
- }
- spCheckers[*state.Node.Id] = queues
- go SPChecker(state.Ctx, state.Node.Id, queues.appeared, queues.checked)
- }
- state.checkerQueues = *queues
- go func() {
- for job := range state.Ctx.JobsNoCK(state.Node.Id) {
- if job.PktEnc.Nice <= state.Nice {
- state.checkerQueues.appeared <- job.HshValue
- }
- }
- }()
- state.wg.Add(1)
- go func() {
- defer state.wg.Done()
- for {
- select {
- case <-state.isDead:
- return
- case hsh := <-state.checkerQueues.checked:
- state.payloads <- MarshalSP(SPTypeDone, SPDone{hsh})
- }
- }
- }()
+ spCheckerOnce.Do(func() { go SPChecker(state.Ctx) })
}
// Remaining handshake payload sending
}
state.Ctx.LogD("sp-sending", append(les, LE{"Size", int64(len(payload))}), logMsg)
conn.SetWriteDeadline(time.Now().Add(DefaultDeadline)) // #nosec G104
- if err := state.WriteSP(conn, state.csOur.Encrypt(nil, nil, payload), ping); err != nil {
+ ct, err := state.csOur.Encrypt(nil, nil, payload)
+ if err != nil {
+ state.Ctx.LogE("sp-encrypting", les, err, logMsg)
+ return
+ }
+ if err := state.WriteSP(conn, ct, ping); err != nil {
state.Ctx.LogE("sp-sending", les, err, logMsg)
return
}
close(state.payloads)
close(state.pings)
state.Duration = time.Now().Sub(state.started)
- SPCheckersWg.Wait()
state.dirUnlock()
state.RxSpeed = state.RxBytes
state.TxSpeed = state.TxBytes
pktName, humanize.IBytes(uint64(len(file.Payload))),
)
}
+ fullsize := int64(0)
+ state.RLock()
+ infoTheir, ok := state.infosTheir[*file.Hash]
+ state.RUnlock()
+ if !ok {
+ state.Ctx.LogE("sp-file-open", lesp, err, func(les LEs) string {
+ return logMsg(les) + ": unknown file"
+ })
+ continue
+ }
+ fullsize = int64(infoTheir.Size)
+ lesp = append(lesp, LE{"FullSize", fullsize})
dirToSync := filepath.Join(
state.Ctx.Spool,
state.Node.Id.String(),
state.fdsLock.RLock()
fdAndFullSize, exists := state.fds[filePathPart]
state.fdsLock.RUnlock()
+ hasherAndOffset := state.fileHashers[filePath]
var fd *os.File
if exists {
fd = fdAndFullSize.fd
state.fdsLock.Lock()
state.fds[filePathPart] = FdAndFullSize{fd: fd}
state.fdsLock.Unlock()
- if file.Offset == 0 {
- h, err := blake2b.New256(nil)
- if err != nil {
- panic(err)
+ if !state.NoCK {
+ hasherAndOffset = &MTHAndOffset{
+ mth: MTHNew(fullsize, int64(file.Offset)),
+ offset: file.Offset,
}
- state.fileHashers[filePath] = &HasherAndOffset{h: h}
+ state.fileHashers[filePath] = hasherAndOffset
}
}
state.Ctx.LogD(
state.closeFd(filePathPart)
return nil, err
}
- hasherAndOffset, hasherExists := state.fileHashers[filePath]
- if hasherExists {
+ if hasherAndOffset != nil {
if hasherAndOffset.offset == file.Offset {
- if _, err = hasherAndOffset.h.Write(file.Payload); err != nil {
+ if _, err = hasherAndOffset.mth.Write(file.Payload); err != nil {
panic(err)
}
hasherAndOffset.offset += uint64(len(file.Payload))
} else {
- state.Ctx.LogD(
- "sp-file-offset-differs", lesp,
+ state.Ctx.LogE(
+ "sp-file-offset-differs", lesp, errors.New("offset differs"),
func(les LEs) string {
- return logMsg(les) + ": offset differs, deleting hasher"
+ return logMsg(les) + ": deleting hasher"
},
)
delete(state.fileHashers, filePath)
- hasherExists = false
+ hasherAndOffset = nil
}
}
ourSize := int64(file.Offset + uint64(len(file.Payload)))
- lesp[len(lesp)-1].V = ourSize
- fullsize := int64(0)
- state.RLock()
- infoTheir, ok := state.infosTheir[*file.Hash]
- state.RUnlock()
- if ok {
- fullsize = int64(infoTheir.Size)
- }
- lesp = append(lesp, LE{"FullSize", fullsize})
+ lesp[len(lesp)-2].V = ourSize
if state.Ctx.ShowPrgrs {
state.progressBars[pktName] = struct{}{}
Progress("Rx", lesp)
state.closeFd(filePathPart)
continue
}
- if hasherExists {
- if bytes.Compare(hasherAndOffset.h.Sum(nil), file.Hash[:]) != 0 {
- state.Ctx.LogE(
- "sp-file-bad-checksum", lesp,
- errors.New("checksum mismatch"),
- logMsg,
- )
- continue
- }
- if err = os.Rename(filePathPart, filePath); err != nil {
- state.Ctx.LogE("sp-file-rename", lesp, err, func(les LEs) string {
- return logMsg(les) + ": renaming"
- })
- continue
- }
- if err = DirSync(dirToSync); err != nil {
- state.Ctx.LogE("sp-file-dirsync", lesp, err, func(les LEs) string {
- return logMsg(les) + ": dirsyncing"
- })
- continue
- }
- state.Ctx.LogI("sp-file-done", lesp, func(les LEs) string {
- return logMsg(les) + ": done"
- })
- state.wg.Add(1)
- go func() {
- state.payloads <- MarshalSP(SPTypeDone, SPDone{file.Hash})
- state.wg.Done()
- }()
- state.Lock()
- delete(state.infosTheir, *file.Hash)
- state.Unlock()
- if !state.Ctx.HdrUsage {
- state.closeFd(filePathPart)
- continue
- }
- if _, err = fd.Seek(0, io.SeekStart); err != nil {
- state.Ctx.LogE("sp-file-seek", lesp, err, func(les LEs) string {
- return logMsg(les) + ": seeking"
+ if hasherAndOffset != nil {
+ delete(state.fileHashers, filePath)
+ if hasherAndOffset.mth.PrependSize == 0 {
+ if bytes.Compare(hasherAndOffset.mth.Sum(nil), file.Hash[:]) != 0 {
+ state.Ctx.LogE(
+ "sp-file-bad-checksum", lesp,
+ errors.New("checksum mismatch"),
+ logMsg,
+ )
+ state.closeFd(filePathPart)
+ continue
+ }
+ if err = os.Rename(filePathPart, filePath); err != nil {
+ state.Ctx.LogE("sp-file-rename", lesp, err, func(les LEs) string {
+ return logMsg(les) + ": renaming"
+ })
+ state.closeFd(filePathPart)
+ continue
+ }
+ if err = DirSync(dirToSync); err != nil {
+ state.Ctx.LogE("sp-file-dirsync", lesp, err, func(les LEs) string {
+ return logMsg(les) + ": dirsyncing"
+ })
+ state.closeFd(filePathPart)
+ continue
+ }
+ state.Ctx.LogI("sp-file-done", lesp, func(les LEs) string {
+ return logMsg(les) + ": done"
})
+ state.wg.Add(1)
+ go func() {
+ state.payloads <- MarshalSP(SPTypeDone, SPDone{file.Hash})
+ state.wg.Done()
+ }()
+ state.Lock()
+ delete(state.infosTheir, *file.Hash)
+ state.Unlock()
+ if !state.Ctx.HdrUsage {
+ continue
+ }
+ if _, err = fd.Seek(0, io.SeekStart); err != nil {
+ state.Ctx.LogE("sp-file-seek", lesp, err, func(les LEs) string {
+ return logMsg(les) + ": seeking"
+ })
+ state.closeFd(filePathPart)
+ continue
+ }
+ _, pktEncRaw, err := state.Ctx.HdrRead(fd)
state.closeFd(filePathPart)
+ if err != nil {
+ state.Ctx.LogE("sp-file-hdr-read", lesp, err, func(les LEs) string {
+ return logMsg(les) + ": HdrReading"
+ })
+ continue
+ }
+ state.Ctx.HdrWrite(pktEncRaw, filePath)
continue
}
- _, pktEncRaw, err := state.Ctx.HdrRead(fd)
- state.closeFd(filePathPart)
- if err != nil {
- state.Ctx.LogE("sp-file-hdr-read", lesp, err, func(les LEs) string {
- return logMsg(les) + ": HdrReading"
- })
- continue
- }
- state.Ctx.HdrWrite(pktEncRaw, filePath)
- continue
}
state.closeFd(filePathPart)
if err = os.Rename(filePathPart, filePath+NoCKSuffix); err != nil {
state.Lock()
delete(state.infosTheir, *file.Hash)
state.Unlock()
- if !state.NoCK {
- state.checkerQueues.appeared <- file.Hash
+ if hasherAndOffset != nil {
+ go func() {
+ spCheckerTasks <- SPCheckerTask{
+ nodeId: state.Node.Id,
+ hsh: file.Hash,
+ mth: hasherAndOffset.mth,
+ done: state.payloads,
+ }
+ }()
}
case SPTypeDone:
}
return payloadsSplit(replies), nil
}
+
+func SPChecker(ctx *Ctx) {
+ for t := range spCheckerTasks {
+ pktName := Base32Codec.EncodeToString(t.hsh[:])
+ les := LEs{
+ {"XX", string(TRx)},
+ {"Node", t.nodeId},
+ {"Pkt", pktName},
+ }
+ SPCheckerWg.Add(1)
+ ctx.LogD("sp-checker", les, func(les LEs) string {
+ return fmt.Sprintf("Checksumming %s/rx/%s", ctx.NodeName(t.nodeId), pktName)
+ })
+ size, err := ctx.CheckNoCK(t.nodeId, t.hsh, t.mth)
+ les = append(les, LE{"Size", size})
+ if err != nil {
+ ctx.LogE("sp-checker", les, err, func(les LEs) string {
+ return fmt.Sprintf(
+ "Checksumming %s/rx/%s (%s)", ctx.NodeName(t.nodeId), pktName,
+ humanize.IBytes(uint64(size)),
+ )
+ })
+ SPCheckerWg.Done()
+ continue
+ }
+ ctx.LogI("sp-checker-done", les, func(les LEs) string {
+ return fmt.Sprintf(
+ "Packet %s is retrieved (%s)",
+ pktName, humanize.IBytes(uint64(size)),
+ )
+ })
+ SPCheckerWg.Done()
+ go func(t SPCheckerTask) {
+ defer func() { recover() }()
+ t.done <- MarshalSP(SPTypeDone, SPDone{t.hsh})
+ }(t)
+ }
+}
"path/filepath"
"strconv"
"time"
-
- "golang.org/x/crypto/blake2b"
)
func TempFile(dir, prefix string) (*os.File, error) {
if err != nil {
return nil, err
}
- hsh, err := blake2b.New256(nil)
- if err != nil {
- return nil, err
- }
+ hsh := MTHNew(0, 0)
return &TmpFileWHash{
W: bufio.NewWriter(io.MultiWriter(hsh, tmp)),
Fd: tmp,
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/dustin/go-humanize"
"github.com/klauspost/compress/zstd"
- "golang.org/x/crypto/blake2b"
"golang.org/x/crypto/poly1305"
)
if noTrns {
goto Closing
}
- dst := new([blake2b.Size256]byte)
+ dst := new([MTHSize]byte)
copy(dst[:], pkt.Path[:int(pkt.PathLen)])
nodeId := NodeId(*dst)
node, known := ctx.Neigh[nodeId]
"testing/quick"
xdr "github.com/davecgh/go-xdr/xdr2"
- "golang.org/x/crypto/blake2b"
)
var (
ctx.Neigh[*nodeOur.Id] = nodeOur.Their()
incomingPath := filepath.Join(spool, "incoming")
for _, fileData := range files {
- checksum := blake2b.Sum256(fileData)
- fileName := Base32Codec.EncodeToString(checksum[:])
+ hasher := MTHNew(0, 0)
+ hasher.Write(fileData)
+ fileName := Base32Codec.EncodeToString(hasher.Sum(nil))
src := filepath.Join(spool, fileName)
if err := ioutil.WriteFile(src, fileData, os.FileMode(0600)); err != nil {
panic(err)
return false
}
for _, fileData := range files {
- checksum := blake2b.Sum256(fileData)
- fileName := Base32Codec.EncodeToString(checksum[:])
+ hasher := MTHNew(0, 0)
+ hasher.Write(fileData)
+ fileName := Base32Codec.EncodeToString(hasher.Sum(nil))
data, err := ioutil.ReadFile(filepath.Join(incomingPath, fileName))
if err != nil {
panic(err)
os.MkdirAll(txPath, os.FileMode(0700))
for _, data := range datum {
pktTrans := Pkt{
- Magic: MagicNNCPPv3,
+ Magic: MagicNNCPPv3.B,
Type: PktTypeTrns,
- PathLen: blake2b.Size256,
+ PathLen: MTHSize,
}
copy(pktTrans.Path[:], nodeOur.Id[:])
var dst bytes.Buffer
t.Error(err)
return false
}
- checksum := blake2b.Sum256(dst.Bytes())
+ hasher := MTHNew(0, 0)
+ hasher.Write(dst.Bytes())
if err := ioutil.WriteFile(
- filepath.Join(rxPath, Base32Codec.EncodeToString(checksum[:])),
+ filepath.Join(rxPath, Base32Codec.EncodeToString(hasher.Sum(nil))),
dst.Bytes(),
os.FileMode(0600),
); err != nil {
xdr "github.com/davecgh/go-xdr/xdr2"
"github.com/dustin/go-humanize"
"github.com/klauspost/compress/zstd"
- "golang.org/x/crypto/blake2b"
"golang.org/x/crypto/chacha20poly1305"
)
return
}
nonce := make([]byte, aead.NonceSize())
- written, err := aeadProcess(aead, nonce, true, r, tmpW)
+ written, err := aeadProcess(aead, nonce, nil, true, r, tmpW)
if err != nil {
rerr = err
return
}
r, w := io.Pipe()
go func() {
- if _, err := aeadProcess(aead, nonce, false, bufio.NewReader(src), w); err != nil {
+ if _, err := aeadProcess(aead, nonce, nil, false, bufio.NewReader(src), w); err != nil {
w.CloseWithError(err) // #nosec G104
}
}()
leftSize := fileSize
metaPkt := ChunkedMeta{
- Magic: MagicNNCPMv1,
+ Magic: MagicNNCPMv2.B,
FileSize: uint64(fileSize),
ChunkSize: uint64(chunkSize),
- Checksums: make([][32]byte, 0, (fileSize/chunkSize)+1),
+ Checksums: make([][MTHSize]byte, 0, (fileSize/chunkSize)+1),
}
for i := int64(0); i < (fileSize/chunkSize)+1; i++ {
- hsh := new([32]byte)
+ hsh := new([MTHSize]byte)
metaPkt.Checksums = append(metaPkt.Checksums, *hsh)
}
var sizeToSend int64
if err != nil {
return err
}
- hsh, err = blake2b.New256(nil)
- if err != nil {
- return err
- }
+ hsh = MTHNew(0, 0)
_, err = ctx.Tx(
node,
pkt,
"testing/quick"
xdr "github.com/davecgh/go-xdr/xdr2"
- "golang.org/x/crypto/blake2b"
)
func TestTx(t *testing.T) {
if pkt.Type != PktTypeTrns {
return false
}
- if bytes.Compare(pkt.Path[:blake2b.Size256], vias[i+1][:]) != 0 {
+ if bytes.Compare(pkt.Path[:MTHSize], vias[i+1][:]) != 0 {
return false
}
}