[This CL is part of a sequence implementing the proposal #51082.
The design doc is at https://go.dev/s/godocfmt-design.]
Run the updated gofmt, which reformats doc comments,
on the main repository. Vendored files are excluded.
For #51082.
Change-Id: I7332f099b60f716295fb34719c98c04eb1a85407
Reviewed-on: https://go-review.googlesource.com/c/go/+/384268
Reviewed-by: Jonathan Amsterdam <jba@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
// CmpInt compares x and y. The result is
//
-// -1 if x < y
-// 0 if x == y
-// +1 if x > y
-//
+//	-1 if x < y
+//	0 if x == y
+//	+1 if x > y
func CmpInt(x, y *Int) int {
x.doinit()
y.doinit()
// binary.
//
// This script requires that three environment variables be set:
-// GOIOS_DEV_ID: The codesigning developer id or certificate identifier
-// GOIOS_APP_ID: The provisioning app id prefix. Must support wildcard app ids.
-// GOIOS_TEAM_ID: The team id that owns the app id prefix.
+//
+//	GOIOS_DEV_ID: The codesigning developer id or certificate identifier
+//	GOIOS_APP_ID: The provisioning app id prefix. Must support wildcard app ids.
+//	GOIOS_TEAM_ID: The team id that owns the app id prefix.
+//
// $GOROOT/misc/ios contains a script, detect.go, that attempts to autodetect these.
package main
// that the file has no data in it, which is rather odd.
//
// As an example, if the underlying raw file contains the 8-byte data:
+//
// var compactFile = "abcdefgh"
//
// And the sparse map has the following entries:
+//
// var spd sparseDatas = []sparseEntry{
// {Offset: 2, Length: 5}, // Data fragment for 2..6
// {Offset: 18, Length: 3}, // Data fragment for 18..20
// }
//
// Then the content of the resulting sparse file with a Header.Size of 25 is:
+//
// var sparseFile = "\x00"*2 + "abcde" + "\x00"*11 + "fgh" + "\x00"*4
type (
sparseDatas []sparseEntry
// The input must have been already validated.
//
// This function mutates src and returns a normalized map where:
-// * adjacent fragments are coalesced together
-// * only the last fragment may be empty
-// * the endOffset of the last fragment is the total size
+//   - adjacent fragments are coalesced together
+//   - only the last fragment may be empty
+//   - the endOffset of the last fragment is the total size
func invertSparseEntries(src []sparseEntry, size int64) []sparseEntry {
dst := src[:0]
var pre sparseEntry
// header in case further processing is required.
//
// The err will be set to io.EOF only when one of the following occurs:
-// * Exactly 0 bytes are read and EOF is hit.
-// * Exactly 1 block of zeros is read and EOF is hit.
-// * At least 2 blocks of zeros are read.
+//   - Exactly 0 bytes are read and EOF is hit.
+//   - Exactly 1 block of zeros is read and EOF is hit.
+//   - At least 2 blocks of zeros are read.
func (tr *Reader) readHeader() (*Header, *block, error) {
// Two blocks of zero bytes marks the end of the archive.
if _, err := io.ReadFull(tr.r, tr.blk[:]); err != nil {
// validPAXRecord reports whether the key-value pair is valid where each
// record is formatted as:
+//
// "%d %s=%s\n" % (size, key, value)
//
// Keys and values should be UTF-8, but the number of bad writers out there
// new elements. If it does not, a new underlying array will be allocated.
// Append returns the updated slice. It is therefore necessary to store the
// result of append, often in the variable holding the slice itself:
+//
// slice = append(slice, elem1, elem2)
// slice = append(slice, anotherSlice...)
+//
// As a special case, it is legal to append a string to a byte slice, like this:
+//
// slice = append([]byte("hello "), "world"...)
func append(slice []Type, elems ...Type) []Type
func delete(m map[Type]Type1, key Type)
// The len built-in function returns the length of v, according to its type:
+//
// Array: the number of elements in v.
// Pointer to array: the number of elements in *v (even if v is nil).
// Slice, or map: the number of elements in v; if v is nil, len(v) is zero.
// String: the number of bytes in v.
// Channel: the number of elements queued (unread) in the channel buffer;
// if v is nil, len(v) is zero.
+//
// For some arguments, such as a string literal or a simple array expression, the
// result can be a constant. See the Go language specification's "Length and
// capacity" section for details.
func len(v Type) int
// The cap built-in function returns the capacity of v, according to its type:
+//
// Array: the number of elements in v (same as len(v)).
// Pointer to array: the number of elements in *v (same as len(v)).
// Slice: the maximum length the slice can reach when resliced;
// if v is nil, cap(v) is zero.
// Channel: the channel buffer capacity, in units of elements;
// if v is nil, cap(v) is zero.
+//
// For some arguments, such as a simple array expression, the result can be a
// constant. See the Go language specification's "Length and capacity" section for
// details.
// value. Unlike new, make's return type is the same as the type of its
// argument, not a pointer to it. The specification of the result depends on
// the type:
+//
// Slice: The size specifies the length. The capacity of the slice is
// equal to its length. A second integer argument may be provided to
// specify a different capacity; it must be no smaller than the
// the last sent value is received. After the last value has been received
// from a closed channel c, any receive from c will succeed without
// blocking, returning the zero value for the channel element. The form
+//
// x, ok := <-c
+//
// will also set ok to false for a closed channel.
func close(c chan<- Type)
// the subslices between those separators.
// If sep is empty, SplitN splits after each UTF-8 sequence.
// The count determines the number of subslices to return:
-// n > 0: at most n subslices; the last subslice will be the unsplit remainder.
-// n == 0: the result is nil (zero subslices)
-// n < 0: all subslices
+//
+//	n > 0: at most n subslices; the last subslice will be the unsplit remainder.
+//	n == 0: the result is nil (zero subslices)
+//	n < 0: all subslices
//
// To split around the first instance of a separator, see Cut.
func SplitN(s, sep []byte, n int) [][]byte { return genSplit(s, sep, 0, n) }
// returns a slice of those subslices.
// If sep is empty, SplitAfterN splits after each UTF-8 sequence.
// The count determines the number of subslices to return:
-// n > 0: at most n subslices; the last subslice will be the unsplit remainder.
-// n == 0: the result is nil (zero subslices)
-// n < 0: all subslices
+//
+//	n > 0: at most n subslices; the last subslice will be the unsplit remainder.
+//	n == 0: the result is nil (zero subslices)
+//	n < 0: all subslices
func SplitAfterN(s, sep []byte, n int) [][]byte {
return genSplit(s, sep, len(sep), n)
}
// just enough to support pprof.
//
// Usage:
+//
// go tool addr2line binary
//
// Addr2line reads hexadecimal addresses, one per line and with optional 0x prefix,
// license that can be found in the LICENSE file.
/*
-Asm, typically invoked as ``go tool asm'', assembles the source file into an object
+Asm, typically invoked as “go tool asm”, assembles the source file into an object
file named for the basename of the argument source file with a .o suffix. The
object file can then be combined with other objects into a package archive.
-Command Line
+# Command Line
Usage:
// line consumes a single assembly line from p.lex of the form
//
-// {label:} WORD[.cond] [ arg {, arg} ] (';' | '\n')
+//	{label:} WORD[.cond] [ arg {, arg} ] (';' | '\n')
//
// It adds any labels to p.pendingLabels and returns the word, cond,
// operand list, and true. If there is an error or EOF, it returns
// constrained form of the operand syntax that's always SB-based,
// non-static, and has at most a simple integer offset:
//
-// [$|*]sym[<abi>][+Int](SB)
+//	[$|*]sym[<abi>][+Int](SB)
func (p *Parser) funcAddress() (string, obj.ABI, bool) {
switch p.peek() {
case '$', '*':
//
// For 386/AMD64 register list specifies 4VNNIW-style multi-source operand.
// For range of 4 elements, Intel manual uses "+3" notation, for example:
+//
// VP4DPWSSDS zmm1{k1}{z}, zmm2+3, m128
+//
// Given asm line:
+//
// VP4DPWSSDS Z5, [Z10-Z13], (AX)
+//
// zmm2 is Z10, and Z13 is the only valid value for it (Z10+3).
// Only simple ranges are accepted, like [Z0-Z3].
//
Buildid displays or updates the build ID stored in a Go package or binary.
Usage:
+
go tool buildid [-w] file
By default, buildid prints the build ID found in the named file.
// license that can be found in the LICENSE file.
/*
-
Cgo enables the creation of Go packages that call C code.
-Using cgo with the go command
+# Using cgo with the go command
To use cgo write normal Go code that imports a pseudo-package "C".
The Go code can then refer to types such as C.size_t, variables such
directory and linked properly.
For example if package foo is in the directory /go/src/foo:
- // #cgo LDFLAGS: -L${SRCDIR}/libs -lfoo
+	// #cgo LDFLAGS: -L${SRCDIR}/libs -lfoo
Will be expanded to:
- // #cgo LDFLAGS: -L/go/src/foo/libs -lfoo
+	// #cgo LDFLAGS: -L/go/src/foo/libs -lfoo
When the Go tool sees that one or more Go files use the special import
"C", it will look for other non-Go files in the directory and compile
The CXX_FOR_TARGET, CXX_FOR_${GOOS}_${GOARCH}, and CXX
environment variables work in a similar way for C++ code.
-Go references to C
+# Go references to C
Within the Go file, C's struct field names that are keywords in Go
can be accessed by prefixing them with an underscore: if x points at a C
of memory. Because C.malloc cannot fail, it has no two-result form
that returns errno.
-C references to Go
+# C references to Go
Go functions can be exported for use by C code in the following way:
duplicate symbols and the linker will fail. To avoid this, definitions
must be placed in preambles in other files, or in C source files.
-Passing pointers
+# Passing pointers
Go is a garbage collected language, and the garbage collector needs to
know the location of every pointer to Go memory. Because of this,
store pointer values in it. Zero out the memory in C before passing it
to Go.
-Special cases
+# Special cases
A few special C types which would normally be represented by a pointer
type in Go are instead represented by a uintptr. Those include:
go tool fix -r eglconf <pkg>
-Using cgo directly
+# Using cgo directly
Usage:
+
go tool cgo [cgo options] [-- compiler options] gofiles...
Cgo transforms the specified input Go source files into several output
//
// For example, the following string:
//
-// `a b:"c d" 'e''f' "g\""`
+//	`a b:"c d" 'e''f' "g\""`
//
// Would be parsed as:
//
-// []string{"a", "b:c d", "ef", `g"`}
+//	[]string{"a", "b:c d", "ef", `g"`}
func splitQuoted(s string) (r []string, err error) {
var args []string
arg := make([]rune, len(s))
// checkIndex checks whether arg has the form &a[i], possibly inside
// type conversions. If so, then in the general case it writes
-// _cgoIndexNN := a
-// _cgoNN := &cgoIndexNN[i] // with type conversions, if any
+//
+//	_cgoIndexNN := a
+//	_cgoNN := &cgoIndexNN[i] // with type conversions, if any
+//
// to sb, and writes
-// _cgoCheckPointer(_cgoNN, _cgoIndexNN)
+//
+//	_cgoCheckPointer(_cgoNN, _cgoIndexNN)
+//
// to sbCheck, and returns true. If a is a simple variable or field reference,
// it writes
-// _cgoIndexNN := &a
+//
+//	_cgoIndexNN := &a
+//
// and dereferences the uses of _cgoIndexNN. Taking the address avoids
// making a copy of an array.
//
// checkAddr checks whether arg has the form &x, possibly inside type
// conversions. If so, it writes
-// _cgoBaseNN := &x
-// _cgoNN := _cgoBaseNN // with type conversions, if any
+//
+//	_cgoBaseNN := &x
+//	_cgoNN := _cgoBaseNN // with type conversions, if any
+//
// to sb, and writes
-// _cgoCheckPointer(_cgoBaseNN, true)
+//
+//	_cgoCheckPointer(_cgoBaseNN, true)
+//
// to sbCheck, and returns true. This tells _cgoCheckPointer to check
// just the contents of the pointer being passed, not any other part
// of the memory allocation. This is run after checkIndex, which looks
}
// opregreg emits instructions for
-// dest := dest(To) op src(From)
+//
+//	dest := dest(To) op src(From)
+//
// and also returns the created obj.Prog so it
// may be further adjusted (offset, scale, etc).
func opregreg(s *ssagen.State, op obj.As, dest, src int16) *obj.Prog {
// but then you may as well do it here. so this is cleaner and
// shorter and less complicated.
// The result of inlnode MUST be assigned back to n, e.g.
-// n.Left = inlnode(n.Left)
+//
+//	n.Left = inlnode(n.Left)
func inlnode(n ir.Node, maxCost int32, inlMap map[*ir.Func]bool, edit func(ir.Node) ir.Node) ir.Node {
if n == nil {
return n
// inlined function body, and (List, Rlist) contain the (input, output)
// parameters.
// The result of mkinlcall MUST be assigned back to n, e.g.
-// n.Left = mkinlcall(n.Left, fn, isddd)
+//
+//	n.Left = mkinlcall(n.Left, fn, isddd)
func mkinlcall(n *ir.CallExpr, fn *ir.Func, maxCost int32, inlMap map[*ir.Func]bool, edit func(ir.Node) ir.Node) ir.Node {
if fn.Inl == nil {
if logopt.Enabled() {
// instead of computing both. SameSafeExpr assumes that l and r are
// used in the same statement or expression. In order for it to be
// safe to reuse l or r, they must:
-// * be the same expression
-// * not have side-effects (no function calls, no channel ops);
-// however, panics are ok
-// * not cause inappropriate aliasing; e.g. two string to []byte
-// conversions, must result in two distinct slices
+//   - be the same expression
+//   - not have side-effects (no function calls, no channel ops);
+//     however, panics are ok
+//   - not cause inappropriate aliasing; e.g. two string to []byte
+//     conversions, must result in two distinct slices
//
// The handling of OINDEXMAP is subtle. OINDEXMAP can occur both
// as an lvalue (map assignment) and an rvalue (map access). This is
}
// The result of InitExpr MUST be assigned back to n, e.g.
-// n.X = InitExpr(init, n.X)
+//
+//	n.X = InitExpr(init, n.X)
func InitExpr(init []Node, expr Node) Node {
if len(init) == 0 {
return expr
// liveness effects on a variable.
//
// The possible flags are:
+//
// uevar - used by the instruction
// varkill - killed by the instruction (set)
+//
// A kill happens after the use (for an instruction that updates a value, for example).
type liveEffect int
// isfat reports whether a variable of type t needs multiple assignments to initialize.
// For example:
//
-// type T struct { x, y int }
-// x := T{x: 0, y: 1}
+//	type T struct { x, y int }
+//	x := T{x: 0, y: 1}
//
// Then we need:
//
-// var t T
-// t.x = 0
-// t.y = 1
+//	var t T
+//	t.x = 0
+//	t.y = 1
//
// to fully initialize t.
func isfat(t *types.Type) bool {
//
// The pipeline contains 2 steps:
//
-// 1) Generate package export data "stub".
+//  1. Generate package export data "stub".
//
-// 2) Generate package IR from package export data.
+//  2. Generate package IR from package export data.
//
// The package data "stub" at step (1) contains everything from the local package,
// but nothing that have been imported. When we're actually writing out export data
// to the output files (see writeNewExport function), we run the "linker", which does
// a few things:
//
-// + Updates compiler extensions data (e.g., inlining cost, escape analysis results).
+//   - Updates compiler extensions data (e.g., inlining cost, escape analysis results).
//
-// + Handles re-exporting any transitive dependencies.
+//   - Handles re-exporting any transitive dependencies.
//
-// + Prunes out any unnecessary details (e.g., non-inlineable functions, because any
-// downstream importers only care about inlinable functions).
+//   - Prunes out any unnecessary details (e.g., non-inlineable functions, because any
+//     downstream importers only care about inlinable functions).
//
// The source files are typechecked twice, once before writing export data
// using types2 checker, once after read export data using gc/typecheck.
// This duplication of work will go away once we always use types2 checker,
// we can remove the gc/typecheck pass. The reason it is still here:
//
-// + It reduces engineering costs in maintaining a fork of typecheck
-// (e.g., no need to backport fixes like CL 327651).
+//   - It reduces engineering costs in maintaining a fork of typecheck
+//     (e.g., no need to backport fixes like CL 327651).
//
-// + It makes it easier to pass toolstash -cmp.
+//   - It makes it easier to pass toolstash -cmp.
//
-// + Historically, we would always re-run the typechecker after import, even though
-// we know the imported data is valid. It's not ideal, but also not causing any
-// problem either.
+//   - Historically, we would always re-run the typechecker after import, even though
+//     we know the imported data is valid. It's not ideal, but also not causing any
+//     problem either.
//
-// + There's still transformation that being done during gc/typecheck, like rewriting
-// multi-valued function call, or transform ir.OINDEX -> ir.OINDEXMAP.
+//   - There are still transformations being done during gc/typecheck, like rewriting
+//     multi-valued function calls, or transforming ir.OINDEX -> ir.OINDEXMAP.
//
// Using syntax+types2 tree, which already has a complete representation of generics,
// the unified IR has the full typed AST for doing introspection during step (1).
// Task makes and returns an initialization record for the package.
// See runtime/proc.go:initTask for its layout.
// The 3 tasks for initialization are:
-// 1) Initialize all of the packages the current package depends on.
-// 2) Initialize all the variables that have initializers.
-// 3) Run any init functions.
+//  1. Initialize all of the packages the current package depends on.
+//  2. Initialize all the variables that have initializers.
+//  3. Run any init functions.
func Task() *ir.Name {
var deps []*obj.LSym // initTask records for packages the current package depends on
var fns []*obj.LSym // functions to call for package initialization
}
// eqfield returns the node
-// p.field == q.field
+//
+//	p.field == q.field
func eqfield(p ir.Node, q ir.Node, field *types.Sym) ir.Node {
nx := ir.NewSelectorExpr(base.Pos, ir.OXDOT, p, field)
ny := ir.NewSelectorExpr(base.Pos, ir.OXDOT, q, field)
}
// EqString returns the nodes
-// len(s) == len(t)
+//
+//	len(s) == len(t)
+//
// and
-// memequal(s.ptr, t.ptr, len(s))
+//
+//	memequal(s.ptr, t.ptr, len(s))
+//
// which can be used to construct string equality comparison.
// eqlen must be evaluated before eqmem, and shortcircuiting is required.
func EqString(s, t ir.Node) (eqlen *ir.BinaryExpr, eqmem *ir.CallExpr) {
}
// EqInterface returns the nodes
-// s.tab == t.tab (or s.typ == t.typ, as appropriate)
+//
+//	s.tab == t.tab (or s.typ == t.typ, as appropriate)
+//
// and
-// ifaceeq(s.tab, s.data, t.data) (or efaceeq(s.typ, s.data, t.data), as appropriate)
+//
+//	ifaceeq(s.tab, s.data, t.data) (or efaceeq(s.typ, s.data, t.data), as appropriate)
+//
// which can be used to construct interface equality comparison.
// eqtab must be evaluated before eqdata, and shortcircuiting is required.
func EqInterface(s, t ir.Node) (eqtab *ir.BinaryExpr, eqdata *ir.CallExpr) {
}
// eqmem returns the node
-// memequal(&p.field, &q.field [, size])
+//
+//	memequal(&p.field, &q.field [, size])
func eqmem(p ir.Node, q ir.Node, field *types.Sym, size int64) ir.Node {
nx := typecheck.Expr(typecheck.NodAddr(ir.NewSelectorExpr(base.Pos, ir.OXDOT, p, field)))
ny := typecheck.Expr(typecheck.NodAddr(ir.NewSelectorExpr(base.Pos, ir.OXDOT, q, field)))
// tflag is documented in reflect/type.go.
//
// tflag values must be kept in sync with copies in:
-// - cmd/compile/internal/reflectdata/reflect.go
-// - cmd/link/internal/ld/decodesym.go
-// - reflect/type.go
-// - runtime/type.go
+//   - cmd/compile/internal/reflectdata/reflect.go
+//   - cmd/link/internal/ld/decodesym.go
+//   - reflect/type.go
+//   - runtime/type.go
const (
tflagUncommon = 1 << 0
tflagExtraStar = 1 << 1
// Also wraps methods on instantiated generic types for use in itab entries.
// For an instantiated generic type G[int], we generate wrappers like:
// G[int] pointer shaped:
+//
// func (x G[int]) f(arg) {
// .inst.G[int].f(dictionary, x, arg)
-// }
+//	}
+//
// G[int] not pointer shaped:
+//
// func (x *G[int]) f(arg) {
// .inst.G[int].f(dictionary, *x, arg)
-// }
+//	}
+//
// These wrappers are always fully stenciled.
func methodWrapper(rcvr *types.Type, method *types.Field, forItab bool) *obj.LSym {
orig := rcvr
}
// opregreg emits instructions for
-// dest := dest(To) op src(From)
+//
+//	dest := dest(To) op src(From)
+//
// and also returns the created obj.Prog so it
// may be further adjusted (offset, scale, etc).
func opregreg(s *ssagen.State, op obj.As, dest, src int16) *obj.Prog {
}
// opregregimm emits instructions for
+//
// dest := src(From) op off
+//
// and also returns the created obj.Prog so it
// may be further adjusted (offset, scale, etc).
func opregregimm(s *ssagen.State, op obj.As, dest, src int16, off int64) *obj.Prog {
}
// For each entry k, v in this map, if we have a value x with:
-// x.Op == k[0]
-// x.Args[0].Op == k[1]
+//
+//	x.Op == k[0]
+//	x.Args[0].Op == k[1]
+//
// then we can set x.Op to v and set x.Args like this:
-// x.Args[0].Args + x.Args[1:]
+//
+//	x.Args[0].Args + x.Args[1:]
+//
// Additionally, the Aux/AuxInt from x.Args[0] is merged into x.
var combine = map[[2]Op]Op{
// amd64
// Edge represents a CFG edge.
// Example edges for b branching to either c or d.
// (c and d have other predecessors.)
-// b.Succs = [{c,3}, {d,1}]
-// c.Preds = [?, ?, ?, {b,0}]
-// d.Preds = [?, {b,1}, ?]
+//
+//	b.Succs = [{c,3}, {d,1}]
+//	c.Preds = [?, ?, ?, {b,0}]
+//	d.Preds = [?, {b,1}, ?]
+//
// These indexes allow us to edit the CFG in constant time.
// In addition, it informs phi ops in degenerate cases like:
-// b:
-// if k then c else c
-// c:
-// v = Phi(x, y)
+//
+//	b:
+//	   if k then c else c
+//	c:
+//	   v = Phi(x, y)
+//
// Then the indexes tell you whether x is chosen from
// the if or else branch from b.
-// b.Succs = [{c,0},{c,1}]
-// c.Preds = [{b,0},{b,1}]
+//
+//	b.Succs = [{c,0},{c,1}]
+//	c.Preds = [{b,0},{b,1}]
+//
// means x is chosen if k is true.
type Edge struct {
// block edge goes to (in a Succs list) or from (in a Preds list)
}
// BlockKind is the kind of SSA block.
-// kind controls successors
-// ------------------------------------------
-// Exit [return mem] []
-// Plain [] [next]
-// If [boolean Value] [then, else]
-// Defer [mem] [nopanic, panic] (control opcode should be OpStaticCall to runtime.deferproc)
+//
+//	kind       controls           successors
+//	------------------------------------------
+//	Exit       [return mem]       []
+//	Plain      []                 [next]
+//	If         [boolean Value]    [then, else]
+//	Defer      [mem]              [nopanic, panic] (control opcode should be OpStaticCall to runtime.deferproc)
type BlockKind int8
// short form print
//
// b.removePred(i)
// for _, v := range b.Values {
-// if v.Op != OpPhi {
-// continue
-// }
-// b.removeArg(v, i)
+//
+//		if v.Op != OpPhi {
+//			continue
+//		}
+//		b.removeArg(v, i)
+//
// }
func (b *Block) removePhiArg(phi *Value, i int) {
n := len(b.Preds)
//
// Search for basic blocks that look like
//
-// bb0 bb0
-// | \ / \
-// | bb1 or bb1 bb2 <- trivial if/else blocks
-// | / \ /
-// bb2 bb3
+//	bb0            bb0
+//	| \           /   \
+//	| bb1   or  bb1   bb2   <- trivial if/else blocks
+//	| /           \   /
+//	bb2            bb3
//
// where the intermediate blocks are mostly empty (with no side-effects);
// rewrite Phis in the postdominator as CondSelects.
// version is used as a regular expression to match the phase name(s).
//
// Special cases that have turned out to be useful:
-// - ssa/check/on enables checking after each phase
-// - ssa/all/time enables time reporting for all phases
+//   - ssa/check/on enables checking after each phase
+//   - ssa/all/time enables time reporting for all phases
//
// See gc/lex.go for dissection of the option string.
// Example uses:
// partitionValues partitions the values into equivalence classes
// based on having all the following features match:
-// - opcode
-// - type
-// - auxint
-// - aux
-// - nargs
-// - block # if a phi op
-// - first two arg's opcodes and auxint
-// - NOT first two arg's aux; that can break CSE.
+//   - opcode
+//   - type
+//   - auxint
+//   - aux
+//   - nargs
+//   - block # if a phi op
+//   - first two arg's opcodes and auxint
+//   - NOT first two arg's aux; that can break CSE.
+//
// partitionValues returns a list of equivalence classes, each
// being a sorted by ID list of *Values. The eqclass slices are
// backed by the same storage as the input slice.
// OpArg{Int,Float}Reg values, inserting additional values in
// cases where they are missing. Example:
//
-// func foo(s string, used int, notused int) int {
-// return len(s) + used
-// }
+//	func foo(s string, used int, notused int) int {
+//		return len(s) + used
+//	}
//
// In the function above, the incoming parameter "used" is fully live,
// "notused" is not live, and "s" is partially live (only the length
// field of the string is used). At the point where debug value
// analysis runs, we might expect to see an entry block with:
//
-// b1:
-// v4 = ArgIntReg <uintptr> {s+8} [0] : BX
-// v5 = ArgIntReg <int> {used} [0] : CX
+//	b1:
+//	  v4 = ArgIntReg <uintptr> {s+8} [0] : BX
+//	  v5 = ArgIntReg <int> {used} [0] : CX
//
// While this is an accurate picture of the live incoming params,
// we also want to have debug locations for non-live params (or
// their non-live pieces), e.g. something like
//
-// b1:
-// v9 = ArgIntReg <*uint8> {s+0} [0] : AX
-// v4 = ArgIntReg <uintptr> {s+8} [0] : BX
-// v5 = ArgIntReg <int> {used} [0] : CX
-// v10 = ArgIntReg <int> {unused} [0] : DI
+//	b1:
+//	  v9 = ArgIntReg <*uint8> {s+0} [0] : AX
+//	  v4 = ArgIntReg <uintptr> {s+8} [0] : BX
+//	  v5 = ArgIntReg <int> {used} [0] : CX
+//	  v10 = ArgIntReg <int> {unused} [0] : DI
//
// This function examines the live OpArg{Int,Float}Reg values and
// synthesizes new (dead) values for the non-live params or the
// that spills a register arg. It returns the ID of that instruction
// Example:
//
-// b1:
-// v3 = ArgIntReg <int> {p1+0} [0] : AX
-// ... more arg regs ..
-// v4 = ArgFloatReg <float32> {f1+0} [0] : X0
-// v52 = MOVQstore <mem> {p1} v2 v3 v1
-// ... more stores ...
-// v68 = MOVSSstore <mem> {f4} v2 v67 v66
-// v38 = MOVQstoreconst <mem> {blob} [val=0,off=0] v2 v32
+//	b1:
+//	  v3 = ArgIntReg <int> {p1+0} [0] : AX
+//	  ... more arg regs ..
+//	  v4 = ArgFloatReg <float32> {f1+0} [0] : X0
+//	  v52 = MOVQstore <mem> {p1} v2 v3 v1
+//	  ... more stores ...
+//	  v68 = MOVSSstore <mem> {f4} v2 v67 v66
+//	  v38 = MOVQstoreconst <mem> {blob} [val=0,off=0] v2 v32
//
// Important: locatePrologEnd is expected to work properly only with
// optimization turned off (e.g. "-N"). If optimization is enabled
// "O" is an explicit indication that we expect it to be optimized out.
// For example:
//
-// if len(os.Args) > 1 { //gdb-dbg=(hist/A,cannedInput/A) //dlv-dbg=(hist/A,cannedInput/A)
+//	if len(os.Args) > 1 { //gdb-dbg=(hist/A,cannedInput/A) //dlv-dbg=(hist/A,cannedInput/A)
//
// TODO: not implemented for Delve yet, but this is the plan
//
// It decomposes a Load or an Arg into smaller parts and returns the new mem.
// If the type does not match one of the expected aggregate types, it returns nil instead.
// Parameters:
-// pos -- the location of any generated code.
-// b -- the block into which any generated code should normally be placed
-// source -- the value, possibly an aggregate, to be stored.
-// mem -- the mem flowing into this decomposition (loads depend on it, stores updated it)
-// t -- the type of the value to be stored
-// storeOffset -- if the value is stored in memory, it is stored at base (see storeRc) + storeOffset
-// loadRegOffset -- regarding source as a value in registers, the register offset in ABI1. Meaningful only if source is OpArg.
-// storeRc -- storeRC; if the value is stored in registers, this specifies the registers.
-// StoreRc also identifies whether the target is registers or memory, and has the base for the store operation.
+//
+//	pos -- the location of any generated code.
+//	b -- the block into which any generated code should normally be placed
+//	source -- the value, possibly an aggregate, to be stored.
+//	mem -- the mem flowing into this decomposition (loads depend on it, stores updated it)
+//	t -- the type of the value to be stored
+//	storeOffset -- if the value is stored in memory, it is stored at base (see storeRc) + storeOffset
+//	loadRegOffset -- regarding source as a value in registers, the register offset in ABI1. Meaningful only if source is OpArg.
+//	storeRc -- storeRC; if the value is stored in registers, this specifies the registers.
+//	  StoreRc also identifies whether the target is registers or memory, and has the base for the store operation.
func (x *expandState) decomposeArg(pos src.XPos, b *Block, source, mem *Value, t *types.Type, storeOffset int64, loadRegOffset Abi1RO, storeRc registerCursor) *Value {
pa := x.prAssignForArg(source)
// It decomposes a Load into smaller parts and returns the new mem.
// If the type does not match one of the expected aggregate types, it returns nil instead.
// Parameters:
-// pos -- the location of any generated code.
-// b -- the block into which any generated code should normally be placed
-// source -- the value, possibly an aggregate, to be stored.
-// mem -- the mem flowing into this decomposition (loads depend on it, stores updated it)
-// t -- the type of the value to be stored
-// storeOffset -- if the value is stored in memory, it is stored at base (see storeRc) + offset
-// loadRegOffset -- regarding source as a value in registers, the register offset in ABI1. Meaningful only if source is OpArg.
-// storeRc -- storeRC; if the value is stored in registers, this specifies the registers.
-// StoreRc also identifies whether the target is registers or memory, and has the base for the store operation.
+//
+//	pos -- the location of any generated code.
+//	b -- the block into which any generated code should normally be placed
+//	source -- the value, possibly an aggregate, to be stored.
+//	mem -- the mem flowing into this decomposition (loads depend on it, stores updated it)
+//	t -- the type of the value to be stored
+//	storeOffset -- if the value is stored in memory, it is stored at base (see storeRc) + offset
+//	loadRegOffset -- regarding source as a value in registers, the register offset in ABI1. Meaningful only if source is OpArg.
+//	storeRc -- storeRC; if the value is stored in registers, this specifies the registers.
+//	  StoreRc also identifies whether the target is registers or memory, and has the base for the store operation.
//
// TODO -- this needs cleanup; it just works for SSA-able aggregates, and won't fully generalize to register-args aggregates.
func (x *expandState) decomposeLoad(pos src.XPos, b *Block, source, mem *Value, t *types.Type, storeOffset int64, loadRegOffset Abi1RO, storeRc registerCursor) *Value {
}
// DebugHashMatch reports whether environment variable evname
-// 1) is empty (this is a special more-quickly implemented case of 3)
-// 2) is "y" or "Y"
-// 3) is a suffix of the sha1 hash of name
-// 4) is a suffix of the environment variable
+//  1. is empty (this is a special more-quickly implemented case of 3)
+//  2. is "y" or "Y"
+//  3. is a suffix of the sha1 hash of name
+//  4. is a suffix of the environment variable
// fmt.Sprintf("%s%d", evname, n)
// provided that all such variables are nonempty for 0 <= i <= n
+//
// Otherwise it returns false.
// When true is returned the message
-// "%s triggered %s\n", evname, name
+//
+//	"%s triggered %s\n", evname, name
+//
// is printed on the file named in environment variable
-// GSHS_LOGFILE
+//
+//	GSHS_LOGFILE
+//
// or standard out if that is empty or there is an error
// opening the file.
func (f *Func) DebugHashMatch(evname string) bool {
// fuseBlockIf handles the following cases where s0 and s1 are empty blocks.
//
-// b b b b
-// \ / \ / | \ / \ / | | |
-// s0 s1 | s1 s0 | | |
-// \ / | / \ | | |
-// ss ss ss ss
+// b b b b
+// \ / \ / | \ / \ / | | |
+// s0 s1 | s1 s0 | | |
+// \ / | / \ | | |
+// ss ss ss ss
//
// If all Phi ops in ss have identical variables for slots corresponding to
// s0, s1 and b then the branch can be dropped.
// This optimization often comes up in switch statements with multiple
// expressions in a case clause:
-// switch n {
-// case 1,2,3: return 4
-// }
+//
+// switch n {
+// case 1,2,3: return 4
+// }
+//
// TODO: If ss doesn't contain any OpPhis, are s0 and s1 dead code anyway?
func fuseBlockIf(b *Block) bool {
if b.Kind != BlockIf {
// of an If block can be derived from its predecessor If block, in
// some such cases, we can redirect the predecessor If block to the
// corresponding successor block directly. For example:
-// p:
-// v11 = Less64 <bool> v10 v8
-// If v11 goto b else u
-// b: <- p ...
-// v17 = Leq64 <bool> v10 v8
-// If v17 goto s else o
+//
+// p:
+// v11 = Less64 <bool> v10 v8
+// If v11 goto b else u
+// b: <- p ...
+// v17 = Leq64 <bool> v10 v8
+// If v17 goto s else o
+//
// We can redirect p to s directly.
//
// The implementation here borrows the framework of the prove pass.
-// 1, Traverse all blocks of function f to find If blocks.
-// 2, For any If block b, traverse all its predecessors to find If blocks.
-// 3, For any If block predecessor p, update relationship p->b.
-// 4, Traverse all successors of b.
-// 5, For any successor s of b, try to update relationship b->s, if a
-// contradiction is found then redirect p to another successor of b.
+//
+// 1. Traverse all blocks of function f to find If blocks.
+// 2. For any If block b, traverse all its predecessors to find If blocks.
+// 3. For any If block predecessor p, update relationship p->b.
+// 4. Traverse all successors of b.
+// 5. For any successor s of b, try to update relationship b->s; if a
+// contradiction is found then redirect p to another successor of b.
func fuseBranchRedirect(f *Func) bool {
ft := newFactsTable(f)
ft.checkpoint()
//
// Look for branch structure like:
//
-// p
-// |\
-// | b
-// |/ \
-// s0 s1
+// p
+// |\
+// | b
+// |/ \
+// s0 s1
//
// In our example, p has control '1 <= x', b has control 'x < 5',
// and s0 and s1 are the if and else results of the comparison.
//
// This will be optimized into:
//
-// p
-// \
-// b
-// / \
-// s0 s1
+// p
+// \
+// b
+// / \
+// s0 s1
//
// where b has the combined control value 'unsigned(x-1) < 4'.
// Later passes will then fuse p and b.
// variable that has been decomposed into multiple stack slots.
// As an example, a string could have the following configurations:
//
-// stack layout LocalSlots
+// stack layout LocalSlots
//
-// Optimizations are disabled. s is on the stack and represented in its entirety.
-// [ ------- s string ---- ] { N: s, Type: string, Off: 0 }
+// Optimizations are disabled. s is on the stack and represented in its entirety.
+// [ ------- s string ---- ] { N: s, Type: string, Off: 0 }
//
-// s was not decomposed, but the SSA operates on its parts individually, so
-// there is a LocalSlot for each of its fields that points into the single stack slot.
-// [ ------- s string ---- ] { N: s, Type: *uint8, Off: 0 }, {N: s, Type: int, Off: 8}
+// s was not decomposed, but the SSA operates on its parts individually, so
+// there is a LocalSlot for each of its fields that points into the single stack slot.
+// [ ------- s string ---- ] { N: s, Type: *uint8, Off: 0 }, {N: s, Type: int, Off: 8}
//
-// s was decomposed. Each of its fields is in its own stack slot and has its own LocalSLot.
-// [ ptr *uint8 ] [ len int] { N: ptr, Type: *uint8, Off: 0, SplitOf: parent, SplitOffset: 0},
-// { N: len, Type: int, Off: 0, SplitOf: parent, SplitOffset: 8}
-// parent = &{N: s, Type: string}
+// s was decomposed. Each of its fields is in its own stack slot and has its own LocalSlot.
+// [ ptr *uint8 ] [ len int] { N: ptr, Type: *uint8, Off: 0, SplitOf: parent, SplitOffset: 0},
+// { N: len, Type: int, Off: 0, SplitOf: parent, SplitOffset: 8}
+// parent = &{N: s, Type: string}
type LocalSlot struct {
N *ir.Name // an ONAME *ir.Name representing a stack location.
Type *types.Type // type of slot
// parseIndVar checks whether the SSA value passed as argument is a valid induction
// variable, and, if so, extracts:
-// * the minimum bound
-// * the increment value
-// * the "next" value (SSA value that is Phi'd into the induction variable every loop)
+// - the minimum bound
+// - the increment value
+// - the "next" value (SSA value that is Phi'd into the induction variable every loop)
+//
// Currently, we detect induction variables that match (Phi min nxt),
// with nxt being (Add inc ind).
// If it can't parse the induction variable correctly, it returns (nil, nil, nil).
//
// Look for variables and blocks that satisfy the following
//
-// loop:
-// ind = (Phi min nxt),
-// if ind < max
-// then goto enter_loop
-// else goto exit_loop
-//
-// enter_loop:
-// do something
-// nxt = inc + ind
-// goto loop
+// loop:
+// ind = (Phi min nxt),
+// if ind < max
+// then goto enter_loop
+// else goto exit_loop
//
-// exit_loop:
+// enter_loop:
+// do something
+// nxt = inc + ind
+// goto loop
//
+// exit_loop:
//
// TODO: handle 32 bit operations
func findIndVar(f *Func) []indVar {
// to loops with a check-loop-condition-at-end.
// This helps loops avoid extra unnecessary jumps.
//
-// loop:
-// CMPQ ...
-// JGE exit
-// ...
-// JMP loop
-// exit:
+// loop:
+// CMPQ ...
+// JGE exit
+// ...
+// JMP loop
+// exit:
//
-// JMP entry
-// loop:
-// ...
-// entry:
-// CMPQ ...
-// JLT loop
+// JMP entry
+// loop:
+// ...
+// entry:
+// CMPQ ...
+// JLT loop
func loopRotate(f *Func) {
loopnest := f.loopnest()
if loopnest.hasIrreducible {
// umagic computes the constants needed to strength reduce unsigned n-bit divides by the constant uint64(c).
// The return values satisfy for all 0 <= x < 2^n
-// floor(x / uint64(c)) = x * (m + 2^n) >> (n+s)
+//
+// floor(x / uint64(c)) = x * (m + 2^n) >> (n+s)
func umagic(n uint, c int64) umagicData {
// Convert from ConstX auxint values to the real uint64 constant they represent.
d := uint64(c) << (64 - n) >> (64 - n)
// magic computes the constants needed to strength reduce signed n-bit divides by the constant c.
// Must have c>0.
// The return values satisfy for all -2^(n-1) <= x < 2^(n-1)
-// trunc(x / c) = x * m >> (n+s) + (x < 0 ? 1 : 0)
+//
+// trunc(x / c) = x * m >> (n+s) + (x < 0 ? 1 : 0)
func smagic(n uint, c int64) smagicData {
C := new(big.Int).SetInt64(c)
s := C.BitLen() - 1
// A Sym represents a symbolic offset from a base register.
// Currently a Sym can be one of 3 things:
-// - a *gc.Node, for an offset from SP (the stack pointer)
-// - a *obj.LSym, for an offset from SB (the global pointer)
-// - nil, for no offset
+// - a *gc.Node, for an offset from SP (the stack pointer)
+// - a *obj.LSym, for an offset from SB (the global pointer)
+// - nil, for no offset
type Sym interface {
CanBeAnSSASym()
CanBeAnSSAAux()
)
// boundsABI determines which register arguments a bounds check call should use. For an [a:b:c] slice, we do:
-// CMPQ c, cap
-// JA fail1
-// CMPQ b, c
-// JA fail2
-// CMPQ a, b
-// JA fail3
+//
+// CMPQ c, cap
+// JA fail1
+// CMPQ b, c
+// JA fail2
+// CMPQ a, b
+// JA fail3
//
// fail1: CALL panicSlice3Acap (c, cap)
// fail2: CALL panicSlice3B (b, c)
// A phi is redundant if its arguments are all equal. For
// purposes of counting, ignore the phi itself. Both of
// these phis are redundant:
-// v = phi(x,x,x)
-// v = phi(x,v,x,v)
+//
+// v = phi(x,x,x)
+// v = phi(x,v,x,v)
+//
// We repeat this process to also catch situations like:
-// v = phi(x, phi(x, x), phi(x, v))
+//
+// v = phi(x, phi(x, x), phi(x, v))
+//
// TODO: Can we also simplify cases like:
-// v = phi(v, w, x)
-// w = phi(v, w, x)
+//
+// v = phi(v, w, x)
+// w = phi(v, w, x)
+//
// and would that be useful?
func phielim(f *Func) {
for {
// phiopt eliminates boolean Phis based on the previous if.
//
// Main use case is to transform:
-// x := false
-// if b {
-// x = true
-// }
+//
+// x := false
+// if b {
+// x = true
+// }
+//
// into x = b.
//
// In SSA code this appears as
//
-// b0
-// If b -> b1 b2
-// b1
-// Plain -> b2
-// b2
-// x = (OpPhi (ConstBool [true]) (ConstBool [false]))
+// b0
+// If b -> b1 b2
+// b1
+// Plain -> b2
+// b2
+// x = (OpPhi (ConstBool [true]) (ConstBool [false]))
//
// In this case we can replace x with a copy of b.
func phiopt(f *Func) {
// to record that A<I, A<J, A<K (with no known relation between I,J,K), we create the
// following DAG:
//
-// A
-// / \
-// I extra
-// / \
-// J K
+// A
+// / \
+// I extra
+// / \
+// J K
type poset struct {
lastidx uint32 // last generated dense index
flags uint8 // internal flags
//
// E.g.
//
-// r := relation(...)
+// r := relation(...)
//
-// if v < w {
-// newR := r & lt
-// }
-// if v >= w {
-// newR := r & (eq|gt)
-// }
-// if v != w {
-// newR := r & (lt|gt)
-// }
+// if v < w {
+// newR := r & lt
+// }
+// if v >= w {
+// newR := r & (eq|gt)
+// }
+// if v != w {
+// newR := r & (lt|gt)
+// }
type relation uint
const (
// By far, the most common redundant pair are generated by bounds checking.
// For example for the code:
//
-// a[i] = 4
-// foo(a[i])
+// a[i] = 4
+// foo(a[i])
//
// The compiler will generate the following code:
//
-// if i >= len(a) {
-// panic("not in bounds")
-// }
-// a[i] = 4
-// if i >= len(a) {
-// panic("not in bounds")
-// }
-// foo(a[i])
+// if i >= len(a) {
+// panic("not in bounds")
+// }
+// a[i] = 4
+// if i >= len(a) {
+// panic("not in bounds")
+// }
+// foo(a[i])
//
// The second comparison i >= len(a) is clearly redundant because if the
// else branch of the first comparison is executed, we already know that i < len(a).
// clobber invalidates values. Returns true.
// clobber is used by rewrite rules to:
-// A) make sure the values are really dead and never used again.
-// B) decrement use counts of the values' args.
+//
+// A) make sure the values are really dead and never used again.
+// B) decrement use counts of the values' args.
func clobber(vv ...*Value) bool {
for _, v := range vv {
v.reset(OpInvalid)
// noteRule is an easy way to track if a rule is matched when writing
// new ones. Make the rule of interest also conditional on
-// noteRule("note to self: rule of interest matched")
+//
+// noteRule("note to self: rule of interest matched")
+//
// and that message will print when the rule matches.
func noteRule(s string) bool {
fmt.Println(s)
// We happen to match the semantics to those of arm/arm64.
// Note that these semantics differ from x86: the carry flag has the opposite
// sense on a subtraction!
-// On amd64, C=1 represents a borrow, e.g. SBB on amd64 does x - y - C.
-// On arm64, C=0 represents a borrow, e.g. SBC on arm64 does x - y - ^C.
-// (because it does x + ^y + C).
+//
+// On amd64, C=1 represents a borrow, e.g. SBB on amd64 does x - y - C.
+// On arm64, C=0 represents a borrow, e.g. SBC on arm64 does x - y - ^C.
+// (because it does x + ^y + C).
+//
// See https://en.wikipedia.org/wiki/Carry_flag#Vs._borrow_flag
type flagConstant uint8
}
// Profile the aforementioned optimization from two angles:
-// SoloJump: generated branching code has one 'jump', for '<' and '>='
-// CombJump: generated branching code has two consecutive 'jump', for '<=' and '>'
+//
+// SoloJump: generated branching code has one 'jump', for '<' and '>='
+// CombJump: generated branching code has two consecutive 'jump', for '<=' and '>'
+//
// We expect that 'CombJump' is generally on par with the non-optimized code, and
// 'SoloJump' demonstrates some improvement.
// It's for arm64 initially; please see https://github.com/golang/go/issues/38740
// if v transitively depends on store s, v is ordered after s,
// otherwise v is ordered before s.
// Specifically, values are ordered like
-// store1
-// NilCheck that depends on store1
-// other values that depends on store1
-// store2
-// NilCheck that depends on store2
-// other values that depends on store2
-// ...
+//
+// store1
+// NilCheck that depends on store1
+// other values that depend on store1
+// store2
+// NilCheck that depends on store2
+// other values that depend on store2
+// ...
+//
// The order of non-store and non-NilCheck values is undefined
// (not necessarily dependency order). This should be cheaper
// than a full scheduling as done above.
// makeShiftExtensionFunc generates a function containing:
//
-// (rshift (lshift (Const64 [amount])) (Const64 [amount]))
+// (rshift (lshift (Const64 [amount])) (Const64 [amount]))
//
// This may be equivalent to a sign or zero extension.
func makeShiftExtensionFunc(c *Conf, amount int64, lshift, rshift Op, typ *types.Type) fun {
//
// (1) Look for a CFG of the form
//
-// p other pred(s)
-// \ /
-// b
-// / \
-// t other succ
+// p other pred(s)
+// \ /
+// b
+// / \
+// t other succ
//
// in which b is an If block containing a single phi value with a single use (b's Control),
// which has a ConstBool arg.
//
// Rewrite this into
//
-// p other pred(s)
-// | /
-// | b
-// |/ \
-// t u
+// p other pred(s)
+// | /
+// | b
+// |/ \
+// t u
//
// and remove the appropriate phi arg(s).
//
// (2) Look for a CFG of the form
//
-// p q
-// \ /
-// b
-// / \
-// t u
+// p q
+// \ /
+// b
+// / \
+// t u
//
// in which b is as described in (1).
// However, b may also contain other phi values.
// 1. If domorder(x) > domorder(y) then x does not dominate y.
// 2. If domorder(x) < domorder(y) and domorder(y) < domorder(z) and x does not dominate y,
// then x does not dominate z.
+//
// Property (1) means that blocks sorted by domorder always have a maximal dominant block first.
// Property (2) allows searches for dominated blocks to exit early.
func (t SparseTree) domorder(x *Block) int32 {
// trimmableBlock reports whether the block can be trimmed from the CFG,
// subject to the following criteria:
-// - it should not be the first block
-// - it should be BlockPlain
-// - it should not loop back to itself
-// - it either is the single predecessor of the successor block or
-// contains no actual instructions
+// - it should not be the first block
+// - it should be BlockPlain
+// - it should not loop back to itself
+// - it either is the single predecessor of the successor block or
+// contains no actual instructions
func trimmableBlock(b *Block) bool {
if b.Kind != BlockPlain || b == b.Func.Entry {
return false
// for stack variables are specified as the number of bytes below varp (pointer to the
// top of the local variables) for their starting address. The format is:
//
-// - Offset of the deferBits variable
-// - Number of defers in the function
-// - Information about each defer call, in reverse order of appearance in the function:
-// - Offset of the closure value to call
+// - Offset of the deferBits variable
+// - Number of defers in the function
+// - Information about each defer call, in reverse order of appearance in the function:
+// - Offset of the closure value to call
func (s *state) emitOpenDeferInfo() {
x := base.Ctxt.Lookup(s.curfn.LSym.Name + ".opendefer")
x.Set(obj.AttrContentAddressable, true)
// checkBranches checks correct use of labels and branch
// statements (break, continue, goto) in a function body.
// It catches:
-// - misplaced breaks and continues
-// - bad labeled breaks and continues
-// - invalid, unused, duplicate, and missing labels
-// - gotos jumping over variable declarations and into blocks
+// - misplaced breaks and continues
+// - bad labeled breaks and continues
+// - invalid, unused, duplicate, and missing labels
+// - gotos jumping over variable declarations and into blocks
func checkBranches(body *BlockStmt, errh ErrorHandler) {
if body == nil {
return
// get the same type going out.
// force means must assign concrete (non-ideal) type.
// The results of defaultlit2 MUST be assigned back to l and r, e.g.
-// n.Left, n.Right = defaultlit2(n.Left, n.Right, force)
+//
+// n.Left, n.Right = defaultlit2(n.Left, n.Right, force)
func defaultlit2(l ir.Node, r ir.Node, force bool) (ir.Node, ir.Node) {
if l.Type() == nil || r.Type() == nil {
return l, r
// tcArith typechecks operands of a binary arithmetic expression.
// The result of tcArith MUST be assigned back to original operands,
// t is the type of the expression, and should be set by the caller. e.g:
-// n.X, n.Y, t = tcArith(n, op, n.X, n.Y)
-// n.SetType(t)
+//
+// n.X, n.Y, t = tcArith(n, op, n.X, n.Y)
+// n.SetType(t)
func tcArith(n ir.Node, op ir.Op, l, r ir.Node) (ir.Node, ir.Node, *types.Type) {
l, r = defaultlit2(l, r, false)
if l.Type() == nil || r.Type() == nil {
}
// The result of tcCompLit MUST be assigned back to n, e.g.
-// n.Left = tcCompLit(n.Left)
+//
+// n.Left = tcCompLit(n.Left)
func tcCompLit(n *ir.CompLitExpr) (res ir.Node) {
if base.EnableTrace && base.Flag.LowerT {
defer tracePrint("tcCompLit", n)(&res)
// 1: added column details to Pos
// 2: added information for generic function/types. The export of non-generic
// functions/types remains largely backward-compatible. Breaking changes include:
-// - a 'kind' byte is added to constant values
+// - a 'kind' byte is added to constant values
const (
iexportVersionGo1_11 = 0
iexportVersionPosCol = 1
// successive occurrences of the "any" placeholder in the
// type syntax expression n.Type.
// The result of SubstArgTypes MUST be assigned back to old, e.g.
-// n.Left = SubstArgTypes(n.Left, t1, t2)
+//
+// n.Left = SubstArgTypes(n.Left, t1, t2)
func SubstArgTypes(old *ir.Name, types_ ...*types.Type) *ir.Name {
for _, t := range types_ {
types.CalcSize(t)
// typecheck type checks node n.
// The result of typecheck MUST be assigned back to n, e.g.
-// n.Left = typecheck(n.Left, top)
+//
+// n.Left = typecheck(n.Left, top)
func typecheck(n ir.Node, top int) (res ir.Node) {
// cannot type check until all the source has been parsed
if !TypecheckAllowed {
// but also accepts untyped numeric values representable as
// value of type int (see also checkmake for comparison).
// The result of indexlit MUST be assigned back to n, e.g.
-// n.Left = indexlit(n.Left)
+//
+// n.Left = indexlit(n.Left)
func indexlit(n ir.Node) ir.Node {
if n != nil && n.Type() != nil && n.Type().Kind() == types.TIDEAL {
return DefaultLit(n, types.Types[types.TINT])
}
// The result of implicitstar MUST be assigned back to n, e.g.
-// n.Left = implicitstar(n.Left)
+//
+// n.Left = implicitstar(n.Left)
func implicitstar(n ir.Node) ir.Node {
// insert implicit * if needed for fixed array
t := n.Type()
}
// The result of stringtoruneslit MUST be assigned back to n, e.g.
-// n.Left = stringtoruneslit(n.Left)
+//
+// n.Left = stringtoruneslit(n.Left)
func stringtoruneslit(n *ir.ConvExpr) ir.Node {
if n.X.Op() != ir.OLITERAL || n.X.Val().Kind() != constant.String {
base.Fatalf("stringtoarraylit %v", n)
// A Field is a (Sym, Type) pairing along with some other information, and,
// depending on the context, is used to represent:
-// - a field in a struct
-// - a method in an interface or associated with a named type
-// - a function parameter
+// - a field in a struct
+// - a method in an interface or associated with a named type
+// - a function parameter
type Field struct {
flags bitset8
}
// Cmp is a comparison between values a and b.
-// -1 if a < b
-// 0 if a == b
-// 1 if a > b
+//
+// -1 if a < b
+// 0 if a == b
+// 1 if a > b
type Cmp int8
const (
// Type inference computes the type (Type) of every expression (syntax.Expr)
// and checks for compliance with the language specification.
// Use Info.Types[expr].Type for the results of type inference.
-//
package types2
import (
// (and a separating "--"). For instance, to test the package made
// of the files foo.go and bar.go, use:
//
-// go test -run Manual -- foo.go bar.go
+// go test -run Manual -- foo.go bar.go
//
// If no source arguments are provided, the file testdata/manual.go
// is used instead.
//
// Inference proceeds as follows. Starting with given type arguments:
//
-// 1) apply FTI (function type inference) with typed arguments,
-// 2) apply CTI (constraint type inference),
-// 3) apply FTI with untyped function arguments,
-// 4) apply CTI.
+// 1. apply FTI (function type inference) with typed arguments,
+// 2. apply CTI (constraint type inference),
+// 3. apply FTI with untyped function arguments,
+// 4. apply CTI.
//
// The process stops as soon as all type arguments are known or an error occurs.
func (check *Checker) infer(pos syntax.Pos, tparams []*TypeParam, targs []Type, params *Tuple, args []*operand) (result []Type) {
// The last index entry is the field or method index in the (possibly embedded)
// type where the entry was found, either:
//
-// 1) the list of declared methods of a named type; or
-// 2) the list of all methods (method set) of an interface type; or
-// 3) the list of fields of a struct type.
+// 1. the list of declared methods of a named type; or
+// 2. the list of all methods (method set) of an interface type; or
+// 3. the list of fields of a struct type.
//
// The earlier index entries are the indices of the embedded struct fields
// traversed to get to the found entry, starting at depth 0.
// If no entry is found, a nil object is returned. In this case, the returned
// index and indirect values have the following meaning:
//
-// - If index != nil, the index sequence points to an ambiguous entry
-// (the same name appeared more than once at the same embedding level).
+// - If index != nil, the index sequence points to an ambiguous entry
+// (the same name appeared more than once at the same embedding level).
//
-// - If indirect is set, a method with a pointer receiver type was found
-// but there was no pointer on the path from the actual receiver type to
-// the method's formal receiver base type, nor was the receiver addressable.
+// - If indirect is set, a method with a pointer receiver type was found
+// but there was no pointer on the path from the actual receiver type to
+// the method's formal receiver base type, nor was the receiver addressable.
func LookupFieldOrMethod(T Type, addressable bool, pkg *Package, name string) (obj Object, index []int, indirect bool) {
if T == nil {
panic("LookupFieldOrMethod on nil type")
// The last index entry is the field or method index of the type declaring f;
// either:
//
-// 1) the list of declared methods of a named type; or
-// 2) the list of methods of an interface type; or
-// 3) the list of fields of a struct type.
+// 1. the list of declared methods of a named type; or
+// 2. the list of methods of an interface type; or
+// 3. the list of fields of a struct type.
//
// The earlier index entries are the indices of the embedded fields implicitly
// traversed to get from (the type of) x to f, starting at embedding depth 0.
// package-level objects, and may be nil.
//
// Examples:
+//
// "field (T) f int"
// "method (T) f(X) Y"
// "method expr (T) f(X) Y"
// StdSizes is a convenience type for creating commonly used Sizes.
// It makes the following simplifying assumptions:
//
-// - The size of explicitly sized basic types (int16, etc.) is the
-// specified size.
-// - The size of strings and interfaces is 2*WordSize.
-// - The size of slices is 3*WordSize.
-// - The size of an array of n elements corresponds to the size of
-// a struct of n consecutive fields of the array's element type.
-// - The size of a struct is the offset of the last field plus that
-// field's size. As with all element types, if the struct is used
-// in an array its size must first be aligned to a multiple of the
-// struct's alignment.
-// - All other types have size WordSize.
-// - Arrays and structs are aligned per spec definition; all other
-// types are naturally aligned with a maximum alignment MaxAlign.
+// - The size of explicitly sized basic types (int16, etc.) is the
+// specified size.
+// - The size of strings and interfaces is 2*WordSize.
+// - The size of slices is 3*WordSize.
+// - The size of an array of n elements corresponds to the size of
+// a struct of n consecutive fields of the array's element type.
+// - The size of a struct is the offset of the last field plus that
+// field's size. As with all element types, if the struct is used
+// in an array its size must first be aligned to a multiple of the
+// struct's alignment.
+// - All other types have size WordSize.
+// - Arrays and structs are aligned per spec definition; all other
+// types are naturally aligned with a maximum alignment MaxAlign.
//
// *StdSizes implements Sizes.
type StdSizes struct {
// A term describes elementary type sets:
//
-// ∅: (*term)(nil) == ∅ // set of no types (empty set)
-// 𝓤: &term{} == 𝓤 // set of all types (𝓤niverse)
-// T: &term{false, T} == {T} // set of type T
-// ~t: &term{true, t} == {t' | under(t') == t} // set of types with underlying type t
+// ∅: (*term)(nil) == ∅ // set of no types (empty set)
+// 𝓤: &term{} == 𝓤 // set of all types (𝓤niverse)
+// T: &term{false, T} == {T} // set of type T
+// ~t: &term{true, t} == {t' | under(t') == t} // set of types with underlying type t
type term struct {
tilde bool // valid if typ != nil
typ Type
// check assign type list to
// an expression list. called in
+//
// expr-list = func()
func ascompatet(nl ir.Nodes, nr *types.Type) []ir.Node {
if len(nl) != nr.NumFields() {
// check assign expression list to
// an expression list. called in
+//
// expr-list = expr-list
func ascompatee(op ir.Op, nl, nr []ir.Node) []ir.Node {
// cannot happen: should have been rejected during type checking
}
// expand append(l1, l2...) to
-// init {
-// s := l1
-// n := len(s) + len(l2)
-// // Compare as uint so growslice can panic on overflow.
-// if uint(n) > uint(cap(s)) {
-// s = growslice(s, n)
-// }
-// s = s[:n]
-// memmove(&s[len(l1)], &l2[0], len(l2)*sizeof(T))
-// }
-// s
+//
+// init {
+// s := l1
+// n := len(s) + len(l2)
+// // Compare as uint so growslice can panic on overflow.
+// if uint(n) > uint(cap(s)) {
+// s = growslice(s, n)
+// }
+// s = s[:n]
+// memmove(&s[len(l1)], &l2[0], len(l2)*sizeof(T))
+// }
+// s
//
// l2 is allowed to be a string.
func appendSlice(n *ir.CallExpr, init *ir.Nodes) ir.Node {
}
// extendSlice rewrites append(l1, make([]T, l2)...) to
-// init {
-// if l2 >= 0 { // Empty if block here for more meaningful node.SetLikely(true)
-// } else {
-// panicmakeslicelen()
-// }
-// s := l1
-// n := len(s) + l2
-// // Compare n and s as uint so growslice can panic on overflow of len(s) + l2.
-// // cap is a positive int and n can become negative when len(s) + l2
-// // overflows int. Interpreting n when negative as uint makes it larger
-// // than cap(s). growslice will check the int n arg and panic if n is
-// // negative. This prevents the overflow from being undetected.
-// if uint(n) > uint(cap(s)) {
-// s = growslice(T, s, n)
-// }
-// s = s[:n]
-// lptr := &l1[0]
-// sptr := &s[0]
-// if lptr == sptr || !T.HasPointers() {
-// // growslice did not clear the whole underlying array (or did not get called)
-// hp := &s[len(l1)]
-// hn := l2 * sizeof(T)
-// memclr(hp, hn)
-// }
-// }
-// s
+//
+// init {
+// if l2 >= 0 { // Empty if block here for more meaningful node.SetLikely(true)
+// } else {
+// panicmakeslicelen()
+// }
+// s := l1
+// n := len(s) + l2
+// // Compare n and s as uint so growslice can panic on overflow of len(s) + l2.
+// // cap is a positive int and n can become negative when len(s) + l2
+// // overflows int. Interpreting n when negative as uint makes it larger
+// // than cap(s). growslice will check the int n arg and panic if n is
+// // negative. This prevents the overflow from being undetected.
+// if uint(n) > uint(cap(s)) {
+// s = growslice(T, s, n)
+// }
+// s = s[:n]
+// lptr := &l1[0]
+// sptr := &s[0]
+// if lptr == sptr || !T.HasPointers() {
+// // growslice did not clear the whole underlying array (or did not get called)
+// hp := &s[len(l1)]
+// hn := l2 * sizeof(T)
+// memclr(hp, hn)
+// }
+// }
+// s
func extendSlice(n *ir.CallExpr, init *ir.Nodes) ir.Node {
// isAppendOfMake made sure all possible positive values of l2 fit into an uint.
// The case of l2 overflow when converting from e.g. uint to int is handled by an explicit
//
// For race detector, expand append(src, a [, b]* ) to
//
-// init {
-// s := src
-// const argc = len(args) - 1
-// if cap(s) - len(s) < argc {
-// s = growslice(s, len(s)+argc)
-// }
-// n := len(s)
-// s = s[:n+argc]
-// s[n] = a
-// s[n+1] = b
-// ...
-// }
-// s
+// init {
+// s := src
+// const argc = len(args) - 1
+// if cap(s) - len(s) < argc {
+// s = growslice(s, len(s)+argc)
+// }
+// n := len(s)
+// s = s[:n+argc]
+// s[n] = a
+// s[n+1] = b
+// ...
+// }
+// s
func walkAppend(n *ir.CallExpr, init *ir.Nodes, dst ir.Node) ir.Node {
if !ir.SameSafeExpr(dst, n.Args[0]) {
n.Args[0] = safeExpr(n.Args[0], init)
)
// The result of walkCompare MUST be assigned back to n, e.g.
-// n.Left = walkCompare(n.Left, init)
+//
+// n.Left = walkCompare(n.Left, init)
func walkCompare(n *ir.BinaryExpr, init *ir.Nodes) ir.Node {
if n.X.Type().IsInterface() && n.Y.Type().IsInterface() && n.X.Op() != ir.ONIL && n.Y.Op() != ir.ONIL {
return walkCompareInterface(n, init)
}
// The result of finishCompare MUST be assigned back to n, e.g.
-// n.Left = finishCompare(n.Left, x, r, init)
+//
+// n.Left = finishCompare(n.Left, x, r, init)
func finishCompare(n *ir.BinaryExpr, r ir.Node, init *ir.Nodes) ir.Node {
r = typecheck.Expr(r)
r = typecheck.Conv(r, n.Type())
)
// The result of walkExpr MUST be assigned back to n, e.g.
-// n.Left = walkExpr(n.Left, init)
+//
+// n.Left = walkExpr(n.Left, init)
func walkExpr(n ir.Node, init *ir.Nodes) ir.Node {
if n == nil {
return n
// If the original argument n is not okay, addrTemp creates a tmp, emits
// tmp = n, and then returns tmp.
// The result of addrTemp MUST be assigned back to n, e.g.
-// n.Left = o.addrTemp(n.Left)
+//
+// n.Left = o.addrTemp(n.Left)
func (o *orderState) addrTemp(n ir.Node) ir.Node {
if n.Op() == ir.OLITERAL || n.Op() == ir.ONIL {
// TODO: expand this to all static composite literal nodes?
// Returns a bool that signals if a modification was made.
//
// For:
-// x = m[string(k)]
-// x = m[T1{... Tn{..., string(k), ...}]
+//
+// x = m[string(k)]
+// x = m[T1{... Tn{..., string(k), ...}}]
+//
// where k is []byte, T1 to Tn is a nesting of struct and array literals,
// the allocation of backing bytes for the string can be avoided
// by reusing the []byte backing array. These are special cases
}
// orderMakeSliceCopy matches the pattern:
-// m = OMAKESLICE([]T, x); OCOPY(m, s)
+//
+// m = OMAKESLICE([]T, x); OCOPY(m, s)
+//
// and rewrites it to:
-// m = OMAKESLICECOPY([]T, x, s); nil
+//
+// m = OMAKESLICECOPY([]T, x, s); nil
func orderMakeSliceCopy(s []ir.Node) {
if base.Flag.N != 0 || base.Flag.Cfg.Instrumenting {
return
// exprInPlace orders the side effects in *np and
// leaves them as the init list of the final *np.
// The result of exprInPlace MUST be assigned back to n, e.g.
-// n.Left = o.exprInPlace(n.Left)
+//
+// n.Left = o.exprInPlace(n.Left)
func (o *orderState) exprInPlace(n ir.Node) ir.Node {
var order orderState
order.free = o.free
// orderStmtInPlace orders the side effects of the single statement *np
// and replaces it with the resulting statement list.
// The result of orderStmtInPlace MUST be assigned back to n, e.g.
-// n.Left = orderStmtInPlace(n.Left)
+//
+// n.Left = orderStmtInPlace(n.Left)
+//
// free is a map that can be used to obtain temporary variables by type.
func orderStmtInPlace(n ir.Node, free map[string][]*ir.Name) ir.Node {
var order orderState
// Otherwise lhs == nil. (When lhs != nil it may be possible
// to avoid copying the result of the expression to a temporary.)
// The result of expr MUST be assigned back to n, e.g.
-// n.Left = o.expr(n.Left, lhs)
+//
+// n.Left = o.expr(n.Left, lhs)
func (o *orderState) expr(n, lhs ir.Node) ir.Node {
if n == nil {
return n
// as2func orders OAS2FUNC nodes. It creates temporaries to ensure left-to-right assignment.
// The caller should order the right-hand side of the assignment before calling order.as2func.
// It rewrites,
+//
// a, b, a = ...
+//
// as
+//
// tmp1, tmp2, tmp3 = ...
// a, b, a = tmp1, tmp2, tmp3
+//
// This is necessary to ensure left to right assignment order.
func (o *orderState) as2func(n *ir.AssignListStmt) {
results := n.Rhs[0].Type()
)
// The result of walkStmt MUST be assigned back to n, e.g.
-// n.Left = walkStmt(n.Left)
+//
+// n.Left = walkStmt(n.Left)
func walkStmt(n ir.Node) ir.Node {
if n == nil {
return n
}
// opregreg emits instructions for
-// dest := dest(To) op src(From)
+//
+// dest := dest(To) op src(From)
+//
// and also returns the created obj.Prog so it
// may be further adjusted (offset, scale, etc).
func opregreg(s *ssagen.State, op obj.As, dest, src int16) *obj.Prog {
// S1
// if cond {
// S2
-// }
+// }
// S3
//
// counters will be added before S1 and before S3. The block containing S2
// Run this shell script, but do it in Go so it can be run by "go test".
//
// replace the word LINE with the line number < testdata/test.go > testdata/test_line.go
-// go build -o testcover
-// testcover -mode=count -var=CoverTest -o ./testdata/test_cover.go testdata/test_line.go
+// go build -o testcover
+// testcover -mode=count -var=CoverTest -o ./testdata/test_cover.go testdata/test_line.go
// go run ./testdata/main.go ./testdata/test.go
func TestCover(t *testing.T) {
t.Parallel()
because cover deletes comments that are significant to cgo.
For usage information, please see:
+
go help testflag
go tool cover -help
*/
// commands (like "go tool dist test" in run.bash) can rely on bug fixes
// made since Go 1.4, but this function cannot. In particular, the uses
// of os/exec in this function cannot assume that
+//
// cmd.Env = append(os.Environ(), "X=Y")
+//
// sets $X to Y in the command's environment. That guarantee was
// added after Go 1.4, and in fact in Go 1.4 it was typically the opposite:
// if $X was already present in os.Environ(), most systems preferred
// Dist helps bootstrap, build, and test the Go distribution.
//
// Usage:
-// go tool dist [command]
+//
+// go tool dist [command]
//
// The commands are:
-// banner print installation banner
-// bootstrap rebuild everything
-// clean deletes all built files
-// env [-p] print environment (-p: include $PATH)
-// install [dir] install individual directory
-// list [-json] list all supported platforms
-// test [-h] run Go test(s)
-// version print Go version
+//
+// banner print installation banner
+// bootstrap rebuild everything
+// clean deletes all built files
+// env [-p] print environment (-p: include $PATH)
+// install [dir] install individual directory
+// list [-json] list all supported platforms
+// test [-h] run Go test(s)
+// version print Go version
package main
}
// Test the code to try multiple packages. Our test case is
+//
// go doc rand.Float64
+//
// This needs to find math/rand.Float64; however crypto/rand, which doesn't
// have the symbol, usually appears first in the directory listing.
func TestMultiplePackages(t *testing.T) {
}
// Test the code to look up packages when given two args. First test case is
+//
// go doc binary BigEndian
+//
// This needs to find encoding/binary.BigEndian, which means
// finding the package encoding/binary given only "binary".
// Second case is
+//
// go doc rand Float64
+//
// which again needs to find math/rand and not give up after crypto/rand,
// which has no such function.
func TestTwoArgLookup(t *testing.T) {
// Doc (usually run as go doc) accepts zero, one or two arguments.
//
// Zero arguments:
+//
// go doc
+//
// Show the documentation for the package in the current directory.
//
// One argument:
+//
// go doc <pkg>
// go doc <sym>[.<methodOrField>]
// go doc [<pkg>.]<sym>[.<methodOrField>]
// go doc [<pkg>.][<sym>.]<methodOrField>
+//
// The first item in this list that succeeds is the one whose documentation
// is printed. If there is a symbol but no package, the package in the current
// directory is chosen. However, if the argument begins with a capital
// letter it is always assumed to be a symbol in the current directory.
//
// Two arguments:
+//
// go doc <pkg> <sym>[.<methodOrField>]
//
// Show the documentation for the package, symbol, and method or field. The
}
// Old state:
-// type CFTypeRef unsafe.Pointer
+//
+// type CFTypeRef unsafe.Pointer
+//
// New state:
-// type CFTypeRef uintptr
+//
+// type CFTypeRef uintptr
+//
// and similar for other *Ref types.
// This fix finds nils initializing these types and replaces the nils with 0s.
func cftypefix(f *ast.File) bool {
the necessary changes to your programs.
Usage:
+
go tool fix [-r name,...] [path ...]
Without an explicit path, fix reads standard input and writes the
to see them, run go tool fix -help.
Fix does not make backup copies of the files that it edits.
-Instead, use a version control system's ``diff'' functionality to inspect
+Instead, use a version control system's “diff” functionality to inspect
the changes that fix makes before committing them.
*/
package main
}
// Old state:
-// type EGLDisplay unsafe.Pointer
+//
+// type EGLDisplay unsafe.Pointer
+//
// New state:
-// type EGLDisplay uintptr
+//
+// type EGLDisplay uintptr
+//
// This fix finds nils initializing these types and replaces the nils with 0s.
func eglfixDisp(f *ast.File) bool {
return typefix(f, func(s string) bool {
}
// Old state:
-// type EGLConfig unsafe.Pointer
+//
+// type EGLConfig unsafe.Pointer
+//
// New state:
-// type EGLConfig uintptr
+//
+// type EGLConfig uintptr
+//
// This fix finds nils initializing these types and replaces the nils with 0s.
func eglfixConfig(f *ast.File) bool {
return typefix(f, func(s string) bool {
}
// Old state:
-// type jobject *_jobject
+//
+// type jobject *_jobject
+//
// New state:
-// type jobject uintptr
+//
+// type jobject uintptr
+//
// and similar for subtypes of jobject.
// This fix finds nils initializing these types and replaces the nils with 0s.
func jnifix(f *ast.File) bool {
// TestGenerateCommandShortHand - similar to TestGenerateCommandParse,
// except:
-// 1. if the result starts with -command, record that shorthand
-// before moving on to the next test.
-// 2. If a source line number is specified, set that in the parser
-// before executing the test. i.e., execute the split as if it
-// processing that source line.
+//  1. If the result starts with -command, record that shorthand
+//     before moving on to the next test.
+//  2. If a source line number is specified, set that in the parser
+//     before executing the test; i.e., execute the split as if it
+//     were processing that source line.
func TestGenerateCommandShorthand(t *testing.T) {
g := &Generator{
r: nil, // Unused here.
// TestGenerateCommandShortHand - similar to TestGenerateCommandParse,
// except:
-// 1. if the result starts with -command, record that shorthand
-// before moving on to the next test.
-// 2. If a source line number is specified, set that in the parser
-// before executing the test. i.e., execute the split as if it
-// processing that source line.
+//  1. If the result starts with -command, record that shorthand
+//     before moving on to the next test.
+//  2. If a source line number is specified, set that in the parser
+//     before executing the test; i.e., execute the split as if it
+//     were processing that source line.
func TestGenerateCommandShortHand2(t *testing.T) {
g := &Generator{
r: nil, // Unused here.
}
// eval is like
+//
// x.Eval(func(tag string) bool { return matchTag(tag, tags) })
+//
// except that it implements the special case for tags["*"] meaning
// all tags are both true and false at the same time.
func eval(x constraint.Expr, tags map[string]bool, prefer bool) bool {
// suffix which does not match the current system.
// The recognized name formats are:
//
-// name_$(GOOS).*
-// name_$(GOARCH).*
-// name_$(GOOS)_$(GOARCH).*
-// name_$(GOOS)_test.*
-// name_$(GOARCH)_test.*
-// name_$(GOOS)_$(GOARCH)_test.*
+// name_$(GOOS).*
+// name_$(GOARCH).*
+// name_$(GOOS)_$(GOARCH).*
+// name_$(GOOS)_test.*
+// name_$(GOARCH)_test.*
+// name_$(GOOS)_$(GOARCH)_test.*
//
// Exceptions:
-// if GOOS=android, then files with GOOS=linux are also matched.
-// if GOOS=illumos, then files with GOOS=solaris are also matched.
-// if GOOS=ios, then files with GOOS=darwin are also matched.
+//
+// if GOOS=android, then files with GOOS=linux are also matched.
+// if GOOS=illumos, then files with GOOS=solaris are also matched.
+// if GOOS=ios, then files with GOOS=darwin are also matched.
//
// If tags["*"] is true, then MatchFile will consider all possible
// GOOS and GOARCH to be available and will consequently
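The name-suffix rules above can be sketched in a few lines. This is a simplified illustration, not the real go/build logic (it uses abbreviated GOOS/GOARCH lists and ignores the android/illumos/ios exceptions):

```go
package main

import (
	"fmt"
	"strings"
)

// goodOSArchFile reports whether a file name's _GOOS/_GOARCH suffix matches
// the target. It strips the extension and any _test suffix, then checks the
// trailing underscore-separated components.
func goodOSArchFile(name, goos, goarch string) bool {
	if i := strings.LastIndex(name, "."); i >= 0 {
		name = name[:i]
	}
	name = strings.TrimSuffix(name, "_test")
	parts := strings.Split(name, "_")
	n := len(parts)
	known := func(s string) bool { // abbreviated lists for illustration
		return s == "linux" || s == "darwin" || s == "windows" ||
			s == "amd64" || s == "arm64"
	}
	switch {
	case n >= 3 && known(parts[n-2]) && known(parts[n-1]):
		return parts[n-2] == goos && parts[n-1] == goarch
	case n >= 2 && known(parts[n-1]):
		return parts[n-1] == goos || parts[n-1] == goarch
	}
	return true // no constraining suffix
}

func main() {
	fmt.Println(goodOSArchFile("f_linux_amd64.go", "linux", "amd64")) // true
	fmt.Println(goodOSArchFile("f_windows.go", "linux", "amd64"))     // false
}
```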
}
// TestPackagesAndErrors returns three packages:
-// - pmain, the package main corresponding to the test binary (running tests in ptest and pxtest).
-// - ptest, the package p compiled with added "package p" test files.
-// - pxtest, the result of compiling any "package p_test" (external) test files.
+// - pmain, the package main corresponding to the test binary (running tests in ptest and pxtest).
+// - ptest, the package p compiled with added "package p" test files.
+// - pxtest, the result of compiling any "package p_test" (external) test files.
//
// If the package has no "package p_test" test files, pxtest will be nil.
// If the non-test compilation of package p can be reused
// Opening an exclusive-use file returns an error.
// The expected error strings are:
//
-// - "open/create -- file is locked" (cwfs, kfs)
-// - "exclusive lock" (fossil)
-// - "exclusive use file already open" (ramfs)
+// - "open/create -- file is locked" (cwfs, kfs)
+// - "exclusive lock" (fossil)
+// - "exclusive use file already open" (ramfs)
var lockedErrStrings = [...]string{
"file is locked",
"exclusive lock",
}
// queryWildcard adds a candidate set to q for each module for which:
-// - some version of the module is already in the build list, and
-// - that module exists at some version matching q.version, and
-// - either the module path itself matches q.pattern, or some package within
-// the module at q.version matches q.pattern.
+// - some version of the module is already in the build list, and
+// - that module exists at some version matching q.version, and
+// - either the module path itself matches q.pattern, or some package within
+// the module at q.version matches q.pattern.
func (r *resolver) queryWildcard(ctx context.Context, q *query) {
// For wildcard patterns, modload.QueryPattern only identifies modules
// matching the prefix of the path before the wildcard. However, the build
// invariants of the go.mod file needed to support graph pruning for the given
// packages:
//
-// 1. For each package marked with pkgInAll, the module path that provided that
-// package is included as a root.
-// 2. For all packages, the module that provided that package either remains
-// selected at the same version or is upgraded by the dependencies of a
-// root.
+// 1. For each package marked with pkgInAll, the module path that provided that
+// package is included as a root.
+// 2. For all packages, the module that provided that package either remains
+// selected at the same version or is upgraded by the dependencies of a
+// root.
//
// If any module that provided a package has been upgraded above its previous
// version, the caller may need to reload and recompute the package graph.
// updatePrunedRoots returns a set of root requirements that maintains the
// invariants of the go.mod file needed to support graph pruning:
//
-// 1. The selected version of the module providing each package marked with
-// either pkgInAll or pkgIsRoot is included as a root.
-// Note that certain root patterns (such as '...') may explode the root set
-// to contain every module that provides any package imported (or merely
-// required) by any other module.
-// 2. Each root appears only once, at the selected version of its path
-// (if rs.graph is non-nil) or at the highest version otherwise present as a
-// root (otherwise).
-// 3. Every module path that appears as a root in rs remains a root.
-// 4. Every version in add is selected at its given version unless upgraded by
-// (the dependencies of) an existing root or another module in add.
+// 1. The selected version of the module providing each package marked with
+// either pkgInAll or pkgIsRoot is included as a root.
+// Note that certain root patterns (such as '...') may explode the root set
+// to contain every module that provides any package imported (or merely
+// required) by any other module.
+// 2. Each root appears only once, at the selected version of its path
+// (if rs.graph is non-nil) or at the highest version otherwise present as a
+// root (otherwise).
+// 3. Every module path that appears as a root in rs remains a root.
+// 4. Every version in add is selected at its given version unless upgraded by
+// (the dependencies of) an existing root or another module in add.
//
// The packages in pkgs are assumed to have been loaded from either the roots of
// rs or the modules selected in the graph of rs.
// The above invariants together imply the graph-pruning invariants for the
// go.mod file:
//
-// 1. (The import invariant.) Every module that provides a package transitively
-// imported by any package or test in the main module is included as a root.
-// This follows by induction from (1) and (3) above. Transitively-imported
-// packages loaded during this invocation are marked with pkgInAll (1),
-// and by hypothesis any transitively-imported packages loaded in previous
-// invocations were already roots in rs (3).
+// 1. (The import invariant.) Every module that provides a package transitively
+// imported by any package or test in the main module is included as a root.
+// This follows by induction from (1) and (3) above. Transitively-imported
+// packages loaded during this invocation are marked with pkgInAll (1),
+// and by hypothesis any transitively-imported packages loaded in previous
+// invocations were already roots in rs (3).
//
-// 2. (The argument invariant.) Every module that provides a package matching
-// an explicit package pattern is included as a root. This follows directly
-// from (1): packages matching explicit package patterns are marked with
-// pkgIsRoot.
+// 2. (The argument invariant.) Every module that provides a package matching
+// an explicit package pattern is included as a root. This follows directly
+// from (1): packages matching explicit package patterns are marked with
+// pkgIsRoot.
//
-// 3. (The completeness invariant.) Every module that contributed any package
-// to the build is required by either the main module or one of the modules
-// it requires explicitly. This invariant is left up to the caller, who must
-// not load packages from outside the module graph but may add roots to the
-// graph, but is facilited by (3). If the caller adds roots to the graph in
-// order to resolve missing packages, then updatePrunedRoots will retain them,
-// the selected versions of those roots cannot regress, and they will
-// eventually be written back to the main module's go.mod file.
+// 3. (The completeness invariant.) Every module that contributed any package
+// to the build is required by either the main module or one of the modules
+// it requires explicitly. This invariant is left up to the caller, who must
+// not load packages from outside the module graph but may add roots to the
+//     graph, but is facilitated by (3). If the caller adds roots to the graph in
+// order to resolve missing packages, then updatePrunedRoots will retain them,
+// the selected versions of those roots cannot regress, and they will
+// eventually be written back to the main module's go.mod file.
//
// (See https://golang.org/design/36460-lazy-module-loading#invariants for more
// detail.)
//
// The roots are updated such that:
//
-// 1. The selected version of every module path in direct is included as a root
-// (if it is not "none").
-// 2. Each root is the selected version of its path. (We say that such a root
-// set is “consistent”.)
-// 3. Every version selected in the graph of rs remains selected unless upgraded
-// by a dependency in add.
-// 4. Every version in add is selected at its given version unless upgraded by
-// (the dependencies of) an existing root or another module in add.
+// 1. The selected version of every module path in direct is included as a root
+// (if it is not "none").
+// 2. Each root is the selected version of its path. (We say that such a root
+// set is “consistent”.)
+// 3. Every version selected in the graph of rs remains selected unless upgraded
+// by a dependency in add.
+// 4. Every version in add is selected at its given version unless upgraded by
+// (the dependencies of) an existing root or another module in add.
func updateUnprunedRoots(ctx context.Context, direct map[string]bool, rs *Requirements, add []module.Version) (*Requirements, error) {
mg, err := rs.Graph(ctx)
if err != nil {
// editRequirements returns an edited version of rs such that:
//
-// 1. Each module version in mustSelect is selected.
+// 1. Each module version in mustSelect is selected.
//
-// 2. Each module version in tryUpgrade is upgraded toward the indicated
-// version as far as can be done without violating (1).
+// 2. Each module version in tryUpgrade is upgraded toward the indicated
+// version as far as can be done without violating (1).
//
-// 3. Each module version in rs.rootModules (or rs.graph, if rs is unpruned)
-// is downgraded from its original version only to the extent needed to
-// satisfy (1), or upgraded only to the extent needed to satisfy (1) and
-// (2).
+// 3. Each module version in rs.rootModules (or rs.graph, if rs is unpruned)
+// is downgraded from its original version only to the extent needed to
+// satisfy (1), or upgraded only to the extent needed to satisfy (1) and
+// (2).
//
-// 4. No module is upgraded above the maximum version of its path found in the
-// dependency graph of rs, the combined dependency graph of the versions in
-// mustSelect, or the dependencies of each individual module version in
-// tryUpgrade.
+// 4. No module is upgraded above the maximum version of its path found in the
+// dependency graph of rs, the combined dependency graph of the versions in
+// mustSelect, or the dependencies of each individual module version in
+// tryUpgrade.
//
// Generally, the module versions in mustSelect are due to the module or a
// package within the module matching an explicit command line argument to 'go
//
// In particular:
//
-// - Modules that provide packages directly imported from the main module are
-// marked as direct, and are promoted to explicit roots. If a needed root
-// cannot be promoted due to -mod=readonly or -mod=vendor, the importing
-// package is marked with an error.
+// - Modules that provide packages directly imported from the main module are
+// marked as direct, and are promoted to explicit roots. If a needed root
+// cannot be promoted due to -mod=readonly or -mod=vendor, the importing
+// package is marked with an error.
//
-// - If ld scanned the "all" pattern independent of build constraints, it is
-// guaranteed to have seen every direct import. Module dependencies that did
-// not provide any directly-imported package are then marked as indirect.
+// - If ld scanned the "all" pattern independent of build constraints, it is
+// guaranteed to have seen every direct import. Module dependencies that did
+// not provide any directly-imported package are then marked as indirect.
//
-// - Root dependencies are updated to their selected versions.
+// - Root dependencies are updated to their selected versions.
//
// The "changed" return value reports whether the update changed the selected
// version of any module that either provided a loaded package or may now
// The version must take one of the following forms:
//
// - the literal string "latest", denoting the latest available, allowed
-// tagged version, with non-prereleases preferred over prereleases.
-// If there are no tagged versions in the repo, latest returns the most
-// recent commit.
+//
+// tagged version, with non-prereleases preferred over prereleases.
+// If there are no tagged versions in the repo, latest returns the most
+// recent commit.
+//
// - the literal string "upgrade", equivalent to "latest" except that if
-// current is a newer version, current will be returned (see below).
+//
+// current is a newer version, current will be returned (see below).
+//
// - the literal string "patch", denoting the latest available tagged version
-// with the same major and minor number as current (see below).
+//
+// with the same major and minor number as current (see below).
+//
// - v1, denoting the latest available tagged version v1.x.x.
// - v1.2, denoting the latest available tagged version v1.2.x.
// - v1.2.3, a semantic version string denoting that tagged version.
// - <v1.2.3, <=v1.2.3, >v1.2.3, >=v1.2.3,
-// denoting the version closest to the target and satisfying the given operator,
-// with non-prereleases preferred over prereleases.
+//
+// denoting the version closest to the target and satisfying the given operator,
+// with non-prereleases preferred over prereleases.
+//
// - a repository commit identifier or tag, denoting that commit.
//
// current denotes the currently-selected version of the module; it may be
// filterVersions classifies versions into releases and pre-releases, filtering
// out:
-// 1. versions that do not satisfy the 'allowed' predicate, and
-// 2. "+incompatible" versions, if a compatible one satisfies the predicate
-// and the incompatible version is not preferred.
+// 1. versions that do not satisfy the 'allowed' predicate, and
+// 2. "+incompatible" versions, if a compatible one satisfies the predicate
+// and the incompatible version is not preferred.
//
// If the allowed predicate returns an error not equivalent to ErrDisallowed,
// filterVersions returns that error.
// in this package attempt to mitigate.
//
// Errors considered ephemeral include:
-// - syscall.ERROR_ACCESS_DENIED
-// - syscall.ERROR_FILE_NOT_FOUND
-// - internal/syscall/windows.ERROR_SHARING_VIOLATION
+// - syscall.ERROR_ACCESS_DENIED
+// - syscall.ERROR_FILE_NOT_FOUND
+// - internal/syscall/windows.ERROR_SHARING_VIOLATION
//
// This set may be expanded in the future; programs must not rely on the
// non-ephemerality of any given error.
}
// ToFold returns a string with the property that
+//
// strings.EqualFold(s, t) iff ToFold(s) == ToFold(t)
+//
// This lets us test a large set of strings for fold-equivalent
// duplicates without making a quadratic number of calls
// to EqualFold. Note that strings.ToUpper and strings.ToLower
// pkg.test's arguments.
// We allow known flags both before and after the package name list,
// to allow both
+//
// go test fmt -custom-flag-for-fmt-test
// go test -x math
func testFlags(args []string) (packageNames, passToTest []string) {
// libname returns the filename to use for the shared library when using
// -buildmode=shared. The rules we use are:
// Use arguments for special 'meta' packages:
+//
// std --> libstd.so
// std cmd --> libstd,cmd.so
+//
// A single non-meta argument with trailing "/..." is special cased:
+//
// foo/... --> libfoo.so
// (A relative path like "./..." expands the "." first)
+//
// Use import paths for other cases, changing '/' to '-':
+//
// somelib --> libsubdir-somelib.so
// ./ or ../ --> libsubdir-somelib.so
// gopkg.in/tomb.v2 -> libgopkg.in-tomb.v2.so
// a/... b/... ---> liba/c,b/d.so - all matching import paths
+//
// Name parts are joined with ','.
func libname(args []string, pkgs []*load.Package) (string, error) {
var libname string
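A hypothetical reduction of the naming rules above (sketchLibname and its meta-package handling are illustrative only; the real libname also expands "/..." patterns and relative paths):

```go
package main

import (
	"fmt"
	"strings"
)

// sketchLibname joins the argument names with ',' and replaces '/' with '-'
// in import paths, per the rules quoted in the comment above.
func sketchLibname(args []string) string {
	parts := make([]string, 0, len(args))
	for _, a := range args {
		if a == "std" || a == "cmd" {
			parts = append(parts, a) // meta packages keep their names
			continue
		}
		parts = append(parts, strings.ReplaceAll(a, "/", "-"))
	}
	return "lib" + strings.Join(parts, ",") + ".so"
}

func main() {
	fmt.Println(sketchLibname([]string{"std", "cmd"}))       // libstd,cmd.so
	fmt.Println(sketchLibname([]string{"gopkg.in/tomb.v2"})) // libgopkg.in-tomb.v2.so
}
```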
}
// cover runs, in effect,
+//
// go tool cover -mode=b.coverMode -var="varName" -o dst.go src.go
func (b *Builder) cover(a *Action, dst, src string, varName string) error {
return b.run(a, a.Objdir, "cover "+a.Package.ImportPath, nil,
By default, gofmt prints the reformatted sources to standard output.
Usage:
+
gofmt [flags] [path ...]
The flags are:
+
-d
Do not print reformatted sources to standard output.
If a file's formatting is different than gofmt's, print diffs
the original file is restored from an automatic backup.
Debugging support:
+
-cpuprofile filename
Write cpu profile to the specified file.
-
The rewrite rule specified with the -r flag must be a string of the form:
pattern -> replacement
and trailing spaces, so that individual sections of a Go program can be
formatted by piping them through gofmt.
-Examples
+# Examples
To check files for unnecessary parentheses:
gofmt -r 'α[β:len(α)] -> α[β:]' -w $GOROOT/src
-The simplify command
+# The simplify command
When invoked with -s gofmt will make the following source transformations where possible.
// because some operating systems place a limit on the number of
// distinct mapped regions per process. As of this writing:
//
-// Darwin unlimited
-// DragonFly 1000000 (vm.max_proc_mmap)
-// FreeBSD unlimited
-// Linux 65530 (vm.max_map_count) // TODO: query /proc/sys/vm/max_map_count?
-// NetBSD unlimited
-// OpenBSD unlimited
+// Darwin unlimited
+// DragonFly 1000000 (vm.max_proc_mmap)
+// FreeBSD unlimited
+// Linux 65530 (vm.max_map_count) // TODO: query /proc/sys/vm/max_map_count?
+// NetBSD unlimited
+// OpenBSD unlimited
var mmapLimit int32 = 1<<31 - 1
func init() {
// Package gcprog implements an encoder for packed GC pointer bitmaps,
// known as GC programs.
//
-// Program Format
+// # Program Format
//
// The GC program encodes a sequence of 0 and 1 bits indicating scalar or pointer words in an object.
// The encoding is a simple Lempel-Ziv program, with codes to emit literal bits and to repeat the
//
// The numbers n and c, when they follow a code, are encoded as varints
// using the same encoding as encoding/binary's Uvarint.
-//
package gcprog
import (
// Symbol definition.
//
// Serialized format:
-// Sym struct {
-// Name string
-// ABI uint16
-// Type uint8
-// Flag uint8
-// Flag2 uint8
-// Siz uint32
-// Align uint32
-// }
+//
+// Sym struct {
+// Name string
+// ABI uint16
+// Type uint8
+// Flag uint8
+// Flag2 uint8
+// Siz uint32
+// Align uint32
+// }
type Sym [SymSize]byte
const SymSize = stringRefSize + 2 + 1 + 1 + 1 + 4 + 4
// Relocation.
//
// Serialized format:
-// Reloc struct {
-// Off int32
-// Siz uint8
-// Type uint16
-// Add int64
-// Sym SymRef
-// }
+//
+// Reloc struct {
+// Off int32
+// Siz uint8
+// Type uint16
+// Add int64
+// Sym SymRef
+// }
type Reloc [RelocSize]byte
const RelocSize = 4 + 1 + 2 + 8 + 8
// Aux symbol info.
//
// Serialized format:
-// Aux struct {
-// Type uint8
-// Sym SymRef
-// }
+//
+// Aux struct {
+// Type uint8
+// Sym SymRef
+// }
type Aux [AuxSize]byte
const AuxSize = 1 + 8
// Referenced symbol flags.
//
// Serialized format:
-// RefFlags struct {
-// Sym symRef
-// Flag uint8
-// Flag2 uint8
-// }
+//
+// RefFlags struct {
+// Sym symRef
+// Flag uint8
+// Flag2 uint8
+// }
type RefFlags [RefFlagsSize]byte
const RefFlagsSize = 8 + 1 + 1
// Referenced symbol name.
//
// Serialized format:
-// RefName struct {
-// Sym symRef
-// Name string
-// }
+//
+// RefName struct {
+// Sym symRef
+// Name string
+// }
type RefName [RefNameSize]byte
const RefNameSize = 8 + stringRefSize
Package arm64 implements an ARM64 assembler. Go assembly syntax is different from GNU ARM64
syntax, but we can still follow the general rules to map between them.
-Instructions mnemonics mapping rules
+# Instruction mnemonics mapping rules
1. Most instructions use width suffixes of instruction names to indicate operand width rather than
using different register names.
Examples:
- ADC R24, R14, R12 <=> adc x12, x24
- ADDW R26->24, R21, R15 <=> add w15, w21, w26, asr #24
- FCMPS F2, F3 <=> fcmp s3, s2
- FCMPD F2, F3 <=> fcmp d3, d2
- FCVTDH F2, F3 <=> fcvt h3, d2
+
+ ADC R24, R14, R12 <=> adc x12, x24
+ ADDW R26->24, R21, R15 <=> add w15, w21, w26, asr #24
+ FCMPS F2, F3 <=> fcmp s3, s2
+ FCMPD F2, F3 <=> fcmp d3, d2
+ FCVTDH F2, F3 <=> fcvt h3, d2
2. Go uses .P and .W suffixes to indicate post-increment and pre-increment.
Examples:
- MOVD.P -8(R10), R8 <=> ldr x8, [x10],#-8
- MOVB.W 16(R16), R10 <=> ldrsb x10, [x16,#16]!
- MOVBU.W 16(R16), R10 <=> ldrb x10, [x16,#16]!
+
+ MOVD.P -8(R10), R8 <=> ldr x8, [x10],#-8
+ MOVB.W 16(R16), R10 <=> ldrsb x10, [x16,#16]!
+ MOVBU.W 16(R16), R10 <=> ldrb x10, [x16,#16]!
3. Go uses a series of MOV instructions as load and store.
instructions and floating-point(scalar) instructions.
Examples:
- VADD V5.H8, V18.H8, V9.H8 <=> add v9.8h, v18.8h, v5.8h
- VLD1.P (R6)(R11), [V31.D1] <=> ld1 {v31.1d}, [x6], x11
- VFMLA V29.S2, V20.S2, V14.S2 <=> fmla v14.2s, v20.2s, v29.2s
- AESD V22.B16, V19.B16 <=> aesd v19.16b, v22.16b
- SCVTFWS R3, F16 <=> scvtf s17, w6
+
+ VADD V5.H8, V18.H8, V9.H8 <=> add v9.8h, v18.8h, v5.8h
+ VLD1.P (R6)(R11), [V31.D1] <=> ld1 {v31.1d}, [x6], x11
+ VFMLA V29.S2, V20.S2, V14.S2 <=> fmla v14.2s, v20.2s, v29.2s
+ AESD V22.B16, V19.B16 <=> aesd v19.16b, v22.16b
+	SCVTFWS R3, F16 <=> scvtf s16, w3
6. Align directive
must be a power of 2 and in the range of [8, 2048].
Examples:
- PCALIGN $16
- MOVD $2, R0 // This instruction is aligned with 16 bytes.
- PCALIGN $1024
- MOVD $3, R1 // This instruction is aligned with 1024 bytes.
+
+ PCALIGN $16
+ MOVD $2, R0 // This instruction is aligned with 16 bytes.
+ PCALIGN $1024
+ MOVD $3, R1 // This instruction is aligned with 1024 bytes.
PCALIGN also changes the function alignment. If a function has one or more PCALIGN directives,
its address will be aligned to the same or coarser boundary, which is the maximum of all the
In the following example, the function Add is aligned with 128 bytes.
Examples:
- TEXT ·Add(SB),$40-16
- MOVD $2, R0
- PCALIGN $32
- MOVD $4, R1
- PCALIGN $128
- MOVD $8, R2
- RET
+
+ TEXT ·Add(SB),$40-16
+ MOVD $2, R0
+ PCALIGN $32
+ MOVD $4, R1
+ PCALIGN $128
+ MOVD $8, R2
+ RET
On arm64, functions in Go are aligned to 16 bytes by default; we can also use PCALIGN to set the
function alignment. The functions that need to be aligned are preferably using NOFRAME and NOSPLIT
In the following example, PCALIGN at the entry of the function Add will align its address to 2048 bytes.
Examples:
- TEXT ·Add(SB),NOSPLIT|NOFRAME,$0
- PCALIGN $2048
- MOVD $1, R0
- MOVD $1, R1
- RET
+
+ TEXT ·Add(SB),NOSPLIT|NOFRAME,$0
+ PCALIGN $2048
+ MOVD $1, R0
+ MOVD $1, R1
+ RET
7. Move large constants to vector registers.
And for a 128-bit integer, it takes two 64-bit operands, for the low and high parts separately.
Examples:
- VMOVS $0x11223344, V0
- VMOVD $0x1122334455667788, V1
- VMOVQ $0x1122334455667788, $0x99aabbccddeeff00, V2 // V2=0x99aabbccddeeff001122334455667788
+
+ VMOVS $0x11223344, V0
+ VMOVD $0x1122334455667788, V1
+ VMOVQ $0x1122334455667788, $0x99aabbccddeeff00, V2 // V2=0x99aabbccddeeff001122334455667788
8. Move an optionally-shifted 16-bit immediate value to a register.
The current Go assembler does not accept zero shifts, such as "op $0, Rd" and "op $(0<<(16|32|48)), Rd" instructions.
Examples:
- MOVK $(10<<32), R20 <=> movk x20, #10, lsl #32
- MOVZW $(20<<16), R8 <=> movz w8, #20, lsl #16
- MOVK $(0<<16), R10 will be reported as an error by the assembler.
+
+ MOVK $(10<<32), R20 <=> movk x20, #10, lsl #32
+ MOVZW $(20<<16), R8 <=> movz w8, #20, lsl #16
+ MOVK $(0<<16), R10 will be reported as an error by the assembler.
Special Cases.
HINT $0.
Examples:
- VMOV V13.B[1], R20 <=> mov x20, v13.b[1]
- VMOV V13.H[1], R20 <=> mov w20, v13.h[1]
- JMP (R3) <=> br x3
- CALL (R17) <=> blr x17
- LDAXRB (R19), R16 <=> ldaxrb w16, [x19]
- NOOP <=> nop
+ VMOV V13.B[1], R20 <=> mov x20, v13.b[1]
+ VMOV V13.H[1], R20 <=> mov w20, v13.h[1]
+ JMP (R3) <=> br x3
+ CALL (R17) <=> blr x17
+ LDAXRB (R19), R16 <=> ldaxrb w16, [x19]
+ NOOP <=> nop
-Register mapping rules
+# Register mapping rules
1. All basic register names are written as Rn.
3. Bn, Hn, Dn, Sn and Qn instructions are written as Fn in floating-point instructions and as Vn
in SIMD instructions.
-
-Argument mapping rules
+# Argument mapping rules
1. The operands appear in left-to-right assignment order.
Go reverses the arguments of most instructions.
Examples:
- ADD R11.SXTB<<1, RSP, R25 <=> add x25, sp, w11, sxtb #1
- VADD V16, V19, V14 <=> add d14, d19, d16
+
+ ADD R11.SXTB<<1, RSP, R25 <=> add x25, sp, w11, sxtb #1
+ VADD V16, V19, V14 <=> add d14, d19, d16
Special Cases.
such as str, stur, strb, sturb, strh, sturh, stlr, stlrb, stlrh, st1.
Examples:
- MOVD R29, 384(R19) <=> str x29, [x19,#384]
- MOVB.P R30, 30(R4) <=> strb w30, [x4],#30
- STLRH R21, (R19) <=> stlrh w21, [x19]
+
+ MOVD R29, 384(R19) <=> str x29, [x19,#384]
+ MOVB.P R30, 30(R4) <=> strb w30, [x4],#30
+ STLRH R21, (R19) <=> stlrh w21, [x19]
(2) MADD, MADDW, MSUB, MSUBW, SMADDL, SMSUBL, UMADDL, UMSUBL <Rm>, <Ra>, <Rn>, <Rd>
Examples:
- MADD R2, R30, R22, R6 <=> madd x6, x22, x2, x30
- SMSUBL R10, R3, R17, R27 <=> smsubl x27, w17, w10, x3
+
+ MADD R2, R30, R22, R6 <=> madd x6, x22, x2, x30
+ SMSUBL R10, R3, R17, R27 <=> smsubl x27, w17, w10, x3
(3) FMADDD, FMADDS, FMSUBD, FMSUBS, FNMADDD, FNMADDS, FNMSUBD, FNMSUBS <Fm>, <Fa>, <Fn>, <Fd>
Examples:
- FMADDD F30, F20, F3, F29 <=> fmadd d29, d3, d30, d20
- FNMSUBS F7, F25, F7, F22 <=> fnmsub s22, s7, s7, s25
+
+ FMADDD F30, F20, F3, F29 <=> fmadd d29, d3, d30, d20
+ FNMSUBS F7, F25, F7, F22 <=> fnmsub s22, s7, s7, s25
(4) BFI, BFXIL, SBFIZ, SBFX, UBFIZ, UBFX $<lsb>, <Rn>, $<width>, <Rd>
Examples:
- BFIW $16, R20, $6, R0 <=> bfi w0, w20, #16, #6
- UBFIZ $34, R26, $5, R20 <=> ubfiz x20, x26, #34, #5
+
+ BFIW $16, R20, $6, R0 <=> bfi w0, w20, #16, #6
+ UBFIZ $34, R26, $5, R20 <=> ubfiz x20, x26, #34, #5
(5) FCCMPD, FCCMPS, FCCMPED, FCCMPES <cond>, Fm. Fn, $<nzcv>
Examples:
- FCCMPD AL, F8, F26, $0 <=> fccmp d26, d8, #0x0, al
- FCCMPS VS, F29, F4, $4 <=> fccmp s4, s29, #0x4, vs
- FCCMPED LE, F20, F5, $13 <=> fccmpe d5, d20, #0xd, le
- FCCMPES NE, F26, F10, $0 <=> fccmpe s10, s26, #0x0, ne
+
+ FCCMPD AL, F8, F26, $0 <=> fccmp d26, d8, #0x0, al
+ FCCMPS VS, F29, F4, $4 <=> fccmp s4, s29, #0x4, vs
+ FCCMPED LE, F20, F5, $13 <=> fccmpe d5, d20, #0xd, le
+ FCCMPES NE, F26, F10, $0 <=> fccmpe s10, s26, #0x0, ne
(6) CCMN, CCMNW, CCMP, CCMPW <cond>, <Rn>, $<imm>, $<nzcv>
Examples:
- CCMP MI, R22, $12, $13 <=> ccmp x22, #0xc, #0xd, mi
- CCMNW AL, R1, $11, $8 <=> ccmn w1, #0xb, #0x8, al
+
+ CCMP MI, R22, $12, $13 <=> ccmp x22, #0xc, #0xd, mi
+ CCMNW AL, R1, $11, $8 <=> ccmn w1, #0xb, #0x8, al
(7) CCMN, CCMNW, CCMP, CCMPW <cond>, <Rn>, <Rm>, $<nzcv>
Examples:
- CCMN VS, R13, R22, $10 <=> ccmn x13, x22, #0xa, vs
- CCMPW HS, R19, R14, $11 <=> ccmp w19, w14, #0xb, cs
+
+ CCMN VS, R13, R22, $10 <=> ccmn x13, x22, #0xa, vs
+ CCMPW HS, R19, R14, $11 <=> ccmp w19, w14, #0xb, cs
(9) CSEL, CSELW, CSNEG, CSNEGW, CSINC, CSINCW <cond>, <Rn>, <Rm>, <Rd> ;
FCSELD, FCSELS <cond>, <Fn>, <Fm>, <Fd>
Examples:
- CSEL GT, R0, R19, R1 <=> csel x1, x0, x19, gt
- CSNEGW GT, R7, R17, R8 <=> csneg w8, w7, w17, gt
- FCSELD EQ, F15, F18, F16 <=> fcsel d16, d15, d18, eq
-(10) TBNZ, TBZ $<imm>, <Rt>, <label>
+ CSEL GT, R0, R19, R1 <=> csel x1, x0, x19, gt
+ CSNEGW GT, R7, R17, R8 <=> csneg w8, w7, w17, gt
+ FCSELD EQ, F15, F18, F16 <=> fcsel d16, d15, d18, eq
+(10) TBNZ, TBZ $<imm>, <Rt>, <label>
(11) STLXR, STLXRW, STXR, STXRW, STLXRB, STLXRH, STXRB, STXRH <Rf>, (<Rn|RSP>), <Rs>
Examples:
- STLXR ZR, (R15), R16 <=> stlxr w16, xzr, [x15]
- STXRB R9, (R21), R19 <=> stxrb w19, w9, [x21]
+
+ STLXR ZR, (R15), R16 <=> stlxr w16, xzr, [x15]
+ STXRB R9, (R21), R19 <=> stxrb w19, w9, [x21]
(12) STLXP, STLXPW, STXP, STXPW (<Rf1>, <Rf2>), (<Rn|RSP>), <Rs>
Examples:
- STLXP (R17, R19), (R4), R5 <=> stlxp w5, x17, x19, [x4]
- STXPW (R30, R25), (R22), R13 <=> stxp w13, w30, w25, [x22]
+
+ STLXP (R17, R19), (R4), R5 <=> stlxp w5, x17, x19, [x4]
+ STXPW (R30, R25), (R22), R13 <=> stxp w13, w30, w25, [x22]
2. Expressions for special arguments.
Optionally-shifted immediate.
Examples:
- ADD $(3151<<12), R14, R20 <=> add x20, x14, #0xc4f, lsl #12
- ADDW $1864, R25, R6 <=> add w6, w25, #0x748
+
+ ADD $(3151<<12), R14, R20 <=> add x20, x14, #0xc4f, lsl #12
+ ADDW $1864, R25, R6 <=> add w6, w25, #0x748
Optionally-shifted registers are written as <Rm>{<shift><amount>}.
The <shift> can be <<(lsl), >>(lsr), ->(asr), @>(ror).
Examples:
- ADD R19>>30, R10, R24 <=> add x24, x10, x19, lsr #30
- ADDW R26->24, R21, R15 <=> add w15, w21, w26, asr #24
+
+ ADD R19>>30, R10, R24 <=> add x24, x10, x19, lsr #30
+ ADDW R26->24, R21, R15 <=> add w15, w21, w26, asr #24
Extended registers are written as <Rm>{.<extend>{<<<amount>}}.
<extend> can be UXTB, UXTH, UXTW, UXTX, SXTB, SXTH, SXTW or SXTX.
Examples:
- ADDS R19.UXTB<<4, R9, R26 <=> adds x26, x9, w19, uxtb #4
- ADDSW R14.SXTX, R14, R6 <=> adds w6, w14, w14, sxtx
+
+ ADDS R19.UXTB<<4, R9, R26 <=> adds x26, x9, w19, uxtb #4
+ ADDSW R14.SXTX, R14, R6 <=> adds w6, w14, w14, sxtx
Memory references: [<Xn|SP>{,#0}] is written as (Rn|RSP); a base register and an immediate
offset is written as imm(Rn|RSP); a base register and an offset register is written as (Rn|RSP)(Rm).
Examples:
- LDAR (R22), R9 <=> ldar x9, [x22]
- LDP 28(R17), (R15, R23) <=> ldp x15, x23, [x17,#28]
- MOVWU (R4)(R12<<2), R8 <=> ldr w8, [x4, x12, lsl #2]
- MOVD (R7)(R11.UXTW<<3), R25 <=> ldr x25, [x7,w11,uxtw #3]
- MOVBU (R27)(R23), R14 <=> ldrb w14, [x27,x23]
+
+ LDAR (R22), R9 <=> ldar x9, [x22]
+ LDP 28(R17), (R15, R23) <=> ldp x15, x23, [x17,#28]
+ MOVWU (R4)(R12<<2), R8 <=> ldr w8, [x4, x12, lsl #2]
+ MOVD (R7)(R11.UXTW<<3), R25 <=> ldr x25, [x7,w11,uxtw #3]
+ MOVBU (R27)(R23), R14 <=> ldrb w14, [x27,x23]
Register pairs are written as (Rt1, Rt2).
Examples:
- LDP.P -240(R11), (R12, R26) <=> ldp x12, x26, [x11],#-240
+
+ LDP.P -240(R11), (R12, R26) <=> ldp x12, x26, [x11],#-240
Register with arrangement and register with arrangement and index.
Examples:
- VADD V5.H8, V18.H8, V9.H8 <=> add v9.8h, v18.8h, v5.8h
- VLD1 (R2), [V21.B16] <=> ld1 {v21.16b}, [x2]
- VST1.P V9.S[1], (R16)(R21) <=> st1 {v9.s}[1], [x16], x28
- VST1.P [V13.H8, V14.H8, V15.H8], (R3)(R14) <=> st1 {v13.8h-v15.8h}, [x3], x14
- VST1.P [V14.D1, V15.D1], (R7)(R23) <=> st1 {v14.1d, v15.1d}, [x7], x23
+
+ VADD V5.H8, V18.H8, V9.H8 <=> add v9.8h, v18.8h, v5.8h
+ VLD1 (R2), [V21.B16] <=> ld1 {v21.16b}, [x2]
+ VST1.P V9.S[1], (R16)(R21) <=> st1 {v9.s}[1], [x16], x21
+ VST1.P [V13.H8, V14.H8, V15.H8], (R3)(R14) <=> st1 {v13.8h-v15.8h}, [x3], x14
+ VST1.P [V14.D1, V15.D1], (R7)(R23) <=> st1 {v14.1d, v15.1d}, [x7], x23
*/
package arm64
// every time a function is inlined. For example, suppose f() calls g()
// and g has two calls to h(), and that f, g, and h are inlineable:
//
-// 1 func main() {
-// 2 f()
-// 3 }
-// 4 func f() {
-// 5 g()
-// 6 }
-// 7 func g() {
-// 8 h()
-// 9 h()
-// 10 }
-// 11 func h() {
-// 12 println("H")
-// 13 }
+// 1 func main() {
+// 2 f()
+// 3 }
+// 4 func f() {
+// 5 g()
+// 6 }
+// 7 func g() {
+// 8 h()
+// 9 h()
+// 10 }
+// 11 func h() {
+// 12 println("H")
+// 13 }
//
// Assuming the global tree starts empty, inlining will produce the
// following tree:
//
-// []InlinedCall{
-// {Parent: -1, Func: "f", Pos: <line 2>},
-// {Parent: 0, Func: "g", Pos: <line 5>},
-// {Parent: 1, Func: "h", Pos: <line 8>},
-// {Parent: 1, Func: "h", Pos: <line 9>},
-// }
+// []InlinedCall{
+// {Parent: -1, Func: "f", Pos: <line 2>},
+// {Parent: 0, Func: "g", Pos: <line 5>},
+// {Parent: 1, Func: "h", Pos: <line 8>},
+// {Parent: 1, Func: "h", Pos: <line 9>},
+// }
//
// The nodes of h inlined into main will have inlining indexes 2 and 3.
//
// Depending on the category of the referenced symbol, we choose
// different hash algorithms such that the hash is globally
// consistent.
-// - For referenced content-addressable symbol, its content hash
-// is globally consistent.
-// - For package symbol and builtin symbol, its local index is
-// globally consistent.
-// - For non-package symbol, its fully-expanded name is globally
-// consistent. For now, we require we know the current package
-// path so we can always expand symbol names. (Otherwise,
-// symbols with relocations are not considered hashable.)
+// - For referenced content-addressable symbol, its content hash
+// is globally consistent.
+// - For package symbol and builtin symbol, its local index is
+// globally consistent.
+// - For non-package symbol, its fully-expanded name is globally
+// consistent. For now, we require we know the current package
+// path so we can always expand symbol names. (Otherwise,
+// symbols with relocations are not considered hashable.)
//
// For now, we assume there are no circular dependencies among
// hashed symbols.
Example:
- ADD R3, R4, R5 <=> add r5, r4, r3
+ ADD R3, R4, R5 <=> add r5, r4, r3
2. Constant operands
Example:
- ADD $1, R3, R4 <=> addi r4, r3, 1
+ ADD $1, R3, R4 <=> addi r4, r3, 1
3. Opcodes setting condition codes
Example:
- ANDCC R3, R4, R5 <=> and. r5, r3, r4 (set CR0)
+ ANDCC R3, R4, R5 <=> and. r5, r3, r4 (set CR0)
4. Loads and stores from memory
Examples:
- MOVD (R3), R4 <=> ld r4,0(r3)
- MOVW (R3), R4 <=> lwa r4,0(r3)
- MOVWZU 4(R3), R4 <=> lwzu r4,4(r3)
- MOVWZ (R3+R5), R4 <=> lwzx r4,r3,r5
- MOVHZ (R3), R4 <=> lhz r4,0(r3)
- MOVHU 2(R3), R4 <=> lhau r4,2(r3)
- MOVBZ (R3), R4 <=> lbz r4,0(r3)
-
- MOVD R4,(R3) <=> std r4,0(r3)
- MOVW R4,(R3) <=> stw r4,0(r3)
- MOVW R4,(R3+R5) <=> stwx r4,r3,r5
- MOVWU R4,4(R3) <=> stwu r4,4(r3)
- MOVH R4,2(R3) <=> sth r4,2(r3)
- MOVBU R4,(R3)(R5) <=> stbux r4,r3,r5
+ MOVD (R3), R4 <=> ld r4,0(r3)
+ MOVW (R3), R4 <=> lwa r4,0(r3)
+ MOVWZU 4(R3), R4 <=> lwzu r4,4(r3)
+ MOVWZ (R3+R5), R4 <=> lwzx r4,r3,r5
+ MOVHZ (R3), R4 <=> lhz r4,0(r3)
+ MOVHU 2(R3), R4 <=> lhau r4,2(r3)
+ MOVBZ (R3), R4 <=> lbz r4,0(r3)
+
+ MOVD R4,(R3) <=> std r4,0(r3)
+ MOVW R4,(R3) <=> stw r4,0(r3)
+ MOVW R4,(R3+R5) <=> stwx r4,r3,r5
+ MOVWU R4,4(R3) <=> stwu r4,4(r3)
+ MOVH R4,2(R3) <=> sth r4,2(r3)
+ MOVBU R4,(R3)(R5) <=> stbux r4,r3,r5
5. Compares
Examples:
- CMP R3, R4 <=> cmp r3, r4 (CR0 assumed)
- CMP R3, R4, CR1 <=> cmp cr1, r3, r4
+ CMP R3, R4 <=> cmp r3, r4 (CR0 assumed)
+ CMP R3, R4, CR1 <=> cmp cr1, r3, r4
Note that the condition register is the target operand of compare opcodes, so
the remaining operands are in the same order for Go asm and PPC64 asm.
BC op1, op2, op3
- op1: type of branch
- 16 -> bctr (branch on ctr)
- 12 -> bcr (branch if cr bit is set)
- 8 -> bcr+bctr (branch on ctr and cr values)
- 4 -> bcr != 0 (branch if specified cr bit is not set)
+ op1: type of branch
+ 16 -> bctr (branch on ctr)
+ 12 -> bcr (branch if cr bit is set)
+ 8 -> bcr+bctr (branch on ctr and cr values)
+ 4 -> bcr != 0 (branch if specified cr bit is not set)
- There are more combinations but these are the most common.
+ There are more combinations but these are the most common.
- op2: condition register field and condition bit
+ op2: condition register field and condition bit
- This contains an immediate value indicating which condition field
- to read and what bits to test. Each field is 4 bits long with CR0
- at bit 0, CR1 at bit 4, etc. The value is computed as 4*CR+condition
- with these condition values:
+ This contains an immediate value indicating which condition field
+ to read and what bits to test. Each field is 4 bits long with CR0
+ at bit 0, CR1 at bit 4, etc. The value is computed as 4*CR+condition
+ with these condition values:
- 0 -> LT
- 1 -> GT
- 2 -> EQ
- 3 -> OVG
+ 0 -> LT
+ 1 -> GT
+ 2 -> EQ
+ 3 -> OVG
- Thus 0 means test CR0 for LT, 5 means CR1 for GT, 30 means CR7 for EQ.
+ Thus 0 means test CR0 for LT, 5 means CR1 for GT, 30 means CR7 for EQ.
- op3: branch target
+ op3: branch target
Examples:
- BC 12, 0, target <=> blt cr0, target
- BC 12, 2, target <=> beq cr0, target
- BC 12, 5, target <=> bgt cr1, target
- BC 12, 30, target <=> beq cr7, target
- BC 4, 6, target <=> bne cr1, target
- BC 4, 1, target <=> ble cr1, target
+ BC 12, 0, target <=> blt cr0, target
+ BC 12, 2, target <=> beq cr0, target
+ BC 12, 5, target <=> bgt cr1, target
+ BC 12, 30, target <=> beq cr7, target
+ BC 4, 6, target <=> bne cr1, target
+ BC 4, 5, target <=> ble cr1, target
- The following extended opcodes are available for ease of use and readability:
+ The following extended opcodes are available for ease of use and readability:
- BNE CR2, target <=> bne cr2, target
- BEQ CR4, target <=> beq cr4, target
- BLT target <=> blt target (cr0 default)
- BGE CR7, target <=> bge cr7, target
+ BNE CR2, target <=> bne cr2, target
+ BEQ CR4, target <=> beq cr4, target
+ BLT target <=> blt target (cr0 default)
+ BGE CR7, target <=> bge cr7, target
Refer to the ISA for more information on additional values for the BC instruction,
how to handle OVG information, and much more.
Examples:
- SRAD $8,R3,R4 => sradi r4,r3,8
- SRD $8,R3,R4 => rldicl r4,r3,56,8
- SLD $8,R3,R4 => rldicr r4,r3,8,55
- SRAW $16,R4,R5 => srawi r5,r4,16
- SRW $40,R4,R5 => rlwinm r5,r4,0,0,31
- SLW $12,R4,R5 => rlwinm r5,r4,12,0,19
+ SRAD $8,R3,R4 => sradi r4,r3,8
+ SRD $8,R3,R4 => rldicl r4,r3,56,8
+ SLD $8,R3,R4 => rldicr r4,r3,8,55
+ SRAW $16,R4,R5 => srawi r5,r4,16
+ SRW $40,R4,R5 => rlwinm r5,r4,0,0,31
+ SLW $12,R4,R5 => rlwinm r5,r4,12,0,19
Some non-simple shifts have operands in the Go assembly which don't map directly
onto operands in the PPC64 assembly. When an operand in a shift instruction in the
Go assembly is a bit mask, it appears in the PPC64 assembly as the begin and end
bit positions of that mask instead of a mask value. See the ISA for more detail on these types of shifts.
Here are a few examples:
- RLWMI $7,R3,$65535,R6 => rlwimi r6,r3,7,16,31
- RLDMI $0,R4,$7,R6 => rldimi r6,r4,0,61
+ RLWMI $7,R3,$65535,R6 => rlwimi r6,r3,7,16,31
+ RLDMI $0,R4,$7,R6 => rldimi r6,r4,0,61
More recently, Go opcodes were added which map directly onto the PPC64 opcodes. It is
recommended to use the newer opcodes to avoid confusion.
- RLDICL $0,R4,$15,R6 => rldicl r6,r4,0,15
- RLDICR $0,R4,$15,R6 => rldicr r6.r4,0,15
+ RLDICL $0,R4,$15,R6 => rldicl r6,r4,0,15
+ RLDICR $0,R4,$15,R6 => rldicr r6,r4,0,15
-Register naming
+# Register naming
1. Special register usage in Go asm
The following registers should not be modified by user Go assembler code.
- R0: Go code expects this register to contain the value 0.
- R1: Stack pointer
- R2: TOC pointer when compiled with -shared or -dynlink (a.k.a position independent code)
- R13: TLS pointer
- R30: g (goroutine)
+ R0: Go code expects this register to contain the value 0.
+ R1: Stack pointer
+ R2: TOC pointer when compiled with -shared or -dynlink (a.k.a. position-independent code)
+ R13: TLS pointer
+ R30: g (goroutine)
Register names:
- Rn is used for general purpose registers. (0-31)
- Fn is used for floating point registers. (0-31)
- Vn is used for vector registers. Slot 0 of Vn overlaps with Fn. (0-31)
- VSn is used for vector-scalar registers. V0-V31 overlap with VS32-VS63. (0-63)
- CTR represents the count register.
- LR represents the link register.
- CR represents the condition register
- CRn represents a condition register field. (0-7)
- CRnLT represents CR bit 0 of CR field n. (0-7)
- CRnGT represents CR bit 1 of CR field n. (0-7)
- CRnEQ represents CR bit 2 of CR field n. (0-7)
- CRnSO represents CR bit 3 of CR field n. (0-7)
-
+ Rn is used for general purpose registers. (0-31)
+ Fn is used for floating point registers. (0-31)
+ Vn is used for vector registers. Slot 0 of Vn overlaps with Fn. (0-31)
+ VSn is used for vector-scalar registers. V0-V31 overlap with VS32-VS63. (0-63)
+ CTR represents the count register.
+ LR represents the link register.
+ CR represents the condition register.
+ CRn represents a condition register field. (0-7)
+ CRnLT represents CR bit 0 of CR field n. (0-7)
+ CRnGT represents CR bit 1 of CR field n. (0-7)
+ CRnEQ represents CR bit 2 of CR field n. (0-7)
+ CRnSO represents CR bit 3 of CR field n. (0-7)
*/
package ppc64
/*
// instruction scheduling
+
if(debug['Q'] == 0)
return;
// up in instinit. For example, oclass distinguishes the constants 0 and 1
// from the more general 8-bit constants, but instinit says
//
-// ycover[Yi0*Ymax+Ys32] = 1
-// ycover[Yi1*Ymax+Ys32] = 1
-// ycover[Yi8*Ymax+Ys32] = 1
+// ycover[Yi0*Ymax+Ys32] = 1
+// ycover[Yi1*Ymax+Ys32] = 1
+// ycover[Yi8*Ymax+Ys32] = 1
//
// which means that Yi0, Yi1, and Yi8 all count as Ys32 (signed 32)
// if that's what an instruction can handle.
// is, the Ztype) and the z bytes.
//
// For example, let's look at AADDL. The optab line says:
-// {AADDL, yaddl, Px, opBytes{0x83, 00, 0x05, 0x81, 00, 0x01, 0x03}},
+//
+// {AADDL, yaddl, Px, opBytes{0x83, 00, 0x05, 0x81, 00, 0x01, 0x03}},
//
// and yaddl says
-// var yaddl = []ytab{
-// {Yi8, Ynone, Yml, Zibo_m, 2},
-// {Yi32, Ynone, Yax, Zil_, 1},
-// {Yi32, Ynone, Yml, Zilo_m, 2},
-// {Yrl, Ynone, Yml, Zr_m, 1},
-// {Yml, Ynone, Yrl, Zm_r, 1},
-// }
+//
+// var yaddl = []ytab{
+// {Yi8, Ynone, Yml, Zibo_m, 2},
+// {Yi32, Ynone, Yax, Zil_, 1},
+// {Yi32, Ynone, Yml, Zilo_m, 2},
+// {Yrl, Ynone, Yml, Zr_m, 1},
+// {Yml, Ynone, Yrl, Zm_r, 1},
+// }
//
// so there are 5 possible types of ADDL instruction that can be laid down, and the
// possible states used to lay them down (Ztype and z pointer, assuming z
// points at opBytes{0x83, 00, 0x05,0x81, 00, 0x01, 0x03}) are:
//
-// Yi8, Yml -> Zibo_m, z (0x83, 00)
-// Yi32, Yax -> Zil_, z+2 (0x05)
-// Yi32, Yml -> Zilo_m, z+2+1 (0x81, 0x00)
-// Yrl, Yml -> Zr_m, z+2+1+2 (0x01)
-// Yml, Yrl -> Zm_r, z+2+1+2+1 (0x03)
+// Yi8, Yml -> Zibo_m, z (0x83, 00)
+// Yi32, Yax -> Zil_, z+2 (0x05)
+// Yi32, Yml -> Zilo_m, z+2+1 (0x81, 0x00)
+// Yrl, Yml -> Zr_m, z+2+1+2 (0x01)
+// Yml, Yrl -> Zm_r, z+2+1+2+1 (0x03)
//
// The Pconstant in the optab line controls the prefix bytes to emit. That's
// relatively straightforward as this program goes.
// encoded addressing mode for the Yml arg), and then a single immediate byte.
// Zilo_m is the same but a long (32-bit) immediate.
var optab =
-// as, ytab, andproto, opcode
+// as, ytab, andproto, opcode
[...]Optab{
{obj.AXXX, nil, 0, opBytes{}},
{AAAA, ynone, P32, opBytes{0x37}},
// EVEX.R : 1 bit | EVEX extension bit | RxrEvex
//
// Examples:
+//
// REG_Z30 => 30
// REG_X15 => 15
// REG_R9 => 9
// evexSuffixBits carries instruction EVEX suffix set flags.
//
// Examples:
+//
// "RU_SAE.Z" => {rounding: 3, zeroing: true}
// "Z" => {zeroing: true}
// "BCST" => {broadcast: true}
// so we can burn some clocks to construct a good error message.
//
// Reported issues:
-// - duplicated suffixes
-// - illegal rounding/SAE+broadcast combinations
-// - unknown suffixes
-// - misplaced suffix (e.g. wrong Z suffix position)
+// - duplicated suffixes
+// - illegal rounding/SAE+broadcast combinations
+// - unknown suffixes
+// - misplaced suffix (e.g. wrong Z suffix position)
func inferSuffixError(cond string) error {
suffixSet := make(map[string]bool) // Set for duplicates detection.
unknownSet := make(map[string]bool) // Set of unknown suffixes.
}
// NewLinePragmaBase returns a new *PosBase for a line directive of the form
-// //line filename:line:col
-// /*line filename:line:col*/
+//
+// //line filename:line:col
+// /*line filename:line:col*/
+//
// at position pos.
func NewLinePragmaBase(pos Pos, filename, absFilename string, line, col uint) *PosBase {
return &PosBase{pos, filename, absFilename, FileSymPrefix + absFilename, line, col, -1}
// The input buffer needs to be able to hold any single test
// directive line we want to recognize, like:
//
-// <many spaces> --- PASS: very/nested/s/u/b/t/e/s/t
+// <many spaces> --- PASS: very/nested/s/u/b/t/e/s/t
//
// If anyone reports a test directive line > 4k not working, it will
// be defensible to suggest they restructure their test or test names.
// license that can be found in the LICENSE file.
/*
-Link, typically invoked as ``go tool link'', reads the Go archive or object
+Link, typically invoked as “go tool link”, reads the Go archive or object
for a package main, along with its dependencies, and combines them
into an executable binary.
-Command Line
+# Command Line
Usage:
//
// Typical usage should look like:
//
-// func main() {
-// filename := "" // Set to enable per-phase pprof file output.
-// bench := benchmark.New(benchmark.GC, filename)
-// defer bench.Report(os.Stdout)
-// // etc
-// bench.Start("foo")
-// foo()
-// bench.Start("bar")
-// bar()
-// }
+// func main() {
+// filename := "" // Set to enable per-phase pprof file output.
+// bench := benchmark.New(benchmark.GC, filename)
+// defer bench.Report(os.Stdout)
+// // etc
+// bench.Start("foo")
+// foo()
+// bench.Start("bar")
+// bar()
+// }
//
// Note that a nil Metrics object won't cause any errors, so one could write
// code like:
//
-// func main() {
-// enableBenchmarking := flag.Bool("enable", true, "enables benchmarking")
-// flag.Parse()
-// var bench *benchmark.Metrics
-// if *enableBenchmarking {
-// bench = benchmark.New(benchmark.GC)
-// }
-// bench.Start("foo")
-// // etc.
-// }
+// func main() {
+// enableBenchmarking := flag.Bool("enable", true, "enables benchmarking")
+// flag.Parse()
+// var bench *benchmark.Metrics
+// if *enableBenchmarking {
+// bench = benchmark.New(benchmark.GC)
+// }
+// bench.Start("foo")
+// // etc.
+// }
func New(gc Flags, filebase string) *Metrics {
if gc == GC {
runtime.GC()
)
// Assembling the binary is broken into two steps:
-// - writing out the code/data/dwarf Segments, applying relocations on the fly
-// - writing out the architecture specific pieces.
+// - writing out the code/data/dwarf Segments, applying relocations on the fly
+// - writing out the architecture specific pieces.
+//
// This function handles the first part.
func asmb(ctxt *Link) {
// TODO(jfaller): delete me.
}
// Assembling the binary is broken into two steps:
-// - writing out the code/data/dwarf Segments
-// - writing out the architecture specific pieces.
+// - writing out the code/data/dwarf Segments
+// - writing out the architecture specific pieces.
+//
// This function handles the second part.
func asmb2(ctxt *Link) {
if thearch.Asmb2 != nil {
// (to be applied by the external linker). For more on how relocations
// work in general, see
//
-// "Linkers and Loaders", by John R. Levine (Morgan Kaufmann, 1999), ch. 7
+// "Linkers and Loaders", by John R. Levine (Morgan Kaufmann, 1999), ch. 7
//
// This is a performance-critical function for the linker; be careful
// to avoid introducing unnecessary allocations in the main loop.
//
// There are three ways a method of a reachable type can be invoked:
//
-// 1. direct call
-// 2. through a reachable interface type
-// 3. reflect.Value.Method (or MethodByName), or reflect.Type.Method
-// (or MethodByName)
+// 1. direct call
+// 2. through a reachable interface type
+// 3. reflect.Value.Method (or MethodByName), or reflect.Type.Method
+// (or MethodByName)
//
// The first case is handled by the flood fill, a directly called method
// is marked as reachable.
// as reachable. This is extremely conservative, but easy and correct.
//
// The third case is handled by looking to see if any of:
-// - reflect.Value.Method or MethodByName is reachable
-// - reflect.Type.Method or MethodByName is called (through the
-// REFLECTMETHOD attribute marked by the compiler).
+// - reflect.Value.Method or MethodByName is reachable
+// - reflect.Type.Method or MethodByName is called (through the
+// REFLECTMETHOD attribute marked by the compiler).
+//
// If any of these happen, all bets are off and all exported methods
// of reachable types are marked reachable.
//
// tflag is documented in reflect/type.go.
//
// tflag values must be kept in sync with copies in:
+//
// cmd/compile/internal/reflectdata/reflect.go
// cmd/link/internal/ld/decodesym.go
// reflect/type.go
// captures the name, order, and classification of the subprogram's
// input and output parameters. For example, for the Go function
//
-// func foo(i1 int, f1 float64) (string, bool) {
+// func foo(i1 int, f1 float64) (string, bool) {
//
// this function would return a string something like
//
-// i1:0:1 f1:1:1 ~r0:2:2 ~r1:3:2
+// i1:0:1 f1:1:1 ~r0:2:2 ~r1:3:2
//
// where each chunk above is of the form NAME:ORDER:INOUTCLASSIFICATION
func processParams(die *dwarf.Entry, ex *dwtest.Examiner) string {
var buildinfo []byte
/*
- Initialize the global variable that describes the ELF header. It will be updated as
- we write section and prog headers.
+Initialize the global variable that describes the ELF header. It will be updated as
+we write section and prog headers.
*/
func Elfinit(ctxt *Link) {
ctxt.IsELF = true
}
// Layout is given by this C definition:
-// typedef struct
-// {
-// /* Version of flags structure. */
-// uint16_t version;
-// /* The level of the ISA: 1-5, 32, 64. */
-// uint8_t isa_level;
-// /* The revision of ISA: 0 for MIPS V and below, 1-n otherwise. */
-// uint8_t isa_rev;
-// /* The size of general purpose registers. */
-// uint8_t gpr_size;
-// /* The size of co-processor 1 registers. */
-// uint8_t cpr1_size;
-// /* The size of co-processor 2 registers. */
-// uint8_t cpr2_size;
-// /* The floating-point ABI. */
-// uint8_t fp_abi;
-// /* Processor-specific extension. */
-// uint32_t isa_ext;
-// /* Mask of ASEs used. */
-// uint32_t ases;
-// /* Mask of general flags. */
-// uint32_t flags1;
-// uint32_t flags2;
-// } Elf_Internal_ABIFlags_v0;
+//
+// typedef struct
+// {
+// /* Version of flags structure. */
+// uint16_t version;
+// /* The level of the ISA: 1-5, 32, 64. */
+// uint8_t isa_level;
+// /* The revision of ISA: 0 for MIPS V and below, 1-n otherwise. */
+// uint8_t isa_rev;
+// /* The size of general purpose registers. */
+// uint8_t gpr_size;
+// /* The size of co-processor 1 registers. */
+// uint8_t cpr1_size;
+// /* The size of co-processor 2 registers. */
+// uint8_t cpr2_size;
+// /* The floating-point ABI. */
+// uint8_t fp_abi;
+// /* Processor-specific extension. */
+// uint32_t isa_ext;
+// /* Mask of ASEs used. */
+// uint32_t ases;
+// /* Mask of general flags. */
+// uint32_t flags1;
+// uint32_t flags2;
+// } Elf_Internal_ABIFlags_v0;
func elfWriteMipsAbiFlags(ctxt *Link) int {
sh := elfshname(".MIPS.abiflags")
ctxt.Out.SeekSet(int64(sh.Off))
// any system calls to read the value.
//
// Third, it also mmaps the output file (if available). The intended usage is:
-// - Mmap the output file
-// - Write the content
-// - possibly apply any edits in the output buffer
-// - possibly write more content to the file. These writes take place in a heap
-// backed buffer that will get synced to disk.
-// - Munmap the output file
+// - Mmap the output file
+// - Write the content
+// - possibly apply any edits in the output buffer
+// - possibly write more content to the file. These writes take place in a heap
+// backed buffer that will get synced to disk.
+// - Munmap the output file
//
// And finally, it provides a mechanism by which you can multithread the
// writing of output files. This mechanism is accomplished by copying an OutBuf,
//
// Parallel OutBuf is intended to be used like:
//
-// func write(out *OutBuf) {
-// var wg sync.WaitGroup
-// for i := 0; i < 10; i++ {
-// wg.Add(1)
-// view, err := out.View(start[i])
-// if err != nil {
-// // handle output
-// continue
-// }
-// go func(out *OutBuf, i int) {
-// // do output
-// wg.Done()
-// }(view, i)
-// }
-// wg.Wait()
-// }
+// func write(out *OutBuf) {
+// var wg sync.WaitGroup
+// for i := 0; i < 10; i++ {
+// wg.Add(1)
+// view, err := out.View(start[i])
+// if err != nil {
+// // handle output
+// continue
+// }
+// go func(out *OutBuf, i int) {
+// // do output
+// wg.Done()
+// }(view, i)
+// }
+// wg.Wait()
+// }
type OutBuf struct {
arch *sys.Arch
off int64
// This function creates a per-CU list of filenames; if CU[M] references
// files[1-N], the following is generated:
//
-// runtime.cutab:
-// CU[M]
-// offsetToFilename[0]
-// offsetToFilename[1]
-// ..
+// runtime.cutab:
+// CU[M]
+// offsetToFilename[0]
+// offsetToFilename[1]
+// ..
//
-// runtime.filetab
-// filename[0]
-// filename[1]
+// runtime.filetab
+// filename[0]
+// filename[1]
//
// Looking up a filename then becomes:
-// 0) Given a func, and filename index [K]
-// 1) Get Func.CUIndex: M := func.cuOffset
-// 2) Find filename offset: fileOffset := runtime.cutab[M+K]
-// 3) Get the filename: getcstring(runtime.filetab[fileOffset])
+// 0. Given a func, and filename index [K]
+// 1. Get Func.CUIndex: M := func.cuOffset
+// 2. Find filename offset: fileOffset := runtime.cutab[M+K]
+// 3. Get the filename: getcstring(runtime.filetab[fileOffset])
func (state *pclntab) generateFilenameTabs(ctxt *Link, compUnits []*sym.CompilationUnit, funcs []loader.Sym) []uint32 {
// On a per-CU basis, keep track of all the filenames we need.
//
}
// Update values for the previous package.
-// - Svalue of the C_FILE symbol: if it is the last one, this Svalue must be -1
-// - Xsclen of the csect symbol.
+// - Svalue of the C_FILE symbol: if it is the last one, this Svalue must be -1
+// - Xsclen of the csect symbol.
func (f *xcoffFile) updatePreviousFile(ctxt *Link, last bool) {
// first file
if currSymSrcFile.file == nil {
//
// Notes on the layout of global symbol index space:
//
-// - Go object files are read before host object files; each Go object
-// read adds its defined package symbols to the global index space.
-// Nonpackage symbols are not yet added.
+// - Go object files are read before host object files; each Go object
+// read adds its defined package symbols to the global index space.
+// Nonpackage symbols are not yet added.
//
-// - In loader.LoadNonpkgSyms, add non-package defined symbols and
-// references in all object files to the global index space.
+// - In loader.LoadNonpkgSyms, add non-package defined symbols and
+// references in all object files to the global index space.
//
-// - Host object file loading happens; the host object loader does a
-// name/version lookup for each symbol it finds; this can wind up
-// extending the external symbol index space range. The host object
-// loader stores symbol payloads in loader.payloads using SymbolBuilder.
+// - Host object file loading happens; the host object loader does a
+// name/version lookup for each symbol it finds; this can wind up
+// extending the external symbol index space range. The host object
+// loader stores symbol payloads in loader.payloads using SymbolBuilder.
//
-// - Each symbol gets a unique global index. For duplicated and
-// overwriting/overwritten symbols, the second (or later) appearance
-// of the symbol gets the same global index as the first appearance.
+// - Each symbol gets a unique global index. For duplicated and
+// overwriting/overwritten symbols, the second (or later) appearance
+// of the symbol gets the same global index as the first appearance.
type Loader struct {
start map[*oReader]Sym // map from object file to its start index
objs []objIdx // sorted by start index (i.e. objIdx.i)
// when the runtime package is built. The canonical example is
// "runtime.racefuncenter" -- currently if you do something like
//
-// go build -gcflags=-race myprogram.go
+// go build -gcflags=-race myprogram.go
//
// the compiler will insert calls to the builtin runtime.racefuncenter,
// but the version of the runtime used for linkage won't actually contain
// Nm lists the symbols defined or used by an object file, archive, or executable.
//
// Usage:
+//
// go tool nm [options] file...
//
// The default output prints one line per symbol, with three space-separated
// size orders from largest to smallest
// -type
// print symbol type after name
-//
package main
// license that can be found in the LICENSE file.
/*
-
Pack is a simple version of the traditional Unix ar tool.
It implements only the operations needed by Go.
Usage:
+
go tool pack op file.a [name...]
Pack applies the operation to the archive, using the names as arguments to the operation.
For the p command, each file is prefixed by the name on a line by itself.
For the t command, the listing includes additional file metadata.
For the x command, names are printed as files are extracted.
-
*/
package main
// binary's output. To convert the output of a "go test" command,
// use "go test -json" instead of invoking test2json directly.
//
-// Output Format
+// # Output Format
//
// The JSON stream is a newline-separated sequence of TestEvent objects
// corresponding to the Go struct:
// as a sequence of events with Test set to the benchmark name, terminated
// by a final event with Action == "bench" or "fail".
// Benchmarks have no events with Action == "run", "pause", or "cont".
-//
package main
import (
// prog0 starts three goroutines.
//
-// goroutine 1: taskless region
-// goroutine 2: starts task0, do work in task0.region0, starts task1 which ends immediately.
-// goroutine 3: do work in task0.region1 and task0.region2, ends task0
+//	goroutine 1: taskless region
+//	goroutine 2: starts task0, do work in task0.region0, starts task1 which ends immediately.
+//	goroutine 3: do work in task0.region1 and task0.region2, ends task0
func prog0() {
ctx := context.Background()
Trace is a tool for viewing trace files.
Trace files can be generated with:
- - runtime/trace.Start
- - net/http/pprof package
- - go test -trace
+  - runtime/trace.Start
+  - net/http/pprof package
+  - go test -trace
Example usage:
Generate a trace file with 'go test':
+
	go test -trace trace.out pkg
+
View the trace in a web browser:
+
	go tool trace trace.out
+
Generate a pprof-like profile from the trace:
+
	go tool trace -pprof=TYPE trace.out > TYPE.pprof
Supported profile types are:
- - net: network blocking profile
- - sync: synchronization blocking profile
- - syscall: syscall blocking profile
- - sched: scheduler latency profile
+  - net: network blocking profile
+  - sync: synchronization blocking profile
+  - syscall: syscall blocking profile
+  - sched: scheduler latency profile
Then, you can use the pprof tool to analyze the profile:
+
	go tool pprof TYPE.pprof
Note that while the various profiles available when launching
// license that can be found in the LICENSE file.
/*
-
Vet examines Go source code and reports suspicious constructs, such as Printf
calls whose arguments do not align with the format string. Vet uses heuristics
that do not guarantee all reports are genuine problems, but it can find errors
To list the available checks, run "go tool vet help":
- asmdecl report mismatches between assembly files and Go declarations
- assign check for useless assignments
- atomic check for common mistakes using the sync/atomic package
- bools check for common mistakes involving boolean operators
- buildtag check that +build tags are well-formed and correctly located
- cgocall detect some violations of the cgo pointer passing rules
- composites check for unkeyed composite literals
- copylocks check for locks erroneously passed by value
- httpresponse check for mistakes using HTTP responses
- loopclosure check references to loop variables from within nested functions
- lostcancel check cancel func returned by context.WithCancel is called
- nilfunc check for useless comparisons between functions and nil
- printf check consistency of Printf format strings and arguments
- shift check for shifts that equal or exceed the width of the integer
- stdmethods check signature of methods of well-known interfaces
- structtag check that struct field tags conform to reflect.StructTag.Get
- tests check for common mistaken usages of tests and examples
- unmarshal report passing non-pointer or non-interface values to unmarshal
- unreachable check for unreachable code
- unsafeptr check for invalid conversions of uintptr to unsafe.Pointer
- unusedresult check for unused results of calls to some functions
+	asmdecl      report mismatches between assembly files and Go declarations
+	assign       check for useless assignments
+	atomic       check for common mistakes using the sync/atomic package
+	bools        check for common mistakes involving boolean operators
+	buildtag     check that +build tags are well-formed and correctly located
+	cgocall      detect some violations of the cgo pointer passing rules
+	composites   check for unkeyed composite literals
+	copylocks    check for locks erroneously passed by value
+	httpresponse check for mistakes using HTTP responses
+	loopclosure  check references to loop variables from within nested functions
+	lostcancel   check cancel func returned by context.WithCancel is called
+	nilfunc      check for useless comparisons between functions and nil
+	printf       check consistency of Printf format strings and arguments
+	shift        check for shifts that equal or exceed the width of the integer
+	stdmethods   check signature of methods of well-known interfaces
+	structtag    check that struct field tags conform to reflect.StructTag.Get
+	tests        check for common mistaken usages of tests and examples
+	unmarshal    report passing non-pointer or non-interface values to unmarshal
+	unreachable  check for unreachable code
+	unsafeptr    check for invalid conversions of uintptr to unsafe.Pointer
+	unusedresult check for unused results of calls to some functions
For details and flags of a particular check, such as printf, run "go tool vet help printf".
Core flags:
- -c=N
- display offending line plus N lines of surrounding context
- -json
- emit analysis diagnostics (and errors) in JSON format
-
+	-c=N
+	  display offending line plus N lines of surrounding context
+	-json
+	  emit analysis diagnostics (and errors) in JSON format
*/
package main
// dictDecoder implements the LZ77 sliding dictionary as used in decompression.
// LZ77 decompresses data through sequences of two forms of commands:
//
-// * Literal insertions: Runs of one or more symbols are inserted into the data
-// stream as is. This is accomplished through the writeByte method for a
-// single symbol, or combinations of writeSlice/writeMark for multiple symbols.
-// Any valid stream must start with a literal insertion if no preset dictionary
-// is used.
+//   - Literal insertions: Runs of one or more symbols are inserted into the data
+//     stream as is. This is accomplished through the writeByte method for a
+//     single symbol, or combinations of writeSlice/writeMark for multiple symbols.
+//     Any valid stream must start with a literal insertion if no preset dictionary
+//     is used.
//
-// * Backward copies: Runs of one or more symbols are copied from previously
-// emitted data. Backward copies come as the tuple (dist, length) where dist
-// determines how far back in the stream to copy from and length determines how
-// many bytes to copy. Note that it is valid for the length to be greater than
-// the distance. Since LZ77 uses forward copies, that situation is used to
-// perform a form of run-length encoding on repeated runs of symbols.
-// The writeCopy and tryWriteCopy are used to implement this command.
+//   - Backward copies: Runs of one or more symbols are copied from previously
+//     emitted data. Backward copies come as the tuple (dist, length) where dist
+//     determines how far back in the stream to copy from and length determines how
+//     many bytes to copy. Note that it is valid for the length to be greater than
+//     the distance. Since LZ77 uses forward copies, that situation is used to
+//     perform a form of run-length encoding on repeated runs of symbols.
+//     The writeCopy and tryWriteCopy are used to implement this command.
//
// For performance reasons, this implementation performs little to no sanity
// checks about the arguments. As such, the invariants documented for each
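The overlapping-copy behavior described above (a length greater than dist acting as run-length encoding) can be sketched in plain Go; writeCopy here is a standalone stand-in for illustration, not the unexported dictDecoder method:

```go
package main

import "fmt"

// writeCopy copies length bytes starting dist bytes back from the end of
// buf, one byte at a time, so that an overlapping copy (length > dist)
// re-reads bytes it just wrote and acts as run-length expansion.
func writeCopy(buf []byte, dist, length int) []byte {
	start := len(buf) - dist
	for i := 0; i < length; i++ {
		buf = append(buf, buf[start+i])
	}
	return buf
}

func main() {
	// A copy with dist=1, length=4 repeats the last byte four times.
	fmt.Println(string(writeCopy([]byte("ab"), 1, 4))) // abbbbb
}
```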
// Codes 0-15 are single byte codes. Codes 16-18 are followed by additional
// information. Code badCode is an end marker
//
-// numLiterals The number of literals in literalEncoding
-// numOffsets The number of offsets in offsetEncoding
-// litenc, offenc The literal and offset encoder to use
+//	numLiterals    The number of literals in literalEncoding
+//	numOffsets     The number of offsets in offsetEncoding
+//	litenc, offenc The literal and offset encoder to use
func (w *huffmanBitWriter) generateCodegen(numLiterals int, numOffsets int, litEnc, offEnc *huffmanEncoder) {
for i := range w.codegenFreq {
w.codegenFreq[i] = 0
// Write the header of a dynamic Huffman block to the output stream.
//
-// numLiterals The number of literals specified in codegen
-// numOffsets The number of offsets specified in codegen
-// numCodegens The number of codegens used in codegen
+//	numLiterals  The number of literals specified in codegen
+//	numOffsets   The number of offsets specified in codegen
+//	numCodegens  The number of codegens used in codegen
func (w *huffmanBitWriter) writeDynamicHeader(numLiterals int, numOffsets int, numCodegens int, isEof bool) {
if w.err != nil {
return
// ordering for the Less method, so Push adds items while Pop removes the
// highest-priority item from the queue. The Examples include such an
// implementation; the file example_pq_test.go has the complete source.
-//
package heap
import "sort"
// Package list implements a doubly linked list.
//
// To iterate over a list (where l is a *List):
+//
//	for e := l.Front(); e != nil; e = e.Next() {
//		// do something with e.Value
//	}
-//
package list
// Element is an element of a linked list.
// explicitly to each function that needs it. The Context should be the first
// parameter, typically named ctx:
//
-// func DoSomething(ctx context.Context, arg Arg) error {
-// // ... use ctx ...
-// }
+//	func DoSomething(ctx context.Context, arg Arg) error {
+//		// ... use ctx ...
+//	}
//
// Do not pass a nil Context, even if a function permits it. Pass context.TODO
// if you are unsure about which Context to use.
// Canceling this context releases resources associated with it, so code should
// call cancel as soon as the operations running in this Context complete:
//
-// func slowOperationWithTimeout(ctx context.Context) (Result, error) {
-// ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
-// defer cancel() // releases resources if slowOperation completes before timeout elapses
-// return slowOperation(ctx)
-// }
+//	func slowOperationWithTimeout(ctx context.Context) (Result, error) {
+//		ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
+//		defer cancel() // releases resources if slowOperation completes before timeout elapses
+//		return slowOperation(ctx)
+//	}
```
func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) {
return WithDeadline(parent, time.Now().Add(timeout))
}
// gcmFieldElement represents a value in GF(2¹²⁸). In order to reflect the GCM
// standard and make binary.BigEndian suitable for marshaling these values, the
// bits are stored in big endian order. For example:
-// the coefficient of x⁰ can be obtained by v.low >> 63.
-// the coefficient of x⁶³ can be obtained by v.low & 1.
-// the coefficient of x⁶⁴ can be obtained by v.high >> 63.
-// the coefficient of x¹²⁷ can be obtained by v.high & 1.
+//
+//	the coefficient of x⁰ can be obtained by v.low >> 63.
+//	the coefficient of x⁶³ can be obtained by v.low & 1.
+//	the coefficient of x⁶⁴ can be obtained by v.high >> 63.
+//	the coefficient of x¹²⁷ can be obtained by v.high & 1.
type gcmFieldElement struct {
low, high uint64
}
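The coefficient lookups listed in the comment above can be checked directly; this local copy of gcmFieldElement mirrors the struct purely for illustration:

```go
package main

import "fmt"

// gcmFieldElement mirrors the struct in the comment: bits are stored in
// big-endian order, so bit 63 of low is the coefficient of x⁰ and bit 0
// of low is the coefficient of x⁶³.
type gcmFieldElement struct {
	low, high uint64
}

func main() {
	v := gcmFieldElement{low: 1 << 63} // set only the x⁰ coefficient
	fmt.Println(v.low>>63, v.low&1)    // coefficients of x⁰ and x⁶³
}
```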
// Although this type is an empty interface for backwards compatibility reasons,
// all public key types in the standard library implement the following interface
//
-// interface{
-// Equal(x crypto.PublicKey) bool
-// }
+//	interface{
+//		Equal(x crypto.PublicKey) bool
+//	}
//
// which can be used for increased type safety within applications.
type PublicKey any
// Although this type is an empty interface for backwards compatibility reasons,
// all private key types in the standard library implement the following interface
//
-// interface{
-// Public() crypto.PublicKey
-// Equal(x crypto.PrivateKey) bool
-// }
+//	interface{
+//		Public() crypto.PublicKey
+//		Equal(x crypto.PrivateKey) bool
+//	}
//
// as well as purpose-specific interfaces such as Signer and Decrypter, which
// can be used for increased type safety within applications.
// Package edwards25519 implements group logic for the twisted Edwards curve
//
-// -x^2 + y^2 = 1 + -(121665/121666)*x^2*y^2
+//	-x^2 + y^2 = 1 + -(121665/121666)*x^2*y^2
//
// This is better known as the Edwards curve equivalent to Curve25519, and is
// the curve used by the Ed25519 signature scheme.
// TestAliasing checks that receivers and arguments can alias each other without
// leading to incorrect results. That is, it ensures that it's safe to write
//
-// v.Invert(v)
+//	v.Invert(v)
//
// or
//
-// v.Add(v, v)
+//	v.Add(v, v)
//
// without any of the inputs getting clobbered by the output being written.
func TestAliasing(t *testing.T) {
// A Scalar is an integer modulo
//
-// l = 2^252 + 27742317777372353535851937790883648493
+//	l = 2^252 + 27742317777372353535851937790883648493
//
// which is the prime order of the edwards25519 group.
//
}
// Input:
-// a[0]+256*a[1]+...+256^31*a[31] = a
-// b[0]+256*b[1]+...+256^31*b[31] = b
-// c[0]+256*c[1]+...+256^31*c[31] = c
+//
+//	a[0]+256*a[1]+...+256^31*a[31] = a
+//	b[0]+256*b[1]+...+256^31*b[31] = b
+//	c[0]+256*c[1]+...+256^31*c[31] = c
//
// Output:
-// s[0]+256*s[1]+...+256^31*s[31] = (ab+c) mod l
-// where l = 2^252 + 27742317777372353535851937790883648493.
+//
+//	s[0]+256*s[1]+...+256^31*s[31] = (ab+c) mod l
+//	where l = 2^252 + 27742317777372353535851937790883648493.
func scMulAdd(s, a, b, c *[32]byte) {
a0 := 2097151 & load3(a[:])
a1 := 2097151 & (load4(a[2:]) >> 5)
}
// Input:
-// s[0]+256*s[1]+...+256^63*s[63] = s
+//
+//	s[0]+256*s[1]+...+256^63*s[63] = s
//
// Output:
-// s[0]+256*s[1]+...+256^31*s[31] = s mod l
-// where l = 2^252 + 27742317777372353535851937790883648493.
+//
+//	s[0]+256*s[1]+...+256^31*s[31] = s mod l
+//	where l = 2^252 + 27742317777372353535851937790883648493.
func scReduce(out *[32]byte, s *[64]byte) {
s0 := 2097151 & load3(s[:])
s1 := 2097151 & (load4(s[2:]) >> 5)
// p224CmovznzU64 is a single-word conditional move.
//
// Postconditions:
-// out1 = (if arg1 = 0 then arg2 else arg3)
+//
+//	out1 = (if arg1 = 0 then arg2 else arg3)
//
// Input Bounds:
-// arg1: [0x0 ~> 0x1]
-// arg2: [0x0 ~> 0xffffffffffffffff]
-// arg3: [0x0 ~> 0xffffffffffffffff]
+//
+//	arg1: [0x0 ~> 0x1]
+//	arg2: [0x0 ~> 0xffffffffffffffff]
+//	arg3: [0x0 ~> 0xffffffffffffffff]
+//
// Output Bounds:
-// out1: [0x0 ~> 0xffffffffffffffff]
+//
+//	out1: [0x0 ~> 0xffffffffffffffff]
func p224CmovznzU64(out1 *uint64, arg1 p224Uint1, arg2 uint64, arg3 uint64) {
x1 := (uint64(arg1) * 0xffffffffffffffff)
x2 := ((x1 & arg3) | ((^x1) & arg2))
// p224Mul multiplies two field elements in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
-// 0 ≤ eval arg2 < m
+//
+//	0 ≤ eval arg1 < m
+//	0 ≤ eval arg2 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg2)) mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg2)) mod m
+//	0 ≤ eval out1 < m
func p224Mul(out1 *p224MontgomeryDomainFieldElement, arg1 *p224MontgomeryDomainFieldElement, arg2 *p224MontgomeryDomainFieldElement) {
x1 := arg1[1]
x2 := arg1[2]
// p224Square squares a field element in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+//	0 ≤ eval arg1 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg1)) mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg1)) mod m
+//	0 ≤ eval out1 < m
func p224Square(out1 *p224MontgomeryDomainFieldElement, arg1 *p224MontgomeryDomainFieldElement) {
x1 := arg1[1]
x2 := arg1[2]
// p224Add adds two field elements in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
-// 0 ≤ eval arg2 < m
+//
+//	0 ≤ eval arg1 < m
+//	0 ≤ eval arg2 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) + eval (from_montgomery arg2)) mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) + eval (from_montgomery arg2)) mod m
+//	0 ≤ eval out1 < m
func p224Add(out1 *p224MontgomeryDomainFieldElement, arg1 *p224MontgomeryDomainFieldElement, arg2 *p224MontgomeryDomainFieldElement) {
var x1 uint64
var x2 uint64
// p224Sub subtracts two field elements in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
-// 0 ≤ eval arg2 < m
+//
+//	0 ≤ eval arg1 < m
+//	0 ≤ eval arg2 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) - eval (from_montgomery arg2)) mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) - eval (from_montgomery arg2)) mod m
+//	0 ≤ eval out1 < m
func p224Sub(out1 *p224MontgomeryDomainFieldElement, arg1 *p224MontgomeryDomainFieldElement, arg2 *p224MontgomeryDomainFieldElement) {
var x1 uint64
var x2 uint64
// p224SetOne returns the field element one in the Montgomery domain.
//
// Postconditions:
-// eval (from_montgomery out1) mod m = 1 mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = 1 mod m
+//	0 ≤ eval out1 < m
func p224SetOne(out1 *p224MontgomeryDomainFieldElement) {
out1[0] = 0xffffffff00000000
out1[1] = 0xffffffffffffffff
// p224FromMontgomery translates a field element out of the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+//	0 ≤ eval arg1 < m
+//
// Postconditions:
-// eval out1 mod m = (eval arg1 * ((2^64)⁻¹ mod m)^4) mod m
-// 0 ≤ eval out1 < m
+//
+//	eval out1 mod m = (eval arg1 * ((2^64)⁻¹ mod m)^4) mod m
+//	0 ≤ eval out1 < m
func p224FromMontgomery(out1 *p224NonMontgomeryDomainFieldElement, arg1 *p224MontgomeryDomainFieldElement) {
x1 := arg1[0]
var x2 uint64
// p224ToMontgomery translates a field element into the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+//	0 ≤ eval arg1 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = eval arg1 mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = eval arg1 mod m
+//	0 ≤ eval out1 < m
func p224ToMontgomery(out1 *p224MontgomeryDomainFieldElement, arg1 *p224NonMontgomeryDomainFieldElement) {
x1 := arg1[1]
x2 := arg1[2]
// p224Selectznz is a multi-limb conditional select.
//
// Postconditions:
-// eval out1 = (if arg1 = 0 then eval arg2 else eval arg3)
+//
+//	eval out1 = (if arg1 = 0 then eval arg2 else eval arg3)
//
// Input Bounds:
-// arg1: [0x0 ~> 0x1]
-// arg2: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
-// arg3: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
+//	arg1: [0x0 ~> 0x1]
+//	arg2: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//	arg3: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
// Output Bounds:
-// out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
+//	out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
func p224Selectznz(out1 *[4]uint64, arg1 p224Uint1, arg2 *[4]uint64, arg3 *[4]uint64) {
var x1 uint64
p224CmovznzU64(&x1, arg1, arg2[0], arg3[0])
// p224ToBytes serializes a field element NOT in the Montgomery domain to bytes in little-endian order.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+//	0 ≤ eval arg1 < m
+//
// Postconditions:
-// out1 = map (λ x, ⌊((eval arg1 mod m) mod 2^(8 * (x + 1))) / 2^(8 * x)⌋) [0..27]
+//
+//	out1 = map (λ x, ⌊((eval arg1 mod m) mod 2^(8 * (x + 1))) / 2^(8 * x)⌋) [0..27]
//
// Input Bounds:
-// arg1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffff]]
+//
+//	arg1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffff]]
+//
// Output Bounds:
-// out1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff]]
+//
+//	out1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff]]
func p224ToBytes(out1 *[28]uint8, arg1 *[4]uint64) {
x1 := arg1[3]
x2 := arg1[2]
// p224FromBytes deserializes a field element NOT in the Montgomery domain from bytes in little-endian order.
//
// Preconditions:
-// 0 ≤ bytes_eval arg1 < m
+//
+//	0 ≤ bytes_eval arg1 < m
+//
// Postconditions:
-// eval out1 mod m = bytes_eval arg1 mod m
-// 0 ≤ eval out1 < m
+//
+//	eval out1 mod m = bytes_eval arg1 mod m
+//	0 ≤ eval out1 < m
//
// Input Bounds:
-// arg1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff]]
+//
+//	arg1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff]]
+//
// Output Bounds:
-// out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffff]]
+//
+//	out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffff]]
func p224FromBytes(out1 *[4]uint64, arg1 *[28]uint8) {
x1 := (uint64(arg1[27]) << 24)
x2 := (uint64(arg1[26]) << 16)
// p384CmovznzU64 is a single-word conditional move.
//
// Postconditions:
-// out1 = (if arg1 = 0 then arg2 else arg3)
+//
+//	out1 = (if arg1 = 0 then arg2 else arg3)
//
// Input Bounds:
-// arg1: [0x0 ~> 0x1]
-// arg2: [0x0 ~> 0xffffffffffffffff]
-// arg3: [0x0 ~> 0xffffffffffffffff]
+//
+//	arg1: [0x0 ~> 0x1]
+//	arg2: [0x0 ~> 0xffffffffffffffff]
+//	arg3: [0x0 ~> 0xffffffffffffffff]
+//
// Output Bounds:
-// out1: [0x0 ~> 0xffffffffffffffff]
+//
+//	out1: [0x0 ~> 0xffffffffffffffff]
func p384CmovznzU64(out1 *uint64, arg1 p384Uint1, arg2 uint64, arg3 uint64) {
x1 := (uint64(arg1) * 0xffffffffffffffff)
x2 := ((x1 & arg3) | ((^x1) & arg2))
// p384Mul multiplies two field elements in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
-// 0 ≤ eval arg2 < m
+//
+//	0 ≤ eval arg1 < m
+//	0 ≤ eval arg2 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg2)) mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg2)) mod m
+//	0 ≤ eval out1 < m
func p384Mul(out1 *p384MontgomeryDomainFieldElement, arg1 *p384MontgomeryDomainFieldElement, arg2 *p384MontgomeryDomainFieldElement) {
x1 := arg1[1]
x2 := arg1[2]
// p384Square squares a field element in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+//	0 ≤ eval arg1 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg1)) mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg1)) mod m
+//	0 ≤ eval out1 < m
func p384Square(out1 *p384MontgomeryDomainFieldElement, arg1 *p384MontgomeryDomainFieldElement) {
x1 := arg1[1]
x2 := arg1[2]
// p384Add adds two field elements in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
-// 0 ≤ eval arg2 < m
+//
+//	0 ≤ eval arg1 < m
+//	0 ≤ eval arg2 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) + eval (from_montgomery arg2)) mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) + eval (from_montgomery arg2)) mod m
+//	0 ≤ eval out1 < m
func p384Add(out1 *p384MontgomeryDomainFieldElement, arg1 *p384MontgomeryDomainFieldElement, arg2 *p384MontgomeryDomainFieldElement) {
var x1 uint64
var x2 uint64
// p384Sub subtracts two field elements in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
-// 0 ≤ eval arg2 < m
+//
+//	0 ≤ eval arg1 < m
+//	0 ≤ eval arg2 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) - eval (from_montgomery arg2)) mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) - eval (from_montgomery arg2)) mod m
+//	0 ≤ eval out1 < m
func p384Sub(out1 *p384MontgomeryDomainFieldElement, arg1 *p384MontgomeryDomainFieldElement, arg2 *p384MontgomeryDomainFieldElement) {
var x1 uint64
var x2 uint64
// p384SetOne returns the field element one in the Montgomery domain.
//
// Postconditions:
-// eval (from_montgomery out1) mod m = 1 mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = 1 mod m
+//	0 ≤ eval out1 < m
func p384SetOne(out1 *p384MontgomeryDomainFieldElement) {
out1[0] = 0xffffffff00000001
out1[1] = 0xffffffff
// p384FromMontgomery translates a field element out of the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+//	0 ≤ eval arg1 < m
+//
// Postconditions:
-// eval out1 mod m = (eval arg1 * ((2^64)⁻¹ mod m)^6) mod m
-// 0 ≤ eval out1 < m
+//
+//	eval out1 mod m = (eval arg1 * ((2^64)⁻¹ mod m)^6) mod m
+//	0 ≤ eval out1 < m
func p384FromMontgomery(out1 *p384NonMontgomeryDomainFieldElement, arg1 *p384MontgomeryDomainFieldElement) {
x1 := arg1[0]
var x2 uint64
// p384ToMontgomery translates a field element into the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+//	0 ≤ eval arg1 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = eval arg1 mod m
-// 0 ≤ eval out1 < m
+//
+//	eval (from_montgomery out1) mod m = eval arg1 mod m
+//	0 ≤ eval out1 < m
func p384ToMontgomery(out1 *p384MontgomeryDomainFieldElement, arg1 *p384NonMontgomeryDomainFieldElement) {
x1 := arg1[1]
x2 := arg1[2]
// p384Selectznz is a multi-limb conditional select.
//
// Postconditions:
-// eval out1 = (if arg1 = 0 then eval arg2 else eval arg3)
+//
+//	eval out1 = (if arg1 = 0 then eval arg2 else eval arg3)
//
// Input Bounds:
-// arg1: [0x0 ~> 0x1]
-// arg2: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
-// arg3: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
+//	arg1: [0x0 ~> 0x1]
+//	arg2: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//	arg3: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
// Output Bounds:
-// out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
+//	out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
func p384Selectznz(out1 *[6]uint64, arg1 p384Uint1, arg2 *[6]uint64, arg3 *[6]uint64) {
var x1 uint64
p384CmovznzU64(&x1, arg1, arg2[0], arg3[0])
// p384ToBytes serializes a field element NOT in the Montgomery domain to bytes in little-endian order.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+//	0 ≤ eval arg1 < m
+//
// Postconditions:
-// out1 = map (λ x, ⌊((eval arg1 mod m) mod 2^(8 * (x + 1))) / 2^(8 * x)⌋) [0..47]
+//
+//	out1 = map (λ x, ⌊((eval arg1 mod m) mod 2^(8 * (x + 1))) / 2^(8 * x)⌋) [0..47]
//
// Input Bounds:
-// arg1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
+//	arg1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
// Output Bounds:
-// out1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff]]
+//
+//	out1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff]]
func p384ToBytes(out1 *[48]uint8, arg1 *[6]uint64) {
x1 := arg1[5]
x2 := arg1[4]
// p384FromBytes deserializes a field element NOT in the Montgomery domain from bytes in little-endian order.
//
// Preconditions:
-// 0 ≤ bytes_eval arg1 < m
+//
+//	0 ≤ bytes_eval arg1 < m
+//
// Postconditions:
-// eval out1 mod m = bytes_eval arg1 mod m
-// 0 ≤ eval out1 < m
+//
+//	eval out1 mod m = bytes_eval arg1 mod m
+//	0 ≤ eval out1 < m
//
// Input Bounds:
-// arg1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff]]
+//
+//	arg1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff]]
+//
// Output Bounds:
-// out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
+// out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
func p384FromBytes(out1 *[6]uint64, arg1 *[48]uint8) {
x1 := (uint64(arg1[47]) << 56)
x2 := (uint64(arg1[46]) << 48)
// p521CmovznzU64 is a single-word conditional move.
//
// Postconditions:
-// out1 = (if arg1 = 0 then arg2 else arg3)
+//
+// out1 = (if arg1 = 0 then arg2 else arg3)
//
// Input Bounds:
-// arg1: [0x0 ~> 0x1]
-// arg2: [0x0 ~> 0xffffffffffffffff]
-// arg3: [0x0 ~> 0xffffffffffffffff]
+//
+// arg1: [0x0 ~> 0x1]
+// arg2: [0x0 ~> 0xffffffffffffffff]
+// arg3: [0x0 ~> 0xffffffffffffffff]
+//
// Output Bounds:
-// out1: [0x0 ~> 0xffffffffffffffff]
+//
+// out1: [0x0 ~> 0xffffffffffffffff]
func p521CmovznzU64(out1 *uint64, arg1 p521Uint1, arg2 uint64, arg3 uint64) {
x1 := (uint64(arg1) * 0xffffffffffffffff)
x2 := ((x1 & arg3) | ((^x1) & arg2))
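The mask construction in those two lines can be sketched on its own. This is a standalone, hedged restatement of the generated function's postcondition (the helper name `cmovznzU64` here is illustrative, not the repository's exported API): a branchless select that returns `arg2` when the flag is 0 and `arg3` when it is 1.

```go
package main

import "fmt"

// cmovznzU64 mirrors the single-word conditional move above:
// flag * 0xffffffffffffffff is all ones when flag == 1 and zero
// when flag == 0, so the OR of the two masked operands selects
// one value without any data-dependent branch.
func cmovznzU64(flag, arg2, arg3 uint64) uint64 {
	mask := flag * 0xffffffffffffffff
	return (mask & arg3) | ((^mask) & arg2)
}

func main() {
	fmt.Println(cmovznzU64(0, 10, 20)) // flag 0 selects arg2
	fmt.Println(cmovznzU64(1, 10, 20)) // flag 1 selects arg3
}
```

Constant-time selection like this is why the generated field code avoids `if` on secret data: the memory access pattern and instruction sequence are identical for both flag values.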
// p521Mul multiplies two field elements in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
-// 0 ≤ eval arg2 < m
+//
+// 0 ≤ eval arg1 < m
+// 0 ≤ eval arg2 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg2)) mod m
-// 0 ≤ eval out1 < m
+//
+// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg2)) mod m
+// 0 ≤ eval out1 < m
func p521Mul(out1 *p521MontgomeryDomainFieldElement, arg1 *p521MontgomeryDomainFieldElement, arg2 *p521MontgomeryDomainFieldElement) {
x1 := arg1[1]
x2 := arg1[2]
// p521Square squares a field element in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+// 0 ≤ eval arg1 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg1)) mod m
-// 0 ≤ eval out1 < m
+//
+// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) * eval (from_montgomery arg1)) mod m
+// 0 ≤ eval out1 < m
func p521Square(out1 *p521MontgomeryDomainFieldElement, arg1 *p521MontgomeryDomainFieldElement) {
x1 := arg1[1]
x2 := arg1[2]
// p521Add adds two field elements in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
-// 0 ≤ eval arg2 < m
+//
+// 0 ≤ eval arg1 < m
+// 0 ≤ eval arg2 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) + eval (from_montgomery arg2)) mod m
-// 0 ≤ eval out1 < m
+//
+// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) + eval (from_montgomery arg2)) mod m
+// 0 ≤ eval out1 < m
func p521Add(out1 *p521MontgomeryDomainFieldElement, arg1 *p521MontgomeryDomainFieldElement, arg2 *p521MontgomeryDomainFieldElement) {
var x1 uint64
var x2 uint64
// p521Sub subtracts two field elements in the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
-// 0 ≤ eval arg2 < m
+//
+// 0 ≤ eval arg1 < m
+// 0 ≤ eval arg2 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) - eval (from_montgomery arg2)) mod m
-// 0 ≤ eval out1 < m
+//
+// eval (from_montgomery out1) mod m = (eval (from_montgomery arg1) - eval (from_montgomery arg2)) mod m
+// 0 ≤ eval out1 < m
func p521Sub(out1 *p521MontgomeryDomainFieldElement, arg1 *p521MontgomeryDomainFieldElement, arg2 *p521MontgomeryDomainFieldElement) {
var x1 uint64
var x2 uint64
// p521SetOne returns the field element one in the Montgomery domain.
//
// Postconditions:
-// eval (from_montgomery out1) mod m = 1 mod m
-// 0 ≤ eval out1 < m
+//
+// eval (from_montgomery out1) mod m = 1 mod m
+// 0 ≤ eval out1 < m
func p521SetOne(out1 *p521MontgomeryDomainFieldElement) {
out1[0] = 0x80000000000000
out1[1] = uint64(0x0)
// p521FromMontgomery translates a field element out of the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+// 0 ≤ eval arg1 < m
+//
// Postconditions:
-// eval out1 mod m = (eval arg1 * ((2^64)⁻¹ mod m)^9) mod m
-// 0 ≤ eval out1 < m
+//
+// eval out1 mod m = (eval arg1 * ((2^64)⁻¹ mod m)^9) mod m
+// 0 ≤ eval out1 < m
func p521FromMontgomery(out1 *p521NonMontgomeryDomainFieldElement, arg1 *p521MontgomeryDomainFieldElement) {
x1 := arg1[0]
var x2 uint64
// p521ToMontgomery translates a field element into the Montgomery domain.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+// 0 ≤ eval arg1 < m
+//
// Postconditions:
-// eval (from_montgomery out1) mod m = eval arg1 mod m
-// 0 ≤ eval out1 < m
+//
+// eval (from_montgomery out1) mod m = eval arg1 mod m
+// 0 ≤ eval out1 < m
func p521ToMontgomery(out1 *p521MontgomeryDomainFieldElement, arg1 *p521NonMontgomeryDomainFieldElement) {
var x1 uint64
var x2 uint64
// p521Selectznz is a multi-limb conditional select.
//
// Postconditions:
-// eval out1 = (if arg1 = 0 then eval arg2 else eval arg3)
+//
+// eval out1 = (if arg1 = 0 then eval arg2 else eval arg3)
//
// Input Bounds:
-// arg1: [0x0 ~> 0x1]
-// arg2: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
-// arg3: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
+// arg1: [0x0 ~> 0x1]
+// arg2: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+// arg3: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
// Output Bounds:
-// out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
+//
+// out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff]]
func p521Selectznz(out1 *[9]uint64, arg1 p521Uint1, arg2 *[9]uint64, arg3 *[9]uint64) {
var x1 uint64
p521CmovznzU64(&x1, arg1, arg2[0], arg3[0])
// p521ToBytes serializes a field element NOT in the Montgomery domain to bytes in little-endian order.
//
// Preconditions:
-// 0 ≤ eval arg1 < m
+//
+// 0 ≤ eval arg1 < m
+//
// Postconditions:
-// out1 = map (λ x, ⌊((eval arg1 mod m) mod 2^(8 * (x + 1))) / 2^(8 * x)⌋) [0..65]
+//
+// out1 = map (λ x, ⌊((eval arg1 mod m) mod 2^(8 * (x + 1))) / 2^(8 * x)⌋) [0..65]
//
// Input Bounds:
-// arg1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0x1ff]]
+//
+// arg1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0x1ff]]
+//
// Output Bounds:
-// out1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0x1]]
+//
+// out1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0x1]]
func p521ToBytes(out1 *[66]uint8, arg1 *[9]uint64) {
x1 := arg1[8]
x2 := arg1[7]
// p521FromBytes deserializes a field element NOT in the Montgomery domain from bytes in little-endian order.
//
// Preconditions:
-// 0 ≤ bytes_eval arg1 < m
+//
+// 0 ≤ bytes_eval arg1 < m
+//
// Postconditions:
-// eval out1 mod m = bytes_eval arg1 mod m
-// 0 ≤ eval out1 < m
+//
+// eval out1 mod m = bytes_eval arg1 mod m
+// 0 ≤ eval out1 < m
//
// Input Bounds:
-// arg1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0x1]]
+//
+// arg1: [[0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0xff], [0x0 ~> 0x1]]
+//
// Output Bounds:
-// out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0x1ff]]
+//
+// out1: [[0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0xffffffffffffffff], [0x0 ~> 0x1ff]]
func p521FromBytes(out1 *[9]uint64, arg1 *[66]uint8) {
x1 := (uint64(p521Uint1(arg1[65])) << 8)
x2 := arg1[64]
// The first table contains (x,y) field element pairs for 16 multiples of the
// base point, G.
//
-// Index | Index (binary) | Value
-// 0 | 0000 | 0G (all zeros, omitted)
-// 1 | 0001 | G
-// 2 | 0010 | 2**64G
-// 3 | 0011 | 2**64G + G
-// 4 | 0100 | 2**128G
-// 5 | 0101 | 2**128G + G
-// 6 | 0110 | 2**128G + 2**64G
-// 7 | 0111 | 2**128G + 2**64G + G
-// 8 | 1000 | 2**192G
-// 9 | 1001 | 2**192G + G
-// 10 | 1010 | 2**192G + 2**64G
-// 11 | 1011 | 2**192G + 2**64G + G
-// 12 | 1100 | 2**192G + 2**128G
-// 13 | 1101 | 2**192G + 2**128G + G
-// 14 | 1110 | 2**192G + 2**128G + 2**64G
-// 15 | 1111 | 2**192G + 2**128G + 2**64G + G
+// Index | Index (binary) | Value
+// 0 | 0000 | 0G (all zeros, omitted)
+// 1 | 0001 | G
+// 2 | 0010 | 2**64G
+// 3 | 0011 | 2**64G + G
+// 4 | 0100 | 2**128G
+// 5 | 0101 | 2**128G + G
+// 6 | 0110 | 2**128G + 2**64G
+// 7 | 0111 | 2**128G + 2**64G + G
+// 8 | 1000 | 2**192G
+// 9 | 1001 | 2**192G + G
+// 10 | 1010 | 2**192G + 2**64G
+// 11 | 1011 | 2**192G + 2**64G + G
+// 12 | 1100 | 2**192G + 2**128G
+// 13 | 1101 | 2**192G + 2**128G + G
+// 14 | 1110 | 2**192G + 2**128G + 2**64G
+// 15 | 1111 | 2**192G + 2**128G + 2**64G + G
//
// The second table follows the same style, but the terms are 2**32G,
// 2**96G, 2**160G, 2**224G.
const bottom28Bits = 0xfffffff
// nonZeroToAllOnes returns:
-// 0xffffffff for 0 < x <= 2**31
-// 0 for x == 0 or x > 2**31.
+//
+// 0xffffffff for 0 < x <= 2**31
+// 0 for x == 0 or x > 2**31.
func nonZeroToAllOnes(x uint32) uint32 {
return ((x - 1) >> 31) - 1
}
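The borrow trick behind that one-liner is worth spelling out; a minimal runnable sketch with the edge cases from the doc comment:

```go
package main

import "fmt"

// nonZeroToAllOnes: for x in (0, 2**31], x-1 has its top bit clear,
// so (x-1)>>31 is 0 and the final subtraction wraps 0-1 to all ones.
// For x == 0 or x > 2**31, (x-1)>>31 is 1 and the result is 0.
func nonZeroToAllOnes(x uint32) uint32 {
	return ((x - 1) >> 31) - 1
}

func main() {
	fmt.Printf("%#x\n", nonZeroToAllOnes(0))       // zero input -> 0
	fmt.Printf("%#x\n", nonZeroToAllOnes(7))       // nonzero -> all ones
	fmt.Printf("%#x\n", nonZeroToAllOnes(1<<31))   // 2**31 -> all ones
	fmt.Printf("%#x\n", nonZeroToAllOnes(1<<31+1)) // > 2**31 -> 0
}
```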
// p256Mul sets out=in*in2.
//
// On entry: in[0,2,...] < 2**30, in[1,3,...] < 2**29 and
-// in2[0,2,...] < 2**30, in2[1,3,...] < 2**29.
+//
+// in2[0,2,...] < 2**30, in2[1,3,...] < 2**29.
+//
// On exit: out[0,2,...] < 2**30, out[1,3,...] < 2**29.
func p256Mul(out, in, in2 *[p256Limbs]uint32) {
var tmp [17]uint64
// p256Invert calculates |out| = |in|^{-1}
//
// Based on Fermat's Little Theorem:
-// a^p = a (mod p)
-// a^{p-1} = 1 (mod p)
-// a^{p-2} = a^{-1} (mod p)
+//
+// a^p = a (mod p)
+// a^{p-1} = 1 (mod p)
+// a^{p-2} = a^{-1} (mod p)
func p256Invert(out, in *[p256Limbs]uint32) {
var ftmp, ftmp2 [p256Limbs]uint32
// getrandom() syscall. In linux at most 2^25-1 bytes will be returned per call.
// From the manpage
//
-// * When reading from the urandom source, a maximum of 33554431 bytes
-// is returned by a single call to getrandom() on systems where int
-// has a size of 32 bits.
+// - When reading from the urandom source, a maximum of 33554431 bytes
+// is returned by a single call to getrandom() on systems where int
+// has a size of 32 bits.
const maxGetRandomRead = (1 << 25) - 1
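The per-call cap above forces large requests to be split. A hedged sketch of that chunking, with the `read` callback standing in for the getrandom(2) wrapper (the callback and the name `readInBatches` are this sketch's assumptions, not the repository's API):

```go
package main

import "fmt"

const maxGetRandomRead = (1 << 25) - 1

// readInBatches fills buf by issuing reads of at most
// maxGetRandomRead bytes each, as the syscall limit requires.
func readInBatches(buf []byte, read func([]byte) error) error {
	for len(buf) > 0 {
		n := len(buf)
		if n > maxGetRandomRead {
			n = maxGetRandomRead
		}
		if err := read(buf[:n]); err != nil {
			return err
		}
		buf = buf[n:]
	}
	return nil
}

func main() {
	calls := 0
	stub := func(p []byte) error { calls++; return nil }
	// 2^26 bytes needs three calls: 2^25-1, 2^25-1, and a 2-byte tail.
	readInBatches(make([]byte, 1<<26), stub)
	fmt.Println(calls)
}
```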
}
// These are ASN1 DER structures:
-// DigestInfo ::= SEQUENCE {
-// digestAlgorithm AlgorithmIdentifier,
-// digest OCTET STRING
-// }
+//
+// DigestInfo ::= SEQUENCE {
+// digestAlgorithm AlgorithmIdentifier,
+// digest OCTET STRING
+// }
+//
// For performance, we don't use the generic ASN1 encoder. Rather, we
// precompute a prefix of the digest value that makes a valid ASN1 DER string
// with the correct contents.
}
// These vectors have been tested with
-// `openssl rsautl -verify -inkey pk -in signature | hexdump -C`
+//
+// `openssl rsautl -verify -inkey pk -in signature | hexdump -C`
var signPKCS1v15Tests = []signPKCS1v15Test{
{"Test.\n", "a4f3fa6ea93bcdd0c57be020c1193ecbfd6f200a3d95c409769b029578fa0e336ad9a347600e40d3ae823b8c7e6bad88cc07c1d54c3a1523cbbb6d58efc362ae"},
}
//
// - Anything else comes before RC4
//
-// RC4 has practically exploitable biases. See https://www.rc4nomore.com.
+// RC4 has practically exploitable biases. See https://www.rc4nomore.com.
//
// - Anything else comes before CBC_SHA256
//
-// SHA-256 variants of the CBC ciphersuites don't implement any Lucky13
-// countermeasures. See http://www.isg.rhul.ac.uk/tls/Lucky13.html and
-// https://www.imperialviolet.org/2013/02/04/luckythirteen.html.
+// SHA-256 variants of the CBC ciphersuites don't implement any Lucky13
+// countermeasures. See http://www.isg.rhul.ac.uk/tls/Lucky13.html and
+// https://www.imperialviolet.org/2013/02/04/luckythirteen.html.
//
// - Anything else comes before 3DES
//
-// 3DES has 64-bit blocks, which makes it fundamentally susceptible to
-// birthday attacks. See https://sweet32.info.
+// 3DES has 64-bit blocks, which makes it fundamentally susceptible to
+// birthday attacks. See https://sweet32.info.
//
// - ECDHE comes before anything else
//
-// Once we got the broken stuff out of the way, the most important
-// property a cipher suite can have is forward secrecy. We don't
-// implement FFDHE, so that means ECDHE.
+// Once we got the broken stuff out of the way, the most important
+// property a cipher suite can have is forward secrecy. We don't
+// implement FFDHE, so that means ECDHE.
//
// - AEADs come before CBC ciphers
//
-// Even with Lucky13 countermeasures, MAC-then-Encrypt CBC cipher suites
-// are fundamentally fragile, and suffered from an endless sequence of
-// padding oracle attacks. See https://eprint.iacr.org/2015/1129,
-// https://www.imperialviolet.org/2014/12/08/poodleagain.html, and
-// https://blog.cloudflare.com/yet-another-padding-oracle-in-openssl-cbc-ciphersuites/.
+// Even with Lucky13 countermeasures, MAC-then-Encrypt CBC cipher suites
+// are fundamentally fragile, and suffered from an endless sequence of
+// padding oracle attacks. See https://eprint.iacr.org/2015/1129,
+// https://www.imperialviolet.org/2014/12/08/poodleagain.html, and
+// https://blog.cloudflare.com/yet-another-padding-oracle-in-openssl-cbc-ciphersuites/.
//
// - AES comes before ChaCha20
//
-// When AES hardware is available, AES-128-GCM and AES-256-GCM are faster
-// than ChaCha20Poly1305.
+// When AES hardware is available, AES-128-GCM and AES-256-GCM are faster
+// than ChaCha20Poly1305.
//
-// When AES hardware is not available, AES-128-GCM is one or more of: much
-// slower, way more complex, and less safe (because not constant time)
-// than ChaCha20Poly1305.
+// When AES hardware is not available, AES-128-GCM is one or more of: much
+// slower, way more complex, and less safe (because not constant time)
+// than ChaCha20Poly1305.
//
-// We use this list if we think both peers have AES hardware, and
-// cipherSuitesPreferenceOrderNoAES otherwise.
+// We use this list if we think both peers have AES hardware, and
+// cipherSuitesPreferenceOrderNoAES otherwise.
//
// - AES-128 comes before AES-256
//
-// The only potential advantages of AES-256 are better multi-target
-// margins, and hypothetical post-quantum properties. Neither apply to
-// TLS, and AES-256 is slower due to its four extra rounds (which don't
-// contribute to the advantages above).
+// The only potential advantages of AES-256 are better multi-target
+// margins, and hypothetical post-quantum properties. Neither apply to
+// TLS, and AES-256 is slower due to its four extra rounds (which don't
+// contribute to the advantages above).
//
// - ECDSA comes before RSA
//
-// The relative order of ECDSA and RSA cipher suites doesn't matter,
-// as they depend on the certificate. Pick one to get a stable order.
+// The relative order of ECDSA and RSA cipher suites doesn't matter,
+// as they depend on the certificate. Pick one to get a stable order.
var cipherSuitesPreferenceOrder = []uint16{
// AEADs w/ ECDHE
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
// readRecordOrCCS reads one or more TLS records from the connection and
// updates the record layer state. Some invariants:
-// * c.in must be locked
-// * c.input must be empty
+// - c.in must be locked
+// - c.input must be empty
+//
// During the handshake one and only one of the following will happen:
// - c.hand grows
// - c.in.changeCipherSpec is called
// - an error is returned
+//
// After the handshake one and only one of the following will happen:
// - c.hand grows
// - c.input is set
)
// Generated using:
-// openssl genrsa 1024 | openssl pkcs8 -topk8 -nocrypt
+//
+// openssl genrsa 1024 | openssl pkcs8 -topk8 -nocrypt
var pkcs8RSAPrivateKeyHex = `30820278020100300d06092a864886f70d0101010500048202623082025e02010002818100cfb1b5bf9685ffa97b4f99df4ff122b70e59ac9b992f3bc2b3dde17d53c1a34928719b02e8fd17839499bfbd515bd6ef99c7a1c47a239718fe36bfd824c0d96060084b5f67f0273443007a24dfaf5634f7772c9346e10eb294c2306671a5a5e719ae24b4de467291bc571014b0e02dec04534d66a9bb171d644b66b091780e8d020301000102818100b595778383c4afdbab95d2bfed12b3f93bb0a73a7ad952f44d7185fd9ec6c34de8f03a48770f2009c8580bcd275e9632714e9a5e3f32f29dc55474b2329ff0ebc08b3ffcb35bc96e6516b483df80a4a59cceb71918cbabf91564e64a39d7e35dce21cb3031824fdbc845dba6458852ec16af5dddf51a8397a8797ae0337b1439024100ea0eb1b914158c70db39031dd8904d6f18f408c85fbbc592d7d20dee7986969efbda081fdf8bc40e1b1336d6b638110c836bfdc3f314560d2e49cd4fbde1e20b024100e32a4e793b574c9c4a94c8803db5152141e72d03de64e54ef2c8ed104988ca780cd11397bc359630d01b97ebd87067c5451ba777cf045ca23f5912f1031308c702406dfcdbbd5a57c9f85abc4edf9e9e29153507b07ce0a7ef6f52e60dcfebe1b8341babd8b789a837485da6c8d55b29bbb142ace3c24a1f5b54b454d01b51e2ad03024100bd6a2b60dee01e1b3bfcef6a2f09ed027c273cdbbaf6ba55a80f6dcc64e4509ee560f84b4f3e076bd03b11e42fe71a3fdd2dffe7e0902c8584f8cad877cdc945024100aa512fa4ada69881f1d8bb8ad6614f192b83200aef5edf4811313d5ef30a86cbd0a90f7b025c71ea06ec6b34db6306c86b1040670fd8654ad7291d066d06d031`
// Generated using:
-// openssl ecparam -genkey -name secp224r1 | openssl pkcs8 -topk8 -nocrypt
+//
+// openssl ecparam -genkey -name secp224r1 | openssl pkcs8 -topk8 -nocrypt
var pkcs8P224PrivateKeyHex = `3078020100301006072a8648ce3d020106052b810400210461305f020101041cca3d72b3e88fed2684576dad9b80a9180363a5424986900e3abcab3fa13c033a0004f8f2a6372872a4e61263ed893afb919576a4cacfecd6c081a2cbc76873cf4ba8530703c6042b3a00e2205087e87d2435d2e339e25702fae1`
// Generated using:
-// openssl ecparam -genkey -name secp256r1 | openssl pkcs8 -topk8 -nocrypt
+//
+// openssl ecparam -genkey -name secp256r1 | openssl pkcs8 -topk8 -nocrypt
var pkcs8P256PrivateKeyHex = `308187020100301306072a8648ce3d020106082a8648ce3d030107046d306b0201010420dad6b2f49ca774c36d8ae9517e935226f667c929498f0343d2424d0b9b591b43a14403420004b9c9b90095476afe7b860d8bd43568cab7bcb2eed7b8bf2fa0ce1762dd20b04193f859d2d782b1e4cbfd48492f1f533113a6804903f292258513837f07fda735`
// Generated using:
-// openssl ecparam -genkey -name secp384r1 | openssl pkcs8 -topk8 -nocrypt
+//
+// openssl ecparam -genkey -name secp384r1 | openssl pkcs8 -topk8 -nocrypt
var pkcs8P384PrivateKeyHex = `3081b6020100301006072a8648ce3d020106052b8104002204819e30819b02010104309bf832f6aaaeacb78ce47ffb15e6fd0fd48683ae79df6eca39bfb8e33829ac94aa29d08911568684c2264a08a4ceb679a164036200049070ad4ed993c7770d700e9f6dc2baa83f63dd165b5507f98e8ff29b5d2e78ccbe05c8ddc955dbf0f7497e8222cfa49314fe4e269459f8e880147f70d785e530f2939e4bf9f838325bb1a80ad4cf59272ae0e5efe9a9dc33d874492596304bd3`
// Generated using:
-// openssl ecparam -genkey -name secp521r1 | openssl pkcs8 -topk8 -nocrypt
+//
+// openssl ecparam -genkey -name secp521r1 | openssl pkcs8 -topk8 -nocrypt
//
// Note that OpenSSL will truncate the private key if it can (i.e. it emits it
// like an integer, even though it's an OCTET STRING field). Thus if you
// ToRDNSequence converts n into a single RDNSequence. The following
// attributes are encoded as multi-value RDNs:
//
-// - Country
-// - Organization
-// - OrganizationalUnit
-// - Locality
-// - Province
-// - StreetAddress
-// - PostalCode
+// - Country
+// - Organization
+// - OrganizationalUnit
+// - Locality
+// - Province
+// - StreetAddress
+// - PostalCode
//
// Each ExtraNames entry is encoded as an individual RDN.
func (n Name) ToRDNSequence() (ret RDNSequence) {
// ecPrivateKey reflects an ASN.1 Elliptic Curve Private Key Structure.
// References:
-// RFC 5915
-// SEC1 - http://www.secg.org/sec1-v2.pdf
+//
+// RFC 5915
+// SEC1 - http://www.secg.org/sec1-v2.pdf
+//
// Per RFC 5915 the NamedCurveOID is marked as ASN.1 OPTIONAL, however in
// most cases it is not.
type ecPrivateKey struct {
// pkcs-1 OBJECT IDENTIFIER ::= {
// iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) 1 }
//
-//
// RFC 3279 2.2.1 RSA Signature Algorithms
//
// md2WithRSAEncryption OBJECT IDENTIFIER ::= { pkcs-1 2 }
//
// sha512WithRSAEncryption OBJECT IDENTIFIER ::= { pkcs-1 13 }
//
-//
// RFC 5758 3.1 DSA Signature Algorithms
//
// dsaWithSha256 OBJECT IDENTIFIER ::= {
// hashToPSSParameters contains the DER encoded RSA PSS parameters for the
// SHA256, SHA384, and SHA512 hashes as defined in RFC 3447, Appendix A.2.3.
// The parameters contain the following values:
-// * hashAlgorithm contains the associated hash identifier with NULL parameters
-// * maskGenAlgorithm always contains the default mgf1SHA1 identifier
-// * saltLength contains the length of the associated hash
-// * trailerField always contains the default trailerFieldBC value
+// - hashAlgorithm contains the associated hash identifier with NULL parameters
+// - maskGenAlgorithm always contains the default mgf1SHA1 identifier
+// - saltLength contains the length of the associated hash
+// - trailerField always contains the default trailerFieldBC value
var hashToPSSParameters = map[crypto.Hash]asn1.RawValue{
crypto.SHA256: asn1.RawValue{FullBytes: []byte{48, 52, 160, 15, 48, 13, 6, 9, 96, 134, 72, 1, 101, 3, 4, 2, 1, 5, 0, 161, 28, 48, 26, 6, 9, 42, 134, 72, 134, 247, 13, 1, 1, 8, 48, 13, 6, 9, 96, 134, 72, 1, 101, 3, 4, 2, 1, 5, 0, 162, 3, 2, 1, 32}},
crypto.SHA384: asn1.RawValue{FullBytes: []byte{48, 52, 160, 15, 48, 13, 6, 9, 96, 134, 72, 1, 101, 3, 4, 2, 2, 5, 0, 161, 28, 48, 26, 6, 9, 42, 134, 72, 134, 247, 13, 1, 1, 8, 48, 13, 6, 9, 96, 134, 72, 1, 101, 3, 4, 2, 2, 5, 0, 162, 3, 2, 1, 48}},
// RFC 5480, 2.1.1.1. Named Curve
//
-// secp224r1 OBJECT IDENTIFIER ::= {
-// iso(1) identified-organization(3) certicom(132) curve(0) 33 }
+// secp224r1 OBJECT IDENTIFIER ::= {
+// iso(1) identified-organization(3) certicom(132) curve(0) 33 }
//
-// secp256r1 OBJECT IDENTIFIER ::= {
-// iso(1) member-body(2) us(840) ansi-X9-62(10045) curves(3)
-// prime(1) 7 }
+// secp256r1 OBJECT IDENTIFIER ::= {
+// iso(1) member-body(2) us(840) ansi-X9-62(10045) curves(3)
+// prime(1) 7 }
//
-// secp384r1 OBJECT IDENTIFIER ::= {
-// iso(1) identified-organization(3) certicom(132) curve(0) 34 }
+// secp384r1 OBJECT IDENTIFIER ::= {
+// iso(1) identified-organization(3) certicom(132) curve(0) 34 }
//
-// secp521r1 OBJECT IDENTIFIER ::= {
-// iso(1) identified-organization(3) certicom(132) curve(0) 35 }
+// secp521r1 OBJECT IDENTIFIER ::= {
+// iso(1) identified-organization(3) certicom(132) curve(0) 35 }
//
// NB: secp256r1 is equivalent to prime256v1
var (
// CreateCertificate creates a new X.509 v3 certificate based on a template.
// The following members of template are currently used:
//
-// - AuthorityKeyId
-// - BasicConstraintsValid
-// - CRLDistributionPoints
-// - DNSNames
-// - EmailAddresses
-// - ExcludedDNSDomains
-// - ExcludedEmailAddresses
-// - ExcludedIPRanges
-// - ExcludedURIDomains
-// - ExtKeyUsage
-// - ExtraExtensions
-// - IPAddresses
-// - IsCA
-// - IssuingCertificateURL
-// - KeyUsage
-// - MaxPathLen
-// - MaxPathLenZero
-// - NotAfter
-// - NotBefore
-// - OCSPServer
-// - PermittedDNSDomains
-// - PermittedDNSDomainsCritical
-// - PermittedEmailAddresses
-// - PermittedIPRanges
-// - PermittedURIDomains
-// - PolicyIdentifiers
-// - SerialNumber
-// - SignatureAlgorithm
-// - Subject
-// - SubjectKeyId
-// - URIs
-// - UnknownExtKeyUsage
+// - AuthorityKeyId
+// - BasicConstraintsValid
+// - CRLDistributionPoints
+// - DNSNames
+// - EmailAddresses
+// - ExcludedDNSDomains
+// - ExcludedEmailAddresses
+// - ExcludedIPRanges
+// - ExcludedURIDomains
+// - ExtKeyUsage
+// - ExtraExtensions
+// - IPAddresses
+// - IsCA
+// - IssuingCertificateURL
+// - KeyUsage
+// - MaxPathLen
+// - MaxPathLenZero
+// - NotAfter
+// - NotBefore
+// - OCSPServer
+// - PermittedDNSDomains
+// - PermittedDNSDomainsCritical
+// - PermittedEmailAddresses
+// - PermittedIPRanges
+// - PermittedURIDomains
+// - PolicyIdentifiers
+// - SerialNumber
+// - SignatureAlgorithm
+// - Subject
+// - SubjectKeyId
+// - URIs
+// - UnknownExtKeyUsage
//
// The certificate is signed by parent. If parent is equal to template then the
// certificate is self-signed. The parameter pub is the public key of the
// CreateCertificateRequest creates a new certificate request based on a
// template. The following members of template are used:
//
-// - SignatureAlgorithm
-// - Subject
-// - DNSNames
-// - EmailAddresses
-// - IPAddresses
-// - URIs
-// - ExtraExtensions
-// - Attributes (deprecated)
+// - SignatureAlgorithm
+// - Subject
+// - DNSNames
+// - EmailAddresses
+// - IPAddresses
+// - URIs
+// - ExtraExtensions
+// - Attributes (deprecated)
//
// priv is the private key to sign the CSR with, and the corresponding public
// key will be included in the CSR. It must implement crypto.Signer and its
}
// These CSRs were generated with OpenSSL:
-// openssl req -out CSR.csr -new -sha256 -nodes -keyout privateKey.key -config openssl.cnf
+//
+// openssl req -out CSR.csr -new -sha256 -nodes -keyout privateKey.key -config openssl.cnf
//
// With openssl.cnf containing the following sections:
-// [ v3_req ]
-// basicConstraints = CA:FALSE
-// keyUsage = nonRepudiation, digitalSignature, keyEncipherment
-// subjectAltName = email:gopher@golang.org,DNS:test.example.com
-// [ req_attributes ]
-// challengePassword = ignored challenge
-// unstructuredName = ignored unstructured name
+//
+// [ v3_req ]
+// basicConstraints = CA:FALSE
+// keyUsage = nonRepudiation, digitalSignature, keyEncipherment
+// subjectAltName = email:gopher@golang.org,DNS:test.example.com
+// [ req_attributes ]
+// challengePassword = ignored challenge
+// unstructuredName = ignored unstructured name
var csrBase64Array = [...]string{
// Just [ v3_req ]
"MIIDHDCCAgQCAQAwfjELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDEUMBIGA1UEAwwLQ29tbW9uIE5hbWUxITAfBgkqhkiG9w0BCQEWEnRlc3RAZW1haWwuYWRkcmVzczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK1GY4YFx2ujlZEOJxQVYmsjUnLsd5nFVnNpLE4cV+77sgv9NPNlB8uhn3MXt5leD34rm/2BisCHOifPucYlSrszo2beuKhvwn4+2FxDmWtBEMu/QA16L5IvoOfYZm/gJTsPwKDqvaR0tTU67a9OtxwNTBMI56YKtmwd/o8d3hYv9cg+9ZGAZ/gKONcg/OWYx/XRh6bd0g8DMbCikpWgXKDsvvK1Nk+VtkDO1JxuBaj4Lz/p/MifTfnHoqHxWOWl4EaTs4Ychxsv34/rSj1KD1tJqorIv5Xv2aqv4sjxfbrYzX4kvS5SC1goIovLnhj5UjmQ3Qy8u65eow/LLWw+YFcCAwEAAaBZMFcGCSqGSIb3DQEJDjFKMEgwCQYDVR0TBAIwADALBgNVHQ8EBAMCBeAwLgYDVR0RBCcwJYERZ29waGVyQGdvbGFuZy5vcmeCEHRlc3QuZXhhbXBsZS5jb20wDQYJKoZIhvcNAQELBQADggEBAB6VPMRrchvNW61Tokyq3ZvO6/NoGIbuwUn54q6l5VZW0Ep5Nq8juhegSSnaJ0jrovmUgKDN9vEo2KxuAtwG6udS6Ami3zP+hRd4k9Q8djJPb78nrjzWiindLK5Fps9U5mMoi1ER8ViveyAOTfnZt/jsKUaRsscY2FzE9t9/o5moE6LTcHUS4Ap1eheR+J72WOnQYn3cifYaemsA9MJuLko+kQ6xseqttbh9zjqd9fiCSh/LNkzos9c+mg2yMADitaZinAh+HZi50ooEbjaT3erNq9O6RqwJlgD00g6MQdoz9bTAryCUhCQfkIaepmQ7BxS0pqWNW3MMwfDwx/Snz6g=",
// It is either nil, a type handled by a database driver's NamedValueChecker
// interface, or an instance of one of these types:
//
-// int64
-// float64
-// bool
-// []byte
-// string
-// time.Time
+// int64
+// float64
+// bool
+// []byte
+// string
+// time.Time
//
// If the driver supports cursors, a returned Value may also implement the Rows interface
// in this package. This is used, for example, when a user selects a cursor
// not a variable length type ok should return false.
// If length is not limited other than system limits, it should return math.MaxInt64.
// The following are examples of returned values for various types:
-// TEXT (math.MaxInt64, true)
-// varchar(10) (10, true)
-// nvarchar(10) (10, true)
-// decimal (0, false)
-// int (0, false)
-// bytea(30) (30, true)
+//
+// TEXT (math.MaxInt64, true)
+// varchar(10) (10, true)
+// nvarchar(10) (10, true)
+// decimal (0, false)
+// int (0, false)
+// bytea(30) (30, true)
type RowsColumnTypeLength interface {
Rows
ColumnTypeLength(index int) (length int64, ok bool)
// RowsColumnTypePrecisionScale may be implemented by Rows. It should return
// the precision and scale for decimal types. If not applicable, ok should be false.
// The following are examples of returned values for various types:
-// decimal(38, 4) (38, 4, true)
-// int (0, 0, false)
-// decimal (math.MaxInt64, math.MaxInt64, true)
+//
+// decimal(38, 4) (38, 4, true)
+// int (0, 0, false)
+// decimal (math.MaxInt64, math.MaxInt64, true)
type RowsColumnTypePrecisionScale interface {
Rows
ColumnTypePrecisionScale(index int) (precision, scale int64, ok bool)
// driver package to provide consistent implementations of conversions
// between drivers. The ValueConverters have several uses:
//
-// * converting from the Value types as provided by the sql package
-// into a database table's specific column type and making sure it
-// fits, such as making sure a particular int64 fits in a
-// table's uint16 column.
+// - converting from the Value types as provided by the sql package
+// into a database table's specific column type and making sure it
+// fits, such as making sure a particular int64 fits in a
+// table's uint16 column.
//
-// * converting a value as given from the database into one of the
-// driver Value types.
+// - converting a value as given from the database into one of the
+// driver Value types.
//
-// * by the sql package, for converting from a driver's Value type
-// to a user's type in a scan.
+// - by the sql package, for converting from a driver's Value type
+// to a user's type in a scan.
type ValueConverter interface {
// ConvertValue converts a value to a driver Value.
ConvertValue(v any) (Value, error)
// Bool is a ValueConverter that converts input values to bools.
//
// The conversion rules are:
-// - booleans are returned unchanged
-// - for integer types,
-// 1 is true
-// 0 is false,
-// other integers are an error
-// - for strings and []byte, same rules as strconv.ParseBool
-// - all other types are an error
+// - booleans are returned unchanged
+// - for integer types,
+// 1 is true
+// 0 is false,
+// other integers are an error
+// - for strings and []byte, same rules as strconv.ParseBool
+// - all other types are an error
var Bool boolType
type boolType struct{}
// syntactically different and simpler than SQL. The syntax is as
// follows:
//
-// WIPE
-// CREATE|<tablename>|<col>=<type>,<col>=<type>,...
-// where types are: "string", [u]int{8,16,32,64}, "bool"
-// INSERT|<tablename>|col=val,col2=val2,col3=?
-// SELECT|<tablename>|projectcol1,projectcol2|filtercol=?,filtercol2=?
-// SELECT|<tablename>|projectcol1,projectcol2|filtercol=?param1,filtercol2=?param2
+// WIPE
+// CREATE|<tablename>|<col>=<type>,<col>=<type>,...
+// where types are: "string", [u]int{8,16,32,64}, "bool"
+// INSERT|<tablename>|col=val,col2=val2,col3=?
+// SELECT|<tablename>|projectcol1,projectcol2|filtercol=?,filtercol2=?
+// SELECT|<tablename>|projectcol1,projectcol2|filtercol=?param1,filtercol2=?param2
//
// Any of these can be preceded by PANIC|<method>|, to cause the
// named method on fakeStmt to panic.
}
// Supports dsn forms:
-// <dbname>
-// <dbname>;<opts> (only currently supported option is `badConn`,
-// which causes driver.ErrBadConn to be returned on
-// every other conn.Begin())
+//
+// <dbname>
+// <dbname>;<opts> (only currently supported option is `badConn`,
+// which causes driver.ErrBadConn to be returned on
+// every other conn.Begin())
func (d *fakeDriver) Open(dsn string) (driver.Conn, error) {
hookOpenErr.Lock()
fn := hookOpenErr.fn
//
// Example usage:
//
-// db.ExecContext(ctx, `
-// delete from Invoice
-// where
-// TimeCreated < @end
-// and TimeCreated >= @start;`,
-// sql.Named("start", startTime),
-// sql.Named("end", endTime),
-// )
+// db.ExecContext(ctx, `
+// delete from Invoice
+// where
+// TimeCreated < @end
+// and TimeCreated >= @start;`,
+// sql.Named("start", startTime),
+// sql.Named("end", endTime),
+// )
func Named(name string, value any) NamedArg {
// This method exists because the go1compat promise
// doesn't guarantee that structs don't grow more fields,
// NullString implements the Scanner interface so
// it can be used as a scan destination:
//
-// var s NullString
-// err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s)
-// ...
-// if s.Valid {
-// // use s.String
-// } else {
-// // NULL value
-// }
+// var s NullString
+// err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s)
+// ...
+// if s.Valid {
+// // use s.String
+// } else {
+// // NULL value
+// }
type NullString struct {
String string
Valid bool // Valid is true if String is not NULL
//
// Example usage:
//
-// var outArg string
-// _, err := db.ExecContext(ctx, "ProcName", sql.Named("Arg1", sql.Out{Dest: &outArg}))
+// var outArg string
+// _, err := db.ExecContext(ctx, "ProcName", sql.Named("Arg1", sql.Out{Dest: &outArg}))
type Out struct {
_Named_Fields_Required struct{}
// an existing statement.
//
// Example:
-// updateMoney, err := db.Prepare("UPDATE balance SET money=money+? WHERE id=?")
-// ...
-// tx, err := db.Begin()
-// ...
-// res, err := tx.StmtContext(ctx, updateMoney).Exec(123.45, 98293203)
+//
+// updateMoney, err := db.Prepare("UPDATE balance SET money=money+? WHERE id=?")
+// ...
+// tx, err := db.Begin()
+// ...
+// res, err := tx.StmtContext(ctx, updateMoney).Exec(123.45, 98293203)
//
// The provided context is used for the preparation of the statement, not for the
// execution of the statement.
// an existing statement.
//
// Example:
-// updateMoney, err := db.Prepare("UPDATE balance SET money=money+? WHERE id=?")
-// ...
-// tx, err := db.Begin()
-// ...
-// res, err := tx.Stmt(updateMoney).Exec(123.45, 98293203)
+//
+// updateMoney, err := db.Prepare("UPDATE balance SET money=money+? WHERE id=?")
+// ...
+// tx, err := db.Begin()
+// ...
+// res, err := tx.Stmt(updateMoney).Exec(123.45, 98293203)
//
// The returned statement operates within the transaction and will be closed
// when the transaction has been committed or rolled back.
//
// Example usage:
//
-// var name string
-// err := nameByUseridStmt.QueryRow(id).Scan(&name)
+// var name string
+// err := nameByUseridStmt.QueryRow(id).Scan(&name)
//
// QueryRow uses context.Background internally; to specify the context, use
// QueryRowContext.
// Scan converts columns read from the database into the following
// common Go types and special types provided by the sql package:
//
-// *string
-// *[]byte
-// *int, *int8, *int16, *int32, *int64
-// *uint, *uint8, *uint16, *uint32, *uint64
-// *bool
-// *float32, *float64
-// *interface{}
-// *RawBytes
-// *Rows (cursor value)
-// any type implementing Scanner (see Scanner docs)
+// *string
+// *[]byte
+// *int, *int8, *int16, *int32, *int64
+// *uint, *uint8, *uint16, *uint32, *uint64
+// *bool
+// *float32, *float64
+// *interface{}
+// *RawBytes
+// *Rows (cursor value)
+// any type implementing Scanner (see Scanner docs)
//
// In the most simple case, if the type of the value from the source
// column is an integer, bool or string type T and dest is of type *T,
// A value can be one of several "attribute classes" defined by DWARF.
// The Go types corresponding to each class are:
//
-// DWARF class Go type Class
-// ----------- ------- -----
-// address uint64 ClassAddress
-// block []byte ClassBlock
-// constant int64 ClassConstant
-// flag bool ClassFlag
-// reference
-// to info dwarf.Offset ClassReference
-// to type unit uint64 ClassReferenceSig
-// string string ClassString
-// exprloc []byte ClassExprLoc
-// lineptr int64 ClassLinePtr
-// loclistptr int64 ClassLocListPtr
-// macptr int64 ClassMacPtr
-// rangelistptr int64 ClassRangeListPtr
+// DWARF class Go type Class
+// ----------- ------- -----
+// address uint64 ClassAddress
+// block []byte ClassBlock
+// constant int64 ClassConstant
+// flag bool ClassFlag
+// reference
+// to info dwarf.Offset ClassReference
+// to type unit uint64 ClassReferenceSig
+// string string ClassString
+// exprloc []byte ClassExprLoc
+// lineptr int64 ClassLinePtr
+// loclistptr int64 ClassLocListPtr
+// macptr int64 ClassMacPtr
+// rangelistptr int64 ClassRangeListPtr
//
// For unrecognized or vendor-defined attributes, Class may be
// ClassUnknown.
//
// A common idiom is to merge the check for nil return with
// the check that the value has the expected dynamic type, as in:
+//
// v, ok := e.Val(AttrSibling).(int64)
func (e *Entry) Val(a Attr) any {
if f := e.AttrField(a); f != nil {
// data, _ := f.ReadFile("hello.txt")
// print(string(data))
//
-// Directives
+// # Directives
//
// A //go:embed directive above a variable declaration specifies which files to embed,
// using one or more path.Match patterns.
//
// If any patterns are invalid or have invalid matches, the build will fail.
//
-// Strings and Bytes
+// # Strings and Bytes
//
// The //go:embed line for a variable of type string or []byte can have only a single pattern,
// and that pattern can match only a single file. The string or []byte is initialized with
// The //go:embed directive requires importing "embed", even when using a string or []byte.
// In source files that don't refer to embed.FS, use a blank import (import _ "embed").
//
-// File Systems
+// # File Systems
//
// For embedding a single file, a variable of type string or []byte is often best.
// The FS type enables embedding a tree of files, such as a directory of static
//
// template.ParseFS(content, "*.tmpl")
//
-// Tools
+// # Tools
//
// To support tools that analyze Go packages, the patterns found in //go:embed lines
// are available in “go list” output. See the EmbedPatterns, TestEmbedPatterns,
// and XTestEmbedPatterns fields in the “go help list” output.
-//
package embed
import (
// number of bytes read (> 0). If an error occurred, the value is 0
// and the number of bytes n is <= 0 meaning:
//
-// n == 0: buf too small
-// n < 0: value larger than 64 bits (overflow)
-// and -n is the number of bytes read
+// n == 0: buf too small
+// n < 0: value larger than 64 bits (overflow)
+// and -n is the number of bytes read
func Uvarint(buf []byte) (uint64, int) {
var x uint64
var s uint
// number of bytes read (> 0). If an error occurred, the value is 0
// and the number of bytes n is <= 0 with the following meaning:
//
-// n == 0: buf too small
-// n < 0: value larger than 64 bits (overflow)
-// and -n is the number of bytes read
+// n == 0: buf too small
+// n < 0: value larger than 64 bits (overflow)
+// and -n is the number of bytes read
func Varint(buf []byte) (int64, int) {
ux, n := Uvarint(buf) // ok to continue in presence of error
x := int64(ux >> 1)
}
// GobStream:
+//
// DelimitedMessage* (until EOF)
func (deb *debugger) gobStream() {
// Make sure we're single-threaded through here.
}
// DelimitedMessage:
+//
// uint(lengthOfMessage) Message
func (deb *debugger) delimitedMessage(indent tab) bool {
for {
}
// Message:
+//
// TypeSequence TypedValue
+//
// TypeSequence
+//
// (TypeDefinition DelimitedTypeDefinition*)?
+//
// DelimitedTypeDefinition:
+//
// uint(lengthOfTypeDefinition) TypeDefinition
+//
// TypedValue:
+//
// int(typeId) Value
func (deb *debugger) message(indent tab) bool {
for {
}
// TypeDefinition:
+//
// [int(-typeId) (already read)] encodingOfWireType
func (deb *debugger) typeDefinition(indent tab, id typeId) {
deb.dump("type definition for id %d", id)
}
// Value:
+//
// SingletonValue | StructValue
func (deb *debugger) value(indent tab, id typeId) {
wire, ok := deb.wireType[id]
}
// SingletonValue:
+//
// uint(0) FieldValue
func (deb *debugger) singletonValue(indent tab, id typeId) {
deb.dump("Singleton value")
}
// InterfaceValue:
+//
// NilInterfaceValue | NonNilInterfaceValue
func (deb *debugger) interfaceValue(indent tab) {
deb.dump("Start of interface value")
}
// NilInterfaceValue:
+//
// uint(0) [already read]
func (deb *debugger) nilInterfaceValue(indent tab) int {
fmt.Fprintf(os.Stderr, "%snil interface\n", indent)
}
// NonNilInterfaceValue:
+//
// ConcreteTypeName TypeSequence InterfaceContents
+//
// ConcreteTypeName:
+//
// uint(lengthOfName) [already read=n] name
+//
// InterfaceContents:
+//
// int(concreteTypeId) DelimitedValue
+//
// DelimitedValue:
+//
// uint(length) Value
func (deb *debugger) nonNilInterfaceValue(indent tab, nameLen int) {
// ConcreteTypeName
// fieldValue prints a value of any type, such as a struct field.
// FieldValue:
+//
// builtinValue | ArrayValue | MapValue | SliceValue | StructValue | InterfaceValue
func (deb *debugger) fieldValue(indent tab, id typeId) {
_, ok := builtinIdToType[id]
}
// ArrayValue:
+//
// uint(n) FieldValue*n
func (deb *debugger) arrayValue(indent tab, wire *wireType) {
elemId := wire.ArrayT.Elem
}
// MapValue:
+//
// uint(n) (FieldValue FieldValue)*n [n (key, value) pairs]
func (deb *debugger) mapValue(indent tab, wire *wireType) {
keyId := wire.MapT.Key
}
// SliceValue:
+//
// uint(n) (n FieldValue)
func (deb *debugger) sliceValue(indent tab, wire *wireType) {
elemId := wire.SliceT.Elem
}
// StructValue:
+//
// (uint(fieldDelta) FieldValue)*
func (deb *debugger) structValue(indent tab, id typeId) {
deb.dump("Start of struct value of %q id=%d\n<<\n", id.name(), id)
}
// GobEncoderValue:
+//
// uint(n) byte*n
func (deb *debugger) gobEncoderValue(indent tab, id typeId) {
len := deb.uint64()
// decodeTypeSequence parses:
// TypeSequence
+//
// (TypeDefinition DelimitedTypeDefinition*)?
+//
// and returns the type id of the next value. It returns -1 at
// EOF. Upon return, the remainder of dec.buf is the value to be
// decoded. If this is an interface value, it can be ignored by
is most efficient when a single Encoder is used to transmit a stream of values,
amortizing the cost of compilation.
-Basics
+# Basics
A stream of gobs is self-describing. Each data item in the stream is preceded by
a specification of its type, expressed in terms of a small set of predefined
Decoder retrieves values from the encoded stream and unpacks them into local
variables.
-Types and Values
+# Types and Values
The source and destination values/types need not correspond exactly. For structs,
fields (identified by name) that are in the source but absent from the receiving
encoding.BinaryUnmarshaler interfaces by calling the corresponding method,
again in that order of preference.
-Encoding Details
+# Encoding Details
This section documents the encoding, details that are not important for most
users. Details are presented bottom-up.
//
// Examples of struct field tags and their meanings:
//
-// // Field appears in JSON as key "myName".
-// Field int `json:"myName"`
+// // Field appears in JSON as key "myName".
+// Field int `json:"myName"`
//
-// // Field appears in JSON as key "myName" and
-// // the field is omitted from the object if its value is empty,
-// // as defined above.
-// Field int `json:"myName,omitempty"`
+// // Field appears in JSON as key "myName" and
+// // the field is omitted from the object if its value is empty,
+// // as defined above.
+// Field int `json:"myName,omitempty"`
//
-// // Field appears in JSON as key "Field" (the default), but
-// // the field is skipped if empty.
-// // Note the leading comma.
-// Field int `json:",omitempty"`
+// // Field appears in JSON as key "Field" (the default), but
+// // the field is skipped if empty.
+// // Note the leading comma.
+// Field int `json:",omitempty"`
//
-// // Field is ignored by this package.
-// Field int `json:"-"`
+// // Field is ignored by this package.
+// Field int `json:"-"`
//
-// // Field appears in JSON as key "-".
-// Field int `json:"-,"`
+// // Field appears in JSON as key "-".
+// Field int `json:"-,"`
//
// The "string" option signals that a field is stored as JSON inside a
// JSON-encoded string. It applies only to fields of string, floating point,
// integer, or boolean types. This extra level of encoding is sometimes used
// when communicating with JavaScript programs:
//
-// Int64String int64 `json:",string"`
+// Int64String int64 `json:",string"`
//
// The key name will be used if it's a non-empty string consisting of
// only Unicode letters, digits, and ASCII punctuation except quotation
// 4) simpleLetterEqualFold, no specials, no non-letters.
//
// The letters S and K are special because they map to 3 runes, not just 2:
-// * S maps to s and to U+017F 'ſ' Latin small letter long s
-// * k maps to K and to U+212A 'K' Kelvin sign
+// - S maps to s and to U+017F 'ſ' Latin small letter long s
+// - k maps to K and to U+212A 'K' Kelvin sign
+//
// See https://play.golang.org/p/tTxjOc0OGo
//
// The returned function is specialized for matching against s and
// A Block represents a PEM encoded structure.
//
// The encoded form is:
-// -----BEGIN Type-----
-// Headers
-// base64-encoded Bytes
-// -----END Type-----
+//
+// -----BEGIN Type-----
+// Headers
+// base64-encoded Bytes
+// -----END Type-----
+//
// where Headers is a possibly empty sequence of Key: Value lines.
type Block struct {
Type string // The type, taken from the preamble (i.e. "RSA PRIVATE KEY").
// elements containing the data.
//
// The name for the XML elements is taken from, in order of preference:
-// - the tag on the XMLName field, if the data is a struct
-// - the value of the XMLName field of type Name
-// - the tag of the struct field used to obtain the data
-// - the name of the struct field used to obtain the data
-// - the name of the marshaled type
+// - the tag on the XMLName field, if the data is a struct
+// - the value of the XMLName field of type Name
+// - the tag of the struct field used to obtain the data
+// - the name of the struct field used to obtain the data
+// - the name of the marshaled type
//
// The XML element for a struct contains marshaled elements for each of the
// exported fields of the struct, with these exceptions:
-// - the XMLName field, described above, is omitted.
-// - a field with tag "-" is omitted.
-// - a field with tag "name,attr" becomes an attribute with
-// the given name in the XML element.
-// - a field with tag ",attr" becomes an attribute with the
-// field name in the XML element.
-// - a field with tag ",chardata" is written as character data,
-// not as an XML element.
-// - a field with tag ",cdata" is written as character data
-// wrapped in one or more <![CDATA[ ... ]]> tags, not as an XML element.
-// - a field with tag ",innerxml" is written verbatim, not subject
-// to the usual marshaling procedure.
-// - a field with tag ",comment" is written as an XML comment, not
-// subject to the usual marshaling procedure. It must not contain
-// the "--" string within it.
-// - a field with a tag including the "omitempty" option is omitted
-// if the field value is empty. The empty values are false, 0, any
-// nil pointer or interface value, and any array, slice, map, or
-// string of length zero.
-// - an anonymous struct field is handled as if the fields of its
-// value were part of the outer struct.
-// - a field implementing Marshaler is written by calling its MarshalXML
-// method.
-// - a field implementing encoding.TextMarshaler is written by encoding the
-// result of its MarshalText method as text.
+// - the XMLName field, described above, is omitted.
+// - a field with tag "-" is omitted.
+// - a field with tag "name,attr" becomes an attribute with
+// the given name in the XML element.
+// - a field with tag ",attr" becomes an attribute with the
+// field name in the XML element.
+// - a field with tag ",chardata" is written as character data,
+// not as an XML element.
+// - a field with tag ",cdata" is written as character data
+// wrapped in one or more <![CDATA[ ... ]]> tags, not as an XML element.
+// - a field with tag ",innerxml" is written verbatim, not subject
+// to the usual marshaling procedure.
+// - a field with tag ",comment" is written as an XML comment, not
+// subject to the usual marshaling procedure. It must not contain
+// the "--" string within it.
+// - a field with a tag including the "omitempty" option is omitted
+// if the field value is empty. The empty values are false, 0, any
+// nil pointer or interface value, and any array, slice, map, or
+// string of length zero.
+// - an anonymous struct field is handled as if the fields of its
+// value were part of the outer struct.
+// - a field implementing Marshaler is written by calling its MarshalXML
+// method.
+// - a field implementing encoding.TextMarshaler is written by encoding the
+// result of its MarshalText method as text.
//
// If a field uses a tag "a>b>c", then the element c will be nested inside
// parent elements a and b. Fields that appear next to each other that name
// In the rules, the tag of a field refers to the value associated with the
// key 'xml' in the struct field's tag (see the example above).
//
-// * If the struct has a field of type []byte or string with tag
-// ",innerxml", Unmarshal accumulates the raw XML nested inside the
-// element in that field. The rest of the rules still apply.
+// - If the struct has a field of type []byte or string with tag
+// ",innerxml", Unmarshal accumulates the raw XML nested inside the
+// element in that field. The rest of the rules still apply.
//
-// * If the struct has a field named XMLName of type Name,
-// Unmarshal records the element name in that field.
+// - If the struct has a field named XMLName of type Name,
+// Unmarshal records the element name in that field.
//
-// * If the XMLName field has an associated tag of the form
-// "name" or "namespace-URL name", the XML element must have
-// the given name (and, optionally, name space) or else Unmarshal
-// returns an error.
+// - If the XMLName field has an associated tag of the form
+// "name" or "namespace-URL name", the XML element must have
+// the given name (and, optionally, name space) or else Unmarshal
+// returns an error.
//
-// * If the XML element has an attribute whose name matches a
-// struct field name with an associated tag containing ",attr" or
-// the explicit name in a struct field tag of the form "name,attr",
-// Unmarshal records the attribute value in that field.
+// - If the XML element has an attribute whose name matches a
+// struct field name with an associated tag containing ",attr" or
+// the explicit name in a struct field tag of the form "name,attr",
+// Unmarshal records the attribute value in that field.
//
-// * If the XML element has an attribute not handled by the previous
-// rule and the struct has a field with an associated tag containing
-// ",any,attr", Unmarshal records the attribute value in the first
-// such field.
+// - If the XML element has an attribute not handled by the previous
+// rule and the struct has a field with an associated tag containing
+// ",any,attr", Unmarshal records the attribute value in the first
+// such field.
//
-// * If the XML element contains character data, that data is
-// accumulated in the first struct field that has tag ",chardata".
-// The struct field may have type []byte or string.
-// If there is no such field, the character data is discarded.
+// - If the XML element contains character data, that data is
+// accumulated in the first struct field that has tag ",chardata".
+// The struct field may have type []byte or string.
+// If there is no such field, the character data is discarded.
//
-// * If the XML element contains comments, they are accumulated in
-// the first struct field that has tag ",comment". The struct
-// field may have type []byte or string. If there is no such
-// field, the comments are discarded.
+// - If the XML element contains comments, they are accumulated in
+// the first struct field that has tag ",comment". The struct
+// field may have type []byte or string. If there is no such
+// field, the comments are discarded.
//
-// * If the XML element contains a sub-element whose name matches
-// the prefix of a tag formatted as "a" or "a>b>c", unmarshal
-// will descend into the XML structure looking for elements with the
-// given names, and will map the innermost elements to that struct
-// field. A tag starting with ">" is equivalent to one starting
-// with the field name followed by ">".
+// - If the XML element contains a sub-element whose name matches
+// the prefix of a tag formatted as "a" or "a>b>c", unmarshal
+// will descend into the XML structure looking for elements with the
+// given names, and will map the innermost elements to that struct
+// field. A tag starting with ">" is equivalent to one starting
+// with the field name followed by ">".
//
-// * If the XML element contains a sub-element whose name matches
-// a struct field's XMLName tag and the struct field has no
-// explicit name tag as per the previous rule, unmarshal maps
-// the sub-element to that struct field.
+// - If the XML element contains a sub-element whose name matches
+// a struct field's XMLName tag and the struct field has no
+// explicit name tag as per the previous rule, unmarshal maps
+// the sub-element to that struct field.
//
-// * If the XML element contains a sub-element whose name matches a
-// field without any mode flags (",attr", ",chardata", etc), Unmarshal
-// maps the sub-element to that struct field.
+// - If the XML element contains a sub-element whose name matches a
+// field without any mode flags (",attr", ",chardata", etc), Unmarshal
+// maps the sub-element to that struct field.
//
-// * If the XML element contains a sub-element that hasn't matched any
-// of the above rules and the struct has a field with tag ",any",
-// unmarshal maps the sub-element to that struct field.
+// - If the XML element contains a sub-element that hasn't matched any
+// of the above rules and the struct has a field with tag ",any",
+// unmarshal maps the sub-element to that struct field.
//
-// * An anonymous struct field is handled as if the fields of its
-// value were part of the outer struct.
+// - An anonymous struct field is handled as if the fields of its
+// value were part of the outer struct.
//
-// * A struct field with tag "-" is never unmarshaled into.
+// - A struct field with tag "-" is never unmarshaled into.
//
// If Unmarshal encounters a field type that implements the Unmarshaler
// interface, Unmarshal calls its UnmarshalXML method to produce the value from
// The package is sometimes only imported for the side effect of
// registering its HTTP handler and the above variables. To use it
// this way, link this package into your program:
-// import _ "expvar"
//
+// import _ "expvar"
package expvar
import (
/*
Package flag implements command-line flag parsing.
-Usage
+# Usage
Define flags using flag.String(), Bool(), Int(), etc.
This declares an integer flag, -n, stored in the pointer nFlag, with type *int:
+
import "flag"
var nFlag = flag.Int("n", 1234, "help message for flag n")
+
If you like, you can bind the flag to a variable using the Var() functions.
+
var flagvar int
func init() {
flag.IntVar(&flagvar, "flagname", 1234, "help message for flagname")
}
+
Or you can create custom flags that satisfy the Value interface (with
pointer receivers) and couple them to flag parsing by
+
flag.Var(&flagVal, "name", "help message for flagname")
+
For such flags, the default value is just the initial value of the variable.
After all flags are defined, call
+
flag.Parse()
+
to parse the command line into the defined flags.
Flags may then be used directly. If you're using the flags themselves,
they are all pointers; if you bind to variables, they're values.
+
fmt.Println("ip has value ", *ip)
fmt.Println("flagvar has value ", flagvar)
slice flag.Args() or individually as flag.Arg(i).
The arguments are indexed from 0 through flag.NArg()-1.
-Command line flag syntax
+# Command line flag syntax
The following forms are permitted:
-flag
-flag=x
-flag x // non-boolean flags only
+
One or two minus signs may be used; they are equivalent.
The last form is not permitted for boolean flags because the
meaning of the command
+
cmd -x *
+
where * is a Unix shell wildcard, will change if there is a file
called 0, false, etc. You must use the -flag=false form to turn
off a boolean flag.
Integer flags accept 1234, 0664, 0x1234 and may be negative.
Boolean flags may be:
+
1, 0, t, f, T, F, true, false, TRUE, FALSE, True, False
+
Duration flags accept any input valid for time.ParseDuration.
The default set of command-line flags is controlled by
// a usage message showing the default settings of all defined
// command-line flags.
// For an integer valued flag x, the default output has the form
+//
// -x int
// usage-message-for-x (default 7)
+//
// The usage message will appear on a separate line for anything but
// a bool flag with a one-byte name. For bool flags, the type is
// omitted and if the flag name is one byte the usage message appears
// string; the first such item in the message is taken to be a parameter
// name to show in the message and the back quotes are stripped from
// the message when displayed. For instance, given
+//
// flag.String("I", "", "search `directory` for include files")
+//
// the output will be
+//
// -I directory
// search directory for include files.
//
to C's printf and scanf. The format 'verbs' are derived from C's but
are simpler.
-
-Printing
+# Printing
The verbs:
General:
+
%v the value in a default format
when printing structs, the plus flag (%+v) adds field names
%#v a Go-syntax representation of the value
%% a literal percent sign; consumes no value
Boolean:
+
%t the word true or false
+
Integer:
+
%b base 2
%c the character represented by the corresponding Unicode code point
%d base 10
%x base 16, with lower-case letters for a-f
%X base 16, with upper-case letters for A-F
%U Unicode format: U+1234; same as "U+%04X"
+
Floating-point and complex constituents:
+
%b decimalless scientific notation with exponent a power of two,
in the manner of strconv.FormatFloat with the 'b' format,
e.g. -123456p-78
%G %E for large exponents, %F otherwise
%x hexadecimal notation (with decimal power of two exponent), e.g. -0x1.23abcp+20
%X upper-case hexadecimal notation, e.g. -0X1.23ABCP+20
+
String and slice of bytes (treated equivalently with these verbs):
+
%s the uninterpreted bytes of the string or slice
%q a double-quoted string safely escaped with Go syntax
%x base 16, lower-case, two characters per byte
%X base 16, upper-case, two characters per byte
+
Slice:
+
%p address of 0th element in base 16 notation, with leading 0x
+
Pointer:
+
%p base 16 notation, with leading 0x
The %b, %d, %o, %x and %X verbs also work with pointers,
formatting the value exactly as if it were an integer.
The default format for %v is:
+
bool: %t
int, int8 etc.: %d
uint, uint8 etc.: %d, %#x if printed with %#v
string: %s
chan: %p
pointer: %p
+
For compound objects, the elements are printed using these rules, recursively,
laid out like this:
+
struct: {field0 field1 ...}
array, slice: [elem0 elem1 ...]
maps: map[key1:value1 key2:value2 ...]
decimal number. If no period is present, a default precision is used.
A period with no following number specifies a precision of zero.
Examples:
+
%f default width, default precision
%9f width 9, default precision
%.2f default width, precision 2
Regardless of the verb, if an operand is an interface value,
the internal concrete value is used, not the interface itself.
Thus:
+
var i interface{} = 23
fmt.Printf("%v\n", i)
+
will print 23.
Except when printed using the verbs %T and %p, special
(%s %q %x %X), it is treated identically to a string, as a single item.
To avoid recursion in cases such as
+
type X string
func (x X) String() string { return Sprintf("<%s>", x) }
+
convert the value before recurring:
+
func (x X) String() string { return Sprintf("<%s>", string(x)) }
+
Infinite recursion can also be triggered by self-referential data
structures, such as a slice that contains itself as an element, if
that type has a String method. Such pathologies are rare, however,
When printing a struct, fmt cannot and therefore does not invoke
formatting methods such as Error or String on unexported fields.
-Explicit argument indexes
+# Explicit argument indexes
In Printf, Sprintf, and Fprintf, the default behavior is for each
formatting verb to format successive arguments passed in the call.
will use arguments n+1, n+2, etc. unless otherwise directed.
For example,
+
fmt.Sprintf("%[2]d %[1]d\n", 11, 22)
+
will yield "22 11", while
+
fmt.Sprintf("%[3]*.[2]*[1]f", 12.0, 2, 6)
+
equivalent to
+
fmt.Sprintf("%6.2f", 12.0)
+
will yield " 12.00". Because an explicit index affects subsequent verbs,
this notation can be used to print the same values multiple times
by resetting the index for the first argument to be repeated:
+
fmt.Sprintf("%d %d %#[1]x %#x", 16, 17)
+
will yield "16 17 0x10 0x11".
-Format errors
+# Format errors
If an invalid argument is given for a verb, such as providing
a string to %d, the generated string will contain a
through the fmt package. For example, if a String method
calls panic("bad"), the resulting formatted message will look
like
+
%!s(PANIC=bad)
The %!s just shows the print verb in use when the failure
or String method, however, the output is the undecorated
string, "<nil>".
-Scanning
+# Scanning
An analogous set of functions scans formatted text to yield
values. Scan, Scanf and Scanln read from os.Stdin; Fscan,
If width is provided, it applies after leading spaces are
trimmed and specifies the maximum number of runes to read
to satisfy the verb. For example,
- Sscanf(" 1234567 ", "%5s%d", &s, &i)
+
+ Sscanf(" 1234567 ", "%5s%d", &s, &i)
+
will set s to "12345" and i to 67 while
- Sscanf(" 12 34 567 ", "%5s%d", &s, &i)
+
+ Sscanf(" 12 34 567 ", "%5s%d", &s, &i)
+
will set s to "12" and i to 34.
In all the scanning functions, a carriage return followed
// Package ast declares the types used to represent syntax trees for Go
// packages.
-//
package ast
import (
// In the directory containing the package, .go, .c, .h, and .s files are
// considered part of the package except for:
//
-// - .go files in package documentation
-// - files starting with _ or . (likely editor temporary files)
-// - files with build constraints not satisfied by the context
+// - .go files in package documentation
+// - files starting with _ or . (likely editor temporary files)
+// - files with build constraints not satisfied by the context
//
// If an error occurs, Import returns a non-nil error and a non-nil
// *Package containing partial information.
//
// For example, the following string:
//
-// a b:"c d" 'e''f' "g\""
+// a b:"c d" 'e''f' "g\""
//
// Would be parsed as:
//
-// []string{"a", "b:c d", "ef", `g"`}
+// []string{"a", "b:c d", "ef", `g"`}
func splitQuoted(s string) (r []string, err error) {
var args []string
arg := make([]rune, len(s))
// suffix which does not match the current system.
// The recognized name formats are:
//
-// name_$(GOOS).*
-// name_$(GOARCH).*
-// name_$(GOOS)_$(GOARCH).*
-// name_$(GOOS)_test.*
-// name_$(GOARCH)_test.*
-// name_$(GOOS)_$(GOARCH)_test.*
+// name_$(GOOS).*
+// name_$(GOARCH).*
+// name_$(GOOS)_$(GOARCH).*
+// name_$(GOOS)_test.*
+// name_$(GOARCH)_test.*
+// name_$(GOOS)_$(GOARCH)_test.*
//
// Exceptions:
// if GOOS=android, then files with GOOS=linux are also matched.
//
// The general syntax of a rule is:
//
-// a, b < c, d;
+// a, b < c, d;
//
// which means c and d come after a and b in the partial order
// (that is, c and d can import a and b),
//
// The rules can chain together, as in:
//
-// e < f, g < h;
+// e < f, g < h;
//
// which is equivalent to
//
-// e < f, g;
-// f, g < h;
+// e < f, g;
+// f, g < h;
//
// Except for the special bottom element "NONE", each name
// must appear exactly once on the right-hand side of a rule.
//
// Negative assertions double-check the partial order:
//
-// i !< j
+// i !< j
//
// means that it must NOT be the case that i < j.
// Negative assertions may appear anywhere in the rules,
// Package build gathers information about Go packages.
//
-// Go Path
+// # Go Path
//
// The Go path is a list of directory trees containing Go source code.
// It is consulted to resolve imports that cannot be found in the standard
// foo/
// bar.a (installed package object)
//
-// Build Constraints
+// # Build Constraints
//
// A build constraint, also known as a build tag, is a line comment that begins
//
-// //go:build
+// //go:build
//
// that lists the conditions under which a file should be included in the
// package. Build constraints may also be part of a file's name
// See 'go help buildconstraint'
// (https://golang.org/cmd/go/#hdr-Build_constraints) for details.
//
-// Binary-Only Packages
+// # Binary-Only Packages
//
// In Go 1.12 and earlier, it was possible to distribute packages in binary
// form without including the source code used for compiling the package.
// "go build" and other commands no longer support binary-only packages.
// Import and ImportDir will still set the BinaryOnly flag in packages
// containing these comments for use in tools and error messages.
-//
package build
// is unknown due to an error. Operations on unknown
// values produce unknown values unless specified
// otherwise.
-//
package constant
import (
// interface, it is up to the caller to type assert the result to the expected
// type. The possible dynamic return types are:
//
-// x Kind type of result
-// -----------------------------------------
-// Bool bool
-// String string
-// Int int64 or *big.Int
-// Float *big.Float or *big.Rat
-// everything else nil
+// x Kind type of result
+// -----------------------------------------
+// Bool bool
+// String string
+// Int int64 or *big.Int
+// Float *big.Float or *big.Rat
+// everything else nil
func Val(x Value) any {
switch x := x.(type) {
case boolVal:
// Make returns the Value for x.
//
-// type of x result Kind
-// ----------------------------
-// bool Bool
-// string String
-// int64 Int
-// *big.Int Int
-// *big.Float Float
-// *big.Rat Float
-// anything else Unknown
+// type of x result Kind
+// ----------------------------
+// bool Bool
+// string String
+// int64 Int
+// *big.Int Int
+// *big.Float Float
+// *big.Rat Float
+// anything else Unknown
func Make(x any) Value {
switch x := x.(type) {
case bool:
}
// parseLink parses a single link definition line:
+//
// [text]: url
+//
// It returns the link definition and whether the line was well formed.
func parseLink(line string) (*LinkDef, bool) {
if line == "" || line[0] != '[' {
}
s = rest
}
-}
\ No newline at end of file
+}
//
// The classification process is ambiguous in some cases:
//
-// - ExampleFoo_Bar matches a type named Foo_Bar
-// or a method named Foo.Bar.
-// - ExampleFoo_bar matches a type named Foo_bar
-// or Foo (with a "bar" suffix).
+// - ExampleFoo_Bar matches a type named Foo_Bar
+// or a method named Foo.Bar.
+// - ExampleFoo_bar matches a type named Foo_bar
+// or Foo (with a "bar" suffix).
//
// Examples with malformed names are not associated with anything.
func classifyExamples(p *Package, examples []*Example) {
// array1 generates an array literal with n elements of the form:
//
// var _ = [...]byte{
-// // 0
-// 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
-// 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
-// 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
-// 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
-// 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
-// // 40
-// 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f,
-// 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
-// ...
+//
+// // 0
+// 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+// 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+// 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+// 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
+// 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
+// // 40
+// 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f,
+// 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+// ...
func array1(buf *bytes.Buffer, n int) {
buf.WriteString("var _ = [...]byte{\n")
for i := 0; i < n; {
}
// InitDataDirective = ( "v1" | "v2" | "v3" ) ";" |
-// "priority" int ";" |
-// "init" { PackageInit } ";" |
-// "checksum" unquotedString ";" .
+//
+// "priority" int ";" |
+// "init" { PackageInit } ";" |
+// "checksum" unquotedString ";" .
func (p *parser) parseInitDataDirective() {
if p.tok != scanner.Ident {
// unexpected token kind; panic
}
// Directive = InitDataDirective |
-// "package" unquotedString [ unquotedString ] [ unquotedString ] ";" |
-// "pkgpath" unquotedString ";" |
-// "prefix" unquotedString ";" |
-// "import" unquotedString unquotedString string ";" |
-// "indirectimport" unquotedString unquotedstring ";" |
-// "func" Func ";" |
-// "type" Type ";" |
-// "var" Var ";" |
-// "const" Const ";" .
+//
+// "package" unquotedString [ unquotedString ] [ unquotedString ] ";" |
+// "pkgpath" unquotedString ";" |
+// "prefix" unquotedString ";" |
+// "import" unquotedString unquotedString string ";" |
+// "indirectimport" unquotedString unquotedstring ";" |
+// "func" Func ";" |
+// "type" Type ";" |
+// "var" Var ";" |
+// "const" Const ";" .
func (p *parser) parseDirective() {
if p.tok != scanner.Ident {
// unexpected token kind; panic
// treated like an ordinary parameter list and thus may contain multiple
// entries where the spec permits exactly one. Consequently, the corresponding
// field in the AST (ast.FuncDecl.Recv) is not restricted to one entry.
-//
package parser
import (
//
// TODO(gri) Consider rewriting this to be independent of []ast.Expr
// so that we can use the algorithm for any kind of list
-// (e.g., pass list via a channel over which to range).
+//
+// (e.g., pass list via a channel over which to range).
func (p *printer) exprList(prev0 token.Pos, list []ast.Expr, depth int, mode exprListMode, next0 token.Pos, isIncomplete bool) {
if len(list) == 0 {
if isIncomplete {
// (Algorithm suggestion by Russ Cox.)
//
// The precedences are:
+//
// 5 * / % << >> & &^
// 4 + - | ^
// 3 == != < <= > >=
// To choose the cutoff, look at the whole expression but excluding primary
// expressions (function calls, parenthesized exprs), and apply these rules:
//
-// 1) If there is a binary operator with a right side unary operand
-// that would clash without a space, the cutoff must be (in order):
+// 1. If there is a binary operator with a right side unary operand
+// that would clash without a space, the cutoff must be (in order):
//
-// /* 6
-// && 6
-// &^ 6
-// ++ 5
-// -- 5
+// /* 6
+// && 6
+// &^ 6
+// ++ 5
+// -- 5
//
-// (Comparison operators always have spaces around them.)
+// (Comparison operators always have spaces around them.)
//
-// 2) If there is a mix of level 5 and level 4 operators, then the cutoff
-// is 5 (use spaces to distinguish precedence) in Normal mode
-// and 4 (never use spaces) in Compact mode.
+// 2. If there is a mix of level 5 and level 4 operators, then the cutoff
+// is 5 (use spaces to distinguish precedence) in Normal mode
+// and 4 (never use spaces) in Compact mode.
//
-// 3) If there are no level 4 operators or no level 5 operators, then the
-// cutoff is 6 (always use spaces) in Normal mode
-// and 4 (never use spaces) in Compact mode.
+// 3. If there are no level 4 operators or no level 5 operators, then the
+// cutoff is 6 (always use spaces) in Normal mode
+// and 4 (never use spaces) in Compact mode.
func (p *printer) binaryExpr(x *ast.BinaryExpr, prec1, cutoff, depth int) {
prec := x.Op.Precedence()
if prec < prec1 {
//
// For example, the declaration:
//
-// const (
-// foobar int = 42 // comment
-// x = 7 // comment
-// foo
-// bar = 991
-// )
+// const (
+// foobar int = 42 // comment
+// x = 7 // comment
+// foo
+// bar = 991
+// )
//
// leads to the type/values matrix below. A run of value columns (V) can
// be moved into the type column if there is no type for any of the values
// in that column (we only move entire columns so that they align properly).
//
-// matrix formatted result
-// matrix
-// T V -> T V -> true there is a T and so the type
-// - V - V true column must be kept
-// - - - - false
-// - V V - false V is moved into T column
+// matrix formatted result
+// matrix
+// T V -> T V -> true there is a T and so the type
+// - V - V true column must be kept
+// - - - - false
+// - V V - false V is moved into T column
func keepTypeColumn(specs []ast.Spec) []bool {
m := make([]bool, len(specs))
// Package scanner implements a scanner for Go source text.
// It takes a []byte as source which can then be tokenized
// through repeated calls to the Scan method.
-//
package scanner
import (
// Package token defines constants representing the lexical tokens of the Go
// programming language and basic operations on tokens (printing, predicates).
-//
package token
import (
// Use Info.Types[expr].Type for the results of type inference.
//
// For a tutorial, see https://golang.org/s/types-tutorial.
-//
package types
import (
// (and a separating "--"). For instance, to test the package made
// of the files foo.go and bar.go, use:
//
-// go test -run Manual -- foo.go bar.go
+// go test -run Manual -- foo.go bar.go
//
// If no source arguments are provided, the file testdata/manual.go
// is used instead.
files to include for such packages.
Usage:
+
gotype [flags] [path...]
The flags are:
+
-t
include local test files in a directory (ignored if -x is provided)
-x
compiler used for installed packages (gc, gccgo, or source); default: source
Flags controlling additional output:
+
-ast
print AST
-trace
To verify the output of a pipe:
echo "package foo" | gotype
-
*/
package main
//
// Inference proceeds as follows:
//
-// Starting with given type arguments
-// 1) apply FTI (function type inference) with typed arguments,
-// 2) apply CTI (constraint type inference),
-// 3) apply FTI with untyped function arguments,
-// 4) apply CTI.
+// Starting with given type arguments
+// 1) apply FTI (function type inference) with typed arguments,
+// 2) apply CTI (constraint type inference),
+// 3) apply FTI with untyped function arguments,
+// 4) apply CTI.
//
// The process stops as soon as all type arguments are known or an error occurs.
func (check *Checker) infer(posn positioner, tparams []*TypeParam, targs []Type, params *Tuple, args []*operand) (result []Type) {
// The last index entry is the field or method index in the (possibly embedded)
// type where the entry was found, either:
//
-// 1) the list of declared methods of a named type; or
-// 2) the list of all methods (method set) of an interface type; or
-// 3) the list of fields of a struct type.
+// 1. the list of declared methods of a named type; or
+// 2. the list of all methods (method set) of an interface type; or
+// 3. the list of fields of a struct type.
//
// The earlier index entries are the indices of the embedded struct fields
// traversed to get to the found entry, starting at depth 0.
// If no entry is found, a nil object is returned. In this case, the returned
// index and indirect values have the following meaning:
//
-// - If index != nil, the index sequence points to an ambiguous entry
-// (the same name appeared more than once at the same embedding level).
+// - If index != nil, the index sequence points to an ambiguous entry
+// (the same name appeared more than once at the same embedding level).
//
-// - If indirect is set, a method with a pointer receiver type was found
-// but there was no pointer on the path from the actual receiver type to
-// the method's formal receiver base type, nor was the receiver addressable.
+// - If indirect is set, a method with a pointer receiver type was found
+// but there was no pointer on the path from the actual receiver type to
+// the method's formal receiver base type, nor was the receiver addressable.
func LookupFieldOrMethod(T Type, addressable bool, pkg *Package, name string) (obj Object, index []int, indirect bool) {
if T == nil {
panic("LookupFieldOrMethod on nil type")
// The last index entry is the field or method index of the type declaring f;
// either:
//
-// 1) the list of declared methods of a named type; or
-// 2) the list of methods of an interface type; or
-// 3) the list of fields of a struct type.
+// 1. the list of declared methods of a named type; or
+// 2. the list of methods of an interface type; or
+// 3. the list of fields of a struct type.
//
// The earlier index entries are the indices of the embedded fields implicitly
// traversed to get from (the type of) x to f, starting at embedding depth 0.
// package-level objects, and may be nil.
//
// Examples:
+//
// "field (T) f int"
// "method (T) f(X) Y"
// "method expr (T) f(X) Y"
// StdSizes is a convenience type for creating commonly used Sizes.
// It makes the following simplifying assumptions:
//
-// - The size of explicitly sized basic types (int16, etc.) is the
-// specified size.
-// - The size of strings and interfaces is 2*WordSize.
-// - The size of slices is 3*WordSize.
-// - The size of an array of n elements corresponds to the size of
-// a struct of n consecutive fields of the array's element type.
-// - The size of a struct is the offset of the last field plus that
-// field's size. As with all element types, if the struct is used
-// in an array its size must first be aligned to a multiple of the
-// struct's alignment.
-// - All other types have size WordSize.
-// - Arrays and structs are aligned per spec definition; all other
-// types are naturally aligned with a maximum alignment MaxAlign.
+// - The size of explicitly sized basic types (int16, etc.) is the
+// specified size.
+// - The size of strings and interfaces is 2*WordSize.
+// - The size of slices is 3*WordSize.
+// - The size of an array of n elements corresponds to the size of
+// a struct of n consecutive fields of the array's element type.
+// - The size of a struct is the offset of the last field plus that
+// field's size. As with all element types, if the struct is used
+// in an array its size must first be aligned to a multiple of the
+// struct's alignment.
+// - All other types have size WordSize.
+// - Arrays and structs are aligned per spec definition; all other
+// types are naturally aligned with a maximum alignment MaxAlign.
//
// *StdSizes implements Sizes.
type StdSizes struct {
// A term describes elementary type sets:
//
-// ∅: (*term)(nil) == ∅ // set of no types (empty set)
-// 𝓤: &term{} == 𝓤 // set of all types (𝓤niverse)
-// T: &term{false, T} == {T} // set of type T
-// ~t: &term{true, t} == {t' | under(t') == t} // set of types with underlying type t
+// ∅: (*term)(nil) == ∅ // set of no types (empty set)
+// 𝓤: &term{} == 𝓤 // set of all types (𝓤niverse)
+// T: &term{false, T} == {T} // set of type T
+// ~t: &term{true, t} == {t' | under(t') == t} // set of types with underlying type t
type term struct {
tilde bool // valid if typ != nil
typ Type
// Package adler32 implements the Adler-32 checksum.
//
// It is defined in RFC 1950:
+//
// Adler-32 is composed of two sums accumulated per byte: s1 is
// the sum of all bytes, s2 is the sum of all s1 values. Both sums
// are done modulo 65521. s1 is initialized to 1, s2 to zero. The
//
// The hash functions are not cryptographically secure.
// (See crypto/sha256 and crypto/sha512 for cryptographic use.)
-//
package maphash
import (
// the sequence of bytes provided to the Hash object, not on the way
// in which the bytes are provided. For example, the three sequences
//
-// h.Write([]byte{'f','o','o'})
-// h.WriteByte('f'); h.WriteByte('o'); h.WriteByte('o')
-// h.WriteString("foo")
+// h.Write([]byte{'f','o','o'})
+// h.WriteByte('f'); h.WriteByte('o'); h.WriteByte('o')
+// h.WriteString("foo")
//
// all have the same effect.
//
// HTML5 parsing algorithm because a single token production in the HTML
// grammar may contain embedded actions in a template. For instance, the quoted
// HTML attribute produced by
-// <div title="Hello {{.World}}">
+//
+// <div title="Hello {{.World}}">
+//
// is a single token in HTML's grammar but in a template spans several nodes.
type state uint8
For information about how to program the templates themselves, see the
documentation for text/template.
-Introduction
+# Introduction
This package wraps package text/template so you can share its template API
to parse and execute HTML templates safely.
- tmpl, err := template.New("name").Parse(...)
- // Error checking elided
- err = tmpl.Execute(out, data)
+ tmpl, err := template.New("name").Parse(...)
+ // Error checking elided
+ err = tmpl.Execute(out, data)
If successful, tmpl will now be injection-safe. Otherwise, err is an error
defined in the docs for ErrorCode.
Example
- import "text/template"
- ...
- t, err := template.New("foo").Parse(`{{define "T"}}Hello, {{.}}!{{end}}`)
- err = t.ExecuteTemplate(out, "T", "<script>alert('you have been pwned')</script>")
+ import "text/template"
+ ...
+ t, err := template.New("foo").Parse(`{{define "T"}}Hello, {{.}}!{{end}}`)
+ err = t.ExecuteTemplate(out, "T", "<script>alert('you have been pwned')</script>")
produces
- Hello, <script>alert('you have been pwned')</script>!
+ Hello, <script>alert('you have been pwned')</script>!
but the contextual autoescaping in html/template
- import "html/template"
- ...
- t, err := template.New("foo").Parse(`{{define "T"}}Hello, {{.}}!{{end}}`)
- err = t.ExecuteTemplate(out, "T", "<script>alert('you have been pwned')</script>")
+ import "html/template"
+ ...
+ t, err := template.New("foo").Parse(`{{define "T"}}Hello, {{.}}!{{end}}`)
+ err = t.ExecuteTemplate(out, "T", "<script>alert('you have been pwned')</script>")
produces safe, escaped HTML output
- Hello, &lt;script&gt;alert(&#39;you have been pwned&#39;)&lt;/script&gt;!
+ Hello, &lt;script&gt;alert(&#39;you have been pwned&#39;)&lt;/script&gt;!
-
-Contexts
+# Contexts
This package understands HTML, CSS, JavaScript, and URIs. It adds sanitizing
functions to each simple action pipeline, so given the excerpt
- <a href="/search?q={{.}}">{{.}}</a>
+ <a href="/search?q={{.}}">{{.}}</a>
At parse time each {{.}} is overwritten to add escaping functions as necessary.
In this case it becomes
- <a href="/search?q={{. | urlescaper | attrescaper}}">{{. | htmlescaper}}</a>
+ <a href="/search?q={{. | urlescaper | attrescaper}}">{{. | htmlescaper}}</a>
where urlescaper, attrescaper, and htmlescaper are aliases for internal escaping
functions.
For these internal escaping functions, if an action pipeline evaluates to
a nil interface value, it is treated as though it were an empty string.
-Namespaced and data- attributes
+# Namespaced and data- attributes
Attributes with a namespace are treated as if they had no namespace.
Given the excerpt
- <a my:href="{{.}}"></a>
+ <a my:href="{{.}}"></a>
At parse time the attribute will be treated as if it were just "href".
So at parse time the template becomes:
- <a my:href="{{. | urlescaper | attrescaper}}"></a>
+ <a my:href="{{. | urlescaper | attrescaper}}"></a>
Similarly to attributes with namespaces, attributes with a "data-" prefix are
treated as if they had no "data-" prefix. So given
- <a data-href="{{.}}"></a>
+ <a data-href="{{.}}"></a>
At parse time this becomes
- <a data-href="{{. | urlescaper | attrescaper}}"></a>
+ <a data-href="{{. | urlescaper | attrescaper}}"></a>
If an attribute has both a namespace and a "data-" prefix, only the namespace
will be removed when determining the context. For example
- <a my:data-href="{{.}}"></a>
+ <a my:data-href="{{.}}"></a>
This is handled as if "my:data-href" was just "data-href" and not "href" as
it would be if the "data-" prefix were to be ignored too. Thus at parse
time this becomes just
- <a my:data-href="{{. | attrescaper}}"></a>
+ <a my:data-href="{{. | attrescaper}}"></a>
As a special case, attributes with the namespace "xmlns" are always treated
as containing URLs. Given the excerpts
- <a xmlns:title="{{.}}"></a>
- <a xmlns:href="{{.}}"></a>
- <a xmlns:onclick="{{.}}"></a>
+ <a xmlns:title="{{.}}"></a>
+ <a xmlns:href="{{.}}"></a>
+ <a xmlns:onclick="{{.}}"></a>
At parse time they become:
- <a xmlns:title="{{. | urlescaper | attrescaper}}"></a>
- <a xmlns:href="{{. | urlescaper | attrescaper}}"></a>
- <a xmlns:onclick="{{. | urlescaper | attrescaper}}"></a>
+ <a xmlns:title="{{. | urlescaper | attrescaper}}"></a>
+ <a xmlns:href="{{. | urlescaper | attrescaper}}"></a>
+ <a xmlns:onclick="{{. | urlescaper | attrescaper}}"></a>
-Errors
+# Errors
See the documentation of ErrorCode for details.
-
-A fuller picture
+# A fuller picture
The rest of this package comment may be skipped on first reading; it includes
details necessary to understand escaping contexts and error messages. Most users
will not need to understand these details.
-
-Contexts
+# Contexts
Assuming {{.}} is `O'Reilly: How are <i>you</i>?`, the table below shows
how {{.}} appears when used in the context to the left.
- Context {{.}} After
- {{.}} O'Reilly: How are &lt;i&gt;you&lt;/i&gt;?
- <a title='{{.}}'> O&#39;Reilly: How are you?
- <a href="/{{.}}"> O&#39;Reilly: How are %3ci%3eyou%3c/i%3e?
- <a href="?q={{.}}"> O&#39;Reilly%3a%20How%20are%3ci%3e...%3f
- <a onx='f("{{.}}")'> O\x27Reilly: How are \x3ci\x3eyou...?
- <a onx='f({{.}})'> "O\x27Reilly: How are \x3ci\x3eyou...?"
- <a onx='pattern = /{{.}}/;'> O\x27Reilly: How are \x3ci\x3eyou...\x3f
+ Context {{.}} After
+ {{.}} O'Reilly: How are &lt;i&gt;you&lt;/i&gt;?
+ <a title='{{.}}'> O&#39;Reilly: How are you?
+ <a href="/{{.}}"> O&#39;Reilly: How are %3ci%3eyou%3c/i%3e?
+ <a href="?q={{.}}"> O&#39;Reilly%3a%20How%20are%3ci%3e...%3f
+ <a onx='f("{{.}}")'> O\x27Reilly: How are \x3ci\x3eyou...?
+ <a onx='f({{.}})'> "O\x27Reilly: How are \x3ci\x3eyou...?"
+ <a onx='pattern = /{{.}}/;'> O\x27Reilly: How are \x3ci\x3eyou...\x3f
If used in an unsafe context, then the value might be filtered out:
- Context {{.}} After
- <a href="{{.}}"> #ZgotmplZ
+ Context {{.}} After
+ <a href="{{.}}"> #ZgotmplZ
since "O'Reilly:" is not an allowed protocol like "http:".
-
If {{.}} is the innocuous word, `left`, then it can appear more widely,
- Context {{.}} After
- {{.}} left
- <a title='{{.}}'> left
- <a href='{{.}}'> left
- <a href='/{{.}}'> left
- <a href='?dir={{.}}'> left
- <a style="border-{{.}}: 4px"> left
- <a style="align: {{.}}"> left
- <a style="background: '{{.}}'> left
- <a style="background: url('{{.}}')> left
- <style>p.{{.}} {color:red}</style> left
+ Context {{.}} After
+ {{.}} left
+ <a title='{{.}}'> left
+ <a href='{{.}}'> left
+ <a href='/{{.}}'> left
+ <a href='?dir={{.}}'> left
+ <a style="border-{{.}}: 4px"> left
+ <a style="align: {{.}}"> left
+ <a style="background: '{{.}}'> left
+ <a style="background: url('{{.}}')> left
+ <style>p.{{.}} {color:red}</style> left
Non-string values can be used in JavaScript contexts.
If {{.}} is
- struct{A,B string}{ "foo", "bar" }
+ struct{A,B string}{ "foo", "bar" }
in the escaped template
- <script>var pair = {{.}};</script>
+ <script>var pair = {{.}};</script>
then the template output is
- <script>var pair = {"A": "foo", "B": "bar"};</script>
+ <script>var pair = {"A": "foo", "B": "bar"};</script>
See package json to understand how non-string content is marshaled for
embedding in JavaScript contexts.
-
-Typed Strings
+# Typed Strings
By default, this package assumes that all pipelines produce a plain text string.
It adds escaping pipeline stages necessary to correctly and safely embed that
The template
- Hello, {{.}}!
+ Hello, {{.}}!
can be invoked with
- tmpl.Execute(out, template.HTML(`<b>World</b>`))
+ tmpl.Execute(out, template.HTML(`<b>World</b>`))
to produce
- Hello, <b>World</b>!
+ Hello, <b>World</b>!
instead of the
- Hello, &lt;b&gt;World&lt;b&gt;!
+ Hello, &lt;b&gt;World&lt;b&gt;!
that would have been produced if {{.}} was a regular string.
-
-Security Model
+# Security Model
https://rawgit.com/mikesamuel/sanitized-jquery-templates/trunk/safetemplate.html#problem_definition defines "safe" as used by this package.
//
// Output: "ZgotmplZ"
// Example:
-// <img src="{{.X}}">
-// where {{.X}} evaluates to `javascript:...`
+//
+// <img src="{{.X}}">
+// where {{.X}} evaluates to `javascript:...`
+//
// Discussion:
-// "ZgotmplZ" is a special value that indicates that unsafe content reached a
-// CSS or URL context at runtime. The output of the example will be
-// <img src="#ZgotmplZ">
-// If the data comes from a trusted source, use content types to exempt it
-// from filtering: URL(`javascript:...`).
+//
+// "ZgotmplZ" is a special value that indicates that unsafe content reached a
+// CSS or URL context at runtime. The output of the example will be
+// <img src="#ZgotmplZ">
+// If the data comes from a trusted source, use content types to exempt it
+// from filtering: URL(`javascript:...`).
const (
// OK indicates the lack of an error.
OK ErrorCode = iota
// nudge returns the context that would result from following empty string
// transitions from the input context.
// For example, parsing:
-// `<a href=`
+//
+// `<a href=`
+//
// will end in context{stateBeforeValue, attrURL}, but parsing one extra rune:
-// `<a href=x`
+//
+// `<a href=x`
+//
// will end in context{stateURL, delimSpaceOrTagEnd, ...}.
// There are two transitions that happen when the 'x' is seen:
// (1) Transition from a before-value state to a start-of-value state without
-// consuming any character.
+//
+// consuming any character.
+//
// (2) Consume 'x' and transition past the first value character.
// In this case, nudging produces the context after (1) happens.
func nudge(c context) context {
// <script>(function () {
// var a = [], d = document.getElementById("d"), i, c, s;
// for (i = 0; i < 0x10000; ++i) {
-// c = String.fromCharCode(i);
-// d.innerHTML = "<span title=" + c + "lt" + c + "></span>"
-// s = d.getElementsByTagName("SPAN")[0];
-// if (!s || s.title !== c + "lt" + c) { a.push(i.toString(16)); }
+//
+// c = String.fromCharCode(i);
+// d.innerHTML = "<span title=" + c + "lt" + c + "></span>"
+// s = d.getElementsByTagName("SPAN")[0];
+// if (!s || s.title !== c + "lt" + c) { a.push(i.toString(16)); }
+//
// }
// document.write(a.join(", "));
// })()</script>
//
// missingkey: Control the behavior during execution if a map is
// indexed with a key that is not present in the map.
+//
// "missingkey=default" or "missingkey=invalid"
// The default behavior: Do nothing and continue execution.
// If printed, the result of the index operation is the string
// Must is a helper that wraps a call to a function returning (*Template, error)
// and panics if the error is non-nil. It is intended for use in variable initializations
// such as
+//
// var t = template.Must(template.New("name").Parse("html"))
func Must(t *Template, err error) *Template {
if err != nil {
//
// This filter conservatively assumes that all schemes other than the following
// are unsafe:
-// * http: Navigates to a new website, and may open a new window or tab.
-// These side effects can be reversed by navigating back to the
-// previous website, or closing the window or tab. No irreversible
-// changes will take place without further user interaction with
-// the new website.
-// * https: Same as http.
-// * mailto: Opens an email program and starts a new draft. This side effect
-// is not irreversible until the user explicitly clicks send; it
-// can be undone by closing the email program.
+// - http: Navigates to a new website, and may open a new window or tab.
+// These side effects can be reversed by navigating back to the
+// previous website, or closing the window or tab. No irreversible
+// changes will take place without further user interaction with
+// the new website.
+// - https: Same as http.
+// - mailto: Opens an email program and starts a new draft. This side effect
+// is not irreversible until the user explicitly clicks send; it
+// can be undone by closing the email program.
//
// To allow URLs containing other schemes to bypass this filter, developers must
// explicitly indicate that such a URL is expected and safe by encapsulating it
// image format requires the prior registration of a decoder function.
// Registration is typically automatic as a side effect of initializing that
// format's package so that, to decode a PNG image, it suffices to have
+//
// import _ "image/png"
+//
// in a program's main package. The _ means to import a package purely for its
// initialization side effects.
//
}
// sosHeaderY is the SOS marker "\xff\xda" followed by 8 bytes:
-// - the marker length "\x00\x08",
-// - the number of components "\x01",
-// - component 1 uses DC table 0 and AC table 0 "\x01\x00",
-// - the bytes "\x00\x3f\x00". Section B.2.3 of the spec says that for
-// sequential DCTs, those bytes (8-bit Ss, 8-bit Se, 4-bit Ah, 4-bit Al)
-// should be 0x00, 0x3f, 0x00<<4 | 0x00.
+// - the marker length "\x00\x08",
+// - the number of components "\x01",
+// - component 1 uses DC table 0 and AC table 0 "\x01\x00",
+// - the bytes "\x00\x3f\x00". Section B.2.3 of the spec says that for
+// sequential DCTs, those bytes (8-bit Ss, 8-bit Se, 4-bit Ah, 4-bit Al)
+// should be 0x00, 0x3f, 0x00<<4 | 0x00.
var sosHeaderY = []byte{
0xff, 0xda, 0x00, 0x08, 0x01, 0x01, 0x00, 0x00, 0x3f, 0x00,
}
// sosHeaderYCbCr is the SOS marker "\xff\xda" followed by 12 bytes:
-// - the marker length "\x00\x0c",
-// - the number of components "\x03",
-// - component 1 uses DC table 0 and AC table 0 "\x01\x00",
-// - component 2 uses DC table 1 and AC table 1 "\x02\x11",
-// - component 3 uses DC table 1 and AC table 1 "\x03\x11",
-// - the bytes "\x00\x3f\x00". Section B.2.3 of the spec says that for
-// sequential DCTs, those bytes (8-bit Ss, 8-bit Se, 4-bit Ah, 4-bit Al)
-// should be 0x00, 0x3f, 0x00<<4 | 0x00.
+// - the marker length "\x00\x0c",
+// - the number of components "\x03",
+// - component 1 uses DC table 0 and AC table 0 "\x01\x00",
+// - component 2 uses DC table 1 and AC table 1 "\x02\x11",
+// - component 3 uses DC table 1 and AC table 1 "\x03\x11",
+// - the bytes "\x00\x3f\x00". Section B.2.3 of the spec says that for
+// sequential DCTs, those bytes (8-bit Ss, 8-bit Se, 4-bit Ah, 4-bit Al)
+// should be 0x00, 0x3f, 0x00<<4 | 0x00.
var sosHeaderYCbCr = []byte{
0xff, 0xda, 0x00, 0x0c, 0x03, 0x01, 0x00, 0x02,
0x11, 0x03, 0x11, 0x00, 0x3f, 0x00,
// Read presents one or more IDAT chunks as one continuous stream (minus the
// intermediate chunk headers and footers). If the PNG data looked like:
-// ... len0 IDAT xxx crc0 len1 IDAT yy crc1 len2 IEND crc2
+//
+// ... len0 IDAT xxx crc0 len1 IDAT yy crc1 len2 IEND crc2
+//
// then this reader presents xxxyy. For well-formed PNG data, the decoder state
// immediately before the first Read call is that d.r is positioned between the
// first IDAT and xxx, and the decoder state immediately after the last Read
// that map to separate chroma samples.
// It is not an absolute requirement, but YStride and len(Y) are typically
// multiples of 8, and:
+//
// For 4:4:4, CStride == YStride/1 && len(Cb) == len(Cr) == len(Y)/1.
// For 4:2:2, CStride == YStride/2 && len(Cb) == len(Cr) == len(Y)/2.
// For 4:2:0, CStride == YStride/2 && len(Cb) == len(Cr) == len(Y)/4.
// // lookup byte slice s
// offsets1 := index.Lookup(s, -1) // the list of all indices where s occurs in data
// offsets2 := index.Lookup(s, 3) // the list of at most 3 indices where s occurs in data
-//
package suffixarray
import (
//
// The ordering rules are more general than with Go's < operator:
//
-// - when applicable, nil compares low
-// - ints, floats, and strings order by <
-// - NaN compares less than non-NaN floats
-// - bool compares false before true
-// - complex compares real, then imag
-// - pointers compare by machine address
-// - channel values compare by machine address
-// - structs compare each field in turn
-// - arrays compare each element in turn.
-// Otherwise identical arrays compare by length.
-// - interface values compare first by reflect.Type describing the concrete type
-// and then by concrete value as described in the previous rules.
+// - when applicable, nil compares low
+// - ints, floats, and strings order by <
+// - NaN compares less than non-NaN floats
+// - bool compares false before true
+// - complex compares real, then imag
+// - pointers compare by machine address
+// - channel values compare by machine address
+// - structs compare each field in turn
+// - arrays compare each element in turn.
+// Otherwise identical arrays compare by length.
+// - interface values compare first by reflect.Type describing the concrete type
+// and then by concrete value as described in the previous rules.
func Sort(mapValue reflect.Value) *SortedMap {
if mapValue.Type().Kind() != reflect.Map {
return nil
// specify an alternate resolver func.
// It is not exposed to outside users. (But see issue 12503)
// The value should be the same type as lookupIP:
-// func lookupIP(ctx context.Context, host string) ([]IPAddr, error)
+//
+// func lookupIP(ctx context.Context, host string) ([]IPAddr, error)
type LookupIPAltResolverKey struct{}
// Trace contains a set of hooks for tracing events within
//
// The general format for profilez samples is a sequence of words in
// binary format. The first words are a header with the following data:
-// 1st word -- 0
-// 2nd word -- 3
-// 3rd word -- 0 if a c++ application, 1 if a java application.
-// 4th word -- Sampling period (in microseconds).
-// 5th word -- Padding.
+//
+// 1st word -- 0
+// 2nd word -- 3
+// 3rd word -- 0 if a c++ application, 1 if a java application.
+// 4th word -- Sampling period (in microseconds).
+// 5th word -- Padding.
func parseCPU(b []byte) (*Profile, error) {
var parse func([]byte) (uint64, []byte)
var n1, n2, n3, n4, n5 uint64
//
// profilez samples are a repeated sequence of stack frames of the
// form:
-// 1st word -- The number of times this stack was encountered.
-// 2nd word -- The size of the stack (StackSize).
-// 3rd word -- The first address on the stack.
-// ...
-// StackSize + 2 -- The last address on the stack
+//
+// 1st word -- The number of times this stack was encountered.
+// 2nd word -- The size of the stack (StackSize).
+// 3rd word -- The first address on the stack.
+// ...
+// StackSize + 2 -- The last address on the stack
+//
// The last stack trace is of the form:
-// 1st word -- 0
-// 2nd word -- 1
-// 3rd word -- 0
+//
+// 1st word -- 0
+// 2nd word -- 1
+// 3rd word -- 0
//
// Addresses from stack traces may point to the next instruction after
// each call. Optionally adjust by -1 to land somewhere on the actual
// ToInterface returns v's current value as an interface{}.
// It is equivalent to:
+//
// var i interface{} = (v's underlying value)
+//
// It panics if the Value was obtained by accessing
// unexported struct fields.
func ToInterface(v Value) (i any) {
// available in the memory directly following the rtype value.
//
// tflag values must be kept in sync with copies in:
+//
// cmd/compile/internal/reflectdata/reflect.go
// cmd/link/internal/ld/decodesym.go
// runtime/type.go
//
// The next two bytes are the data length:
//
-// l := uint16(data[1])<<8 | uint16(data[2])
+// l := uint16(data[1])<<8 | uint16(data[2])
//
// Bytes [3:3+l] are the string data.
//
//
// NOTE: This package is a copy of golang.org/x/sys/windows/registry
// with KeyInfo.ModTime removed to prevent dependency cycles.
-//
package registry
import (
//
// The goals for the format are:
//
-// - be trivial enough to create and edit by hand.
-// - be able to store trees of text files describing go command test cases.
-// - diff nicely in git history and code reviews.
+// - be trivial enough to create and edit by hand.
+// - be able to store trees of text files describing go command test cases.
+// - diff nicely in git history and code reviews.
//
// Non-goals include being a completely general archive format,
// storing binary data, storing file modes, storing special files like
// symbolic links, and so on.
//
-// Txtar format
+// # Txtar format
//
// A txtar archive is zero or more comment lines and then a sequence of file entries.
// Each file entry begins with a file marker line of the form "-- FILENAME --"
// The prefix is followed by a colon only when Llongfile or Lshortfile
// is specified.
// For example, flags Ldate | Ltime (or LstdFlags) produce,
+//
// 2009/01/23 01:23:23 message
+//
// while flags Ldate | Ltime | Lmicroseconds | Llongfile produce,
+//
// 2009/01/23 01:23:23.123123 /a/b/c/d.go:23: message
const (
Ldate = 1 << iota // the date in the local time zone: 2009/01/23
}
// formatHeader writes log header to buf in following order:
-// * l.prefix (if it's not blank and Lmsgprefix is unset),
-// * date and/or time (if corresponding flags are provided),
-// * file and line number (if corresponding flags are provided),
-// * l.prefix (if it's not blank and Lmsgprefix is set).
+// - l.prefix (if it's not blank and Lmsgprefix is unset),
+// - date and/or time (if corresponding flags are provided),
+// - file and line number (if corresponding flags are provided),
+// - l.prefix (if it's not blank and Lmsgprefix is set).
func (l *Logger) formatHeader(buf *[]byte, t time.Time, file string, line int) {
if l.flag&Lmsgprefix == 0 {
*buf = append(*buf, l.prefix...)
// The syslog package is frozen and is not accepting new features.
// Some external packages provide more functionality. See:
//
-// https://godoc.org/?q=syslog
+// https://godoc.org/?q=syslog
package syslog
// BUG(brainman): This package is not implemented on Windows. As the
// Abs returns the absolute value of x.
//
// Special cases are:
+//
// Abs(±Inf) = +Inf
// Abs(NaN) = NaN
func Abs(x float64) float64 {
// Acosh returns the inverse hyperbolic cosine of x.
//
// Special cases are:
+//
// Acosh(+Inf) = +Inf
// Acosh(x) = NaN if x < 1
// Acosh(NaN) = NaN
// Asin returns the arcsine, in radians, of x.
//
// Special cases are:
+//
// Asin(±0) = ±0
// Asin(x) = NaN if x < -1 or x > 1
func Asin(x float64) float64 {
// Acos returns the arccosine, in radians, of x.
//
// Special case is:
+//
// Acos(x) = NaN if x < -1 or x > 1
func Acos(x float64) float64 {
if haveArchAcos {
// Asinh returns the inverse hyperbolic sine of x.
//
// Special cases are:
+//
// Asinh(±0) = ±0
// Asinh(±Inf) = ±Inf
// Asinh(NaN) = NaN
// Atan returns the arctangent, in radians, of x.
//
// Special cases are:
-// Atan(±0) = ±0
-// Atan(±Inf) = ±Pi/2
+//
+// Atan(±0) = ±0
+// Atan(±Inf) = ±Pi/2
func Atan(x float64) float64 {
if haveArchAtan {
return archAtan(x)
// of the return value.
//
// Special cases are (in order):
+//
// Atan2(y, NaN) = NaN
// Atan2(NaN, x) = NaN
// Atan2(+0, x>=0) = +0
// Atanh returns the inverse hyperbolic tangent of x.
//
// Special cases are:
+//
// Atanh(1) = +Inf
// Atanh(±0) = ±0
// Atanh(-1) = -Inf
)
// Use the classic continued fraction for e
-// e = [1; 0, 1, 1, 2, 1, 1, ... 2n, 1, 1, ...]
+//
+// e = [1; 0, 1, 1, 2, 1, 1, ... 2n, 1, 1, ...]
+//
// i.e., for the nth term, use
-// 1 if n mod 3 != 1
-// (n-1)/3 * 2 if n mod 3 == 1
+//
+// 1 if n mod 3 != 1
+// (n-1)/3 * 2 if n mod 3 == 1
func recur(n, lim int64) *big.Rat {
term := new(big.Rat)
if n%3 != 1 {
// A nonzero finite Float represents a multi-precision floating point number
//
-// sign × mantissa × 2**exponent
+// sign × mantissa × 2**exponent
//
// with 0.5 <= mantissa < 1.0, and MinExp <= exponent <= MaxExp.
// A Float may also be zero (+0, -0) or infinite (+Inf, -Inf).
// Cmp compares x and y and returns:
//
-// -1 if x < y
-// 0 if x == y (incl. -0 == 0, -Inf == -Inf, and +Inf == +Inf)
-// +1 if x > y
+// -1 if x < y
+// 0 if x == y (incl. -0 == 0, -Inf == -Inf, and +Inf == +Inf)
+// +1 if x > y
func (x *Float) Cmp(y *Float) int {
if debugFloat {
x.validate()
// If z's precision is 0, it is changed to 64 before rounding takes effect.
// The number must be of the form:
//
-// number = [ sign ] ( float | "inf" | "Inf" ) .
-// sign = "+" | "-" .
-// float = ( mantissa | prefix pmantissa ) [ exponent ] .
-// prefix = "0" [ "b" | "B" | "o" | "O" | "x" | "X" ] .
-// mantissa = digits "." [ digits ] | digits | "." digits .
-// pmantissa = [ "_" ] digits "." [ digits ] | [ "_" ] digits | "." digits .
-// exponent = ( "e" | "E" | "p" | "P" ) [ sign ] digits .
-// digits = digit { [ "_" ] digit } .
-// digit = "0" ... "9" | "a" ... "z" | "A" ... "Z" .
+// number = [ sign ] ( float | "inf" | "Inf" ) .
+// sign = "+" | "-" .
+// float = ( mantissa | prefix pmantissa ) [ exponent ] .
+// prefix = "0" [ "b" | "B" | "o" | "O" | "x" | "X" ] .
+// mantissa = digits "." [ digits ] | digits | "." digits .
+// pmantissa = [ "_" ] digits "." [ digits ] | [ "_" ] digits | "." digits .
+// exponent = ( "e" | "E" | "p" | "P" ) [ sign ] digits .
+// digits = digit { [ "_" ] digit } .
+// digit = "0" ... "9" | "a" ... "z" | "A" ... "Z" .
//
// The base argument must be 0, 2, 8, 10, or 16. Providing an invalid base
// argument will lead to a run-time panic.
// Cmp compares x and y and returns:
//
-// -1 if x < y
-// 0 if x == y
-// +1 if x > y
+// -1 if x < y
+// 0 if x == y
+// +1 if x > y
func (x *Int) Cmp(y *Int) (r int) {
// x cmp y == x cmp y
// x cmp (-y) == x
// CmpAbs compares the absolute values of x and y and returns:
//
-// -1 if |x| < |y|
-// 0 if |x| == |y|
-// +1 if |x| > |y|
+// -1 if |x| < |y|
+// 0 if |x| == |y|
+// +1 if |x| > |y|
func (x *Int) CmpAbs(y *Int) int {
return x.abs.cmp(y.abs)
}
// lehmerSimulate attempts to simulate several Euclidean update steps
// using the leading digits of A and B. It returns u0, u1, v0, v1
// such that A and B can be updated as:
-// A = u0*A + v0*B
-// B = u1*A + v1*B
+//
+// A = u0*A + v0*B
+// B = u1*A + v1*B
+//
// Requirements: A >= B and len(B.abs) >= 2
// Since we are calculating with full words to avoid overflow,
// we use 'even' to track the sign of the cosequences.
}
// lehmerUpdate updates the inputs A and B such that:
-// A = u0*A + v0*B
-// B = u1*A + v1*B
+//
+// A = u0*A + v0*B
+// B = u1*A + v1*B
+//
// where the signs of u0, u1, v0, v1 are given by even
// For even == true: u0, v1 >= 0 && u1, v0 <= 0
// For even == false: u0, v1 <= 0 && u1, v0 >= 0
}
// modSqrt3Mod4Prime uses the identity
-// (a^((p+1)/4))^2 mod p
-// == u^(p+1) mod p
-// == u^2 mod p
+//
+// (a^((p+1)/4))^2 mod p
+// == u^(p+1) mod p
+// == u^2 mod p
+//
// to calculate the square root of any quadratic residue mod p quickly for 3
// mod 4 primes.
func (z *Int) modSqrt3Mod4Prime(x, p *Int) *Int {
}
// modSqrt5Mod8Prime uses Atkin's observation that 2 is not a square mod p
-// alpha == (2*a)^((p-5)/8) mod p
-// beta == 2*a*alpha^2 mod p is a square root of -1
-// b == a*alpha*(beta-1) mod p is a square root of a
+//
+// alpha == (2*a)^((p-5)/8) mod p
+// beta == 2*a*alpha^2 mod p is a square root of -1
+// b == a*alpha*(beta-1) mod p is a square root of a
+//
// to calculate the square root of any quadratic residue mod p quickly for 5
// mod 8 primes.
func (z *Int) modSqrt5Mod8Prime(x, p *Int) *Int {
// An unsigned integer x of the form
//
-// x = x[n-1]*_B^(n-1) + x[n-2]*_B^(n-2) + ... + x[1]*_B + x[0]
+// x = x[n-1]*_B^(n-1) + x[n-2]*_B^(n-2) + ... + x[1]*_B + x[0]
//
// with 0 <= x[i] < _B and 0 <= i < n is stored in a slice of length n,
// with the digits x[i] as the slice elements.
// not recognized and thus terminate scanning like any other character
// that is not a valid radix point or digit.
//
-// number = mantissa | prefix pmantissa .
-// prefix = "0" [ "b" | "B" | "o" | "O" | "x" | "X" ] .
-// mantissa = digits "." [ digits ] | digits | "." digits .
-// pmantissa = [ "_" ] digits "." [ digits ] | [ "_" ] digits | "." digits .
-// digits = digit { [ "_" ] digit } .
-// digit = "0" ... "9" | "a" ... "z" | "A" ... "Z" .
+// number = mantissa | prefix pmantissa .
+// prefix = "0" [ "b" | "B" | "o" | "O" | "x" | "X" ] .
+// mantissa = digits "." [ digits ] | digits | "." digits .
+// pmantissa = [ "_" ] digits "." [ digits ] | [ "_" ] digits | "." digits .
+// digits = digit { [ "_" ] digit } .
+// digit = "0" ... "9" | "a" ... "z" | "A" ... "Z" .
//
// Unless fracOk is set, the base argument must be 0 or a value between
// 2 and MaxBase. If fracOk is set, the base argument must be one of
// Split blocks greater than leafSize Words (or set to 0 to disable recursive conversion)
// Benchmark and configure leafSize using: go test -bench="Leaf"
-// 8 and 16 effective on 3.0 GHz Xeon "Clovertown" CPU (128 byte cache lines)
-// 8 and 16 effective on 2.66 GHz Core 2 Duo "Penryn" CPU
+//
+// 8 and 16 effective on 3.0 GHz Xeon "Clovertown" CPU (128 byte cache lines)
+// 8 and 16 effective on 2.66 GHz Core 2 Duo "Penryn" CPU
var leafSize int = 8 // number of Word-size binary values treated as a monolithic block
type divisor struct {
// Cmp compares x and y and returns:
//
-// -1 if x < y
-// 0 if x == y
-// +1 if x > y
+// -1 if x < y
+// 0 if x == y
+// +1 if x > y
func (x *Rat) Cmp(y *Rat) int {
var a, b Int
a.scaleDenom(&x.a, y.b.abs)
}
// Compute √x (to z.prec precision) by solving
-// 1/t² - x = 0
+//
+// 1/t² - x = 0
+//
// for t (using Newton's method), and then inverting.
func (z *Float) sqrtInverse(x *Float) {
// let
// Cbrt returns the cube root of x.
//
// Special cases are:
+//
// Cbrt(±0) = ±0
// Cbrt(±Inf) = ±Inf
// Cbrt(NaN) = NaN
// Pow returns x**y, the base-x exponential of y.
// For generalized compatibility with math.Pow:
+//
// Pow(0, ±0) returns 1+0i
// Pow(0, c) for real(c)<0 returns Inf+0i if imag(c) is zero, otherwise Inf+Inf i.
func Pow(x, y complex128) complex128 {
// Dim returns the maximum of x-y or 0.
//
// Special cases are:
+//
// Dim(+Inf, +Inf) = NaN
// Dim(-Inf, -Inf) = NaN
// Dim(x, NaN) = Dim(NaN, x) = NaN
// Max returns the larger of x or y.
//
// Special cases are:
+//
// Max(x, +Inf) = Max(+Inf, x) = +Inf
// Max(x, NaN) = Max(NaN, x) = NaN
// Max(+0, ±0) = Max(±0, +0) = +0
// Min returns the smaller of x or y.
//
// Special cases are:
+//
// Min(x, -Inf) = Min(-Inf, x) = -Inf
// Min(x, NaN) = Min(NaN, x) = NaN
// Min(-0, ±0) = Min(±0, -0) = -0
// Erf returns the error function of x.
//
// Special cases are:
+//
// Erf(+Inf) = 1
// Erf(-Inf) = -1
// Erf(NaN) = NaN
// Erfc returns the complementary error function of x.
//
// Special cases are:
+//
// Erfc(+Inf) = 0
// Erfc(-Inf) = 2
// Erfc(NaN) = NaN
// Erfinv returns the inverse error function of x.
//
// Special cases are:
+//
// Erfinv(1) = +Inf
// Erfinv(-1) = -Inf
// Erfinv(x) = NaN if x < -1 or x > 1
// Erfcinv returns the inverse of Erfc(x).
//
// Special cases are:
+//
// Erfcinv(0) = +Inf
// Erfcinv(2) = -Inf
// Erfcinv(x) = NaN if x < 0 or x > 2
// Exp returns e**x, the base-e exponential of x.
//
// Special cases are:
+//
// Exp(+Inf) = +Inf
// Exp(NaN) = NaN
+//
// Very large values overflow to 0 or +Inf.
// Very small values underflow to 1.
func Exp(x float64) float64 {
// It is more accurate than Exp(x) - 1 when x is near zero.
//
// Special cases are:
+//
// Expm1(+Inf) = +Inf
// Expm1(-Inf) = -1
// Expm1(NaN) = NaN
+//
// Very large values overflow to -1 or +Inf.
func Expm1(x float64) float64 {
if haveArchExpm1 {
// Floor returns the greatest integer value less than or equal to x.
//
// Special cases are:
+//
// Floor(±0) = ±0
// Floor(±Inf) = ±Inf
// Floor(NaN) = NaN
// Ceil returns the least integer value greater than or equal to x.
//
// Special cases are:
+//
// Ceil(±0) = ±0
// Ceil(±Inf) = ±Inf
// Ceil(NaN) = NaN
// Trunc returns the integer value of x.
//
// Special cases are:
+//
// Trunc(±0) = ±0
// Trunc(±Inf) = ±Inf
// Trunc(NaN) = NaN
// Round returns the nearest integer, rounding half away from zero.
//
// Special cases are:
+//
// Round(±0) = ±0
// Round(±Inf) = ±Inf
// Round(NaN) = NaN
// RoundToEven returns the nearest integer, rounding ties to even.
//
// Special cases are:
+//
// RoundToEven(±0) = ±0
// RoundToEven(±Inf) = ±Inf
// RoundToEven(NaN) = NaN
// with the absolute value of frac in the interval [½, 1).
//
// Special cases are:
+//
// Frexp(±0) = ±0, 0
// Frexp(±Inf) = ±Inf, 0
// Frexp(NaN) = NaN, 0
// Gamma returns the Gamma function of x.
//
// Special cases are:
+//
// Gamma(+Inf) = +Inf
// Gamma(+0) = +Inf
// Gamma(-0) = -Inf
// unnecessary overflow and underflow.
//
// Special cases are:
+//
// Hypot(±Inf, q) = +Inf
// Hypot(p, ±Inf) = +Inf
// Hypot(NaN, q) = NaN
// J0 returns the order-zero Bessel function of the first kind.
//
// Special cases are:
+//
// J0(±Inf) = 0
// J0(0) = 1
// J0(NaN) = NaN
// Y0 returns the order-zero Bessel function of the second kind.
//
// Special cases are:
+//
// Y0(+Inf) = 0
// Y0(0) = -Inf
// Y0(x < 0) = NaN
// J1 returns the order-one Bessel function of the first kind.
//
// Special cases are:
+//
// J1(±Inf) = 0
// J1(NaN) = NaN
func J1(x float64) float64 {
// Y1 returns the order-one Bessel function of the second kind.
//
// Special cases are:
+//
// Y1(+Inf) = 0
// Y1(0) = -Inf
// Y1(x < 0) = NaN
// Jn returns the order-n Bessel function of the first kind.
//
// Special cases are:
+//
// Jn(n, ±Inf) = 0
// Jn(n, NaN) = NaN
func Jn(n int, x float64) float64 {
// Yn returns the order-n Bessel function of the second kind.
//
// Special cases are:
+//
// Yn(n, +Inf) = 0
// Yn(n ≥ 0, 0) = -Inf
// Yn(n < 0, 0) = +Inf if n is odd, -Inf if n is even
// It returns frac × 2**exp.
//
// Special cases are:
+//
// Ldexp(±0, exp) = ±0
// Ldexp(±Inf, exp) = ±Inf
// Ldexp(NaN, exp) = NaN
// Lgamma returns the natural logarithm and sign (-1 or +1) of Gamma(x).
//
// Special cases are:
+//
// Lgamma(+Inf) = +Inf
// Lgamma(0) = +Inf
// Lgamma(-integer) = +Inf
// Log returns the natural logarithm of x.
//
// Special cases are:
+//
// Log(+Inf) = +Inf
// Log(0) = -Inf
// Log(x < 0) = NaN
// It is more accurate than Log(1 + x) when x is near zero.
//
// Special cases are:
+//
// Log1p(+Inf) = +Inf
// Log1p(±0) = ±0
// Log1p(-1) = -Inf
// Logb returns the binary exponent of x.
//
// Special cases are:
+//
// Logb(±Inf) = +Inf
// Logb(0) = -Inf
// Logb(NaN) = NaN
// Ilogb returns the binary exponent of x as an integer.
//
// Special cases are:
+//
// Ilogb(±Inf) = MaxInt32
// Ilogb(0) = MinInt32
// Ilogb(NaN) = MaxInt32
// sign agrees with that of x.
//
// Special cases are:
+//
// Mod(±Inf, y) = NaN
// Mod(NaN, y) = NaN
// Mod(x, 0) = NaN
// that sum to f. Both values have the same sign as f.
//
// Special cases are:
+//
// Modf(±Inf) = ±Inf, NaN
// Modf(NaN) = NaN, NaN
func Modf(f float64) (int float64, frac float64) {
// Nextafter32 returns the next representable float32 value after x towards y.
//
// Special cases are:
+//
// Nextafter32(x, x) = x
// Nextafter32(NaN, y) = NaN
// Nextafter32(x, NaN) = NaN
// Nextafter returns the next representable float64 value after x towards y.
//
// Special cases are:
+//
// Nextafter(x, x) = x
// Nextafter(NaN, y) = NaN
// Nextafter(x, NaN) = NaN
// Pow returns x**y, the base-x exponential of y.
//
// Special cases are (in order):
+//
// Pow(x, ±0) = 1 for any x
// Pow(1, y) = 1 for any y
// Pow(x, 1) = x for any x
// Pow10 returns 10**n, the base-10 exponential of n.
//
// Special cases are:
+//
// Pow10(n) = 0 for n < -323
// Pow10(n) = +Inf for n > 308
func Pow10(n int) float64 {
// To produce a distribution with a different rate parameter,
// callers can adjust the output using:
//
-// sample = ExpFloat64() / desiredRateParameter
+// sample = ExpFloat64() / desiredRateParameter
func (r *Rand) ExpFloat64() float64 {
for {
j := r.Uint32()
// To produce a different normal distribution, callers can
// adjust the output using:
//
-// sample = NormFloat64() * desiredStdDev + desiredMean
+// sample = NormFloat64() * desiredStdDev + desiredMean
func (r *Rand) NormFloat64() float64 {
for {
j := int32(r.Uint32()) // Possibly negative
// To produce a different normal distribution, callers can
// adjust the output using:
//
-// sample = NormFloat64() * desiredStdDev + desiredMean
+// sample = NormFloat64() * desiredStdDev + desiredMean
func NormFloat64() float64 { return globalRand.NormFloat64() }
// ExpFloat64 returns an exponentially distributed float64 in the range
// To produce a distribution with a different rate parameter,
// callers can adjust the output using:
//
-// sample = ExpFloat64() / desiredRateParameter
+// sample = ExpFloat64() / desiredRateParameter
func ExpFloat64() float64 { return globalRand.ExpFloat64() }
type lockedSource struct {
// Remainder returns the IEEE 754 floating-point remainder of x/y.
//
// Special cases are:
+//
// Remainder(±Inf, y) = NaN
// Remainder(NaN, y) = NaN
// Remainder(x, 0) = NaN
// Cos returns the cosine of the radian argument x.
//
// Special cases are:
+//
// Cos(±Inf) = NaN
// Cos(NaN) = NaN
func Cos(x float64) float64 {
// Sin returns the sine of the radian argument x.
//
// Special cases are:
+//
// Sin(±0) = ±0
// Sin(±Inf) = NaN
// Sin(NaN) = NaN
// Sincos returns Sin(x), Cos(x).
//
// Special cases are:
+//
// Sincos(±0) = ±0, 1
// Sincos(±Inf) = NaN, NaN
// Sincos(NaN) = NaN, NaN
// Sinh returns the hyperbolic sine of x.
//
// Special cases are:
+//
// Sinh(±0) = ±0
// Sinh(±Inf) = ±Inf
// Sinh(NaN) = NaN
// Cosh returns the hyperbolic cosine of x.
//
// Special cases are:
+//
// Cosh(±0) = 1
// Cosh(±Inf) = +Inf
// Cosh(NaN) = NaN
// Sqrt returns the square root of x.
//
// Special cases are:
+//
// Sqrt(+Inf) = +Inf
// Sqrt(±0) = ±0
// Sqrt(x < 0) = NaN
// Tan returns the tangent of the radian argument x.
//
// Special cases are:
+//
// Tan(±0) = ±0
// Tan(±Inf) = NaN
// Tan(NaN) = NaN
// Tanh returns the hyperbolic tangent of x.
//
// Special cases are:
+//
// Tanh(±0) = ±0
// Tanh(±Inf) = ±1
// Tanh(NaN) = NaN
// where y is given by y = floor(x * (4 / Pi)) and C is the leading partial
// terms of 4/Pi. Since the leading terms (PI4A and PI4B in sin.go) have 30
// and 32 trailing zero bits, y should have less than 30 significant bits.
+//
// y < 1<<30 -> floor(x*4/Pi) < 1<<30 -> x < (1<<30 - 1) * Pi/4
+//
// So, conservatively we can take x < 1<<29.
// Above this threshold Payne-Hanek range reduction must be used.
const reduceThreshold = 1 << 29
// skipLWSPChar returns b with leading spaces and tabs removed.
// RFC 822 defines:
-// LWSP-char = SPACE / HTAB
+//
+// LWSP-char = SPACE / HTAB
func skipLWSPChar(b []byte) []byte {
for len(b) > 0 && (b[0] == ' ' || b[0] == '\t') {
b = b[1:]
// system's MIME-info database or mime.types file(s) if available under one or
// more of these names:
//
-// /usr/local/share/mime/globs2
-// /usr/share/mime/globs2
-// /etc/mime.types
-// /etc/apache2/mime.types
-// /etc/apache/mime.types
+// /usr/local/share/mime/globs2
+// /usr/share/mime/globs2
+// /etc/mime.types
+// /etc/apache2/mime.types
+// /etc/apache/mime.types
//
// On Windows, MIME types are extracted from the registry.
//
// These are roughly enough for the following:
//
-// Source Encoding Maximum length of single name entry
-// Unicast DNS ASCII or <=253 + a NUL terminator
-// Unicode in RFC 5892 252 * total number of labels + delimiters + a NUL terminator
-// Multicast DNS UTF-8 in RFC 5198 or <=253 + a NUL terminator
-// the same as unicast DNS ASCII <=253 + a NUL terminator
-// Local database various depends on implementation
+// Source Encoding Maximum length of single name entry
+// Unicast DNS ASCII or <=253 + a NUL terminator
+// Unicode in RFC 5892 252 * total number of labels + delimiters + a NUL terminator
+// Multicast DNS UTF-8 in RFC 5198 or <=253 + a NUL terminator
+// the same as unicast DNS ASCII <=253 + a NUL terminator
+// Local database various depends on implementation
const (
nameinfoLen = 64
maxNameinfoLen = 4096
// goDebugNetDNS parses the value of the GODEBUG "netdns" value.
// The netdns value can be of the form:
-// 1 // debug level 1
-// 2 // debug level 2
-// cgo // use cgo for DNS lookups
-// go // use go for DNS lookups
-// cgo+1 // use cgo for DNS lookups + debug level 1
-// 1+cgo // same
-// cgo+2 // same, but debug level 2
+//
+// 1 // debug level 1
+// 2 // debug level 2
+// cgo // use cgo for DNS lookups
+// go // use go for DNS lookups
+// cgo+1 // use cgo for DNS lookups + debug level 1
+// 1+cgo // same
+// cgo+2 // same, but debug level 2
+//
// etc.
func goDebugNetDNS() (dnsMode string, debugLevel int) {
goDebug := godebug.Get("netdns")
// - now+Timeout
// - d.Deadline
// - the context's deadline
+//
// Or zero, if none of Timeout, Deadline, or context's deadline is set.
func (d *Dialer) deadline(ctx context.Context, now time.Time) (earliest time.Time) {
if d.Timeout != 0 { // including negative, for historical reasons
// Dial will try each IP address in order until one succeeds.
//
// Examples:
+//
// Dial("tcp", "golang.org:http")
// Dial("tcp", "192.0.2.1:http")
// Dial("tcp", "198.51.100.1:80")
// behaves with a non-well known protocol number such as "0" or "255".
//
// Examples:
+//
// Dial("ip4:1", "192.0.2.1")
// Dial("ip6:ipv6-icmp", "2001:db8::1")
// Dial("ip6:58", "fe80::1%lo0")
// removeLeadingDuplicates removes leading duplicates from the environment.
// It is possible to override an environment variable like the following.
-// cgi.Handler{
-// ...
-// Env: []string{"SCRIPT_FILENAME=foo.php"},
-// }
+//
+// cgi.Handler{
+// ...
+// Env: []string{"SCRIPT_FILENAME=foo.php"},
+// }
func removeLeadingDuplicates(env []string) (ret []string) {
for i, e := range env {
found := false
// the following redirect codes, Get follows the redirect, up to a
// maximum of 10 redirects:
//
-// 301 (Moved Permanently)
-// 302 (Found)
-// 303 (See Other)
-// 307 (Temporary Redirect)
-// 308 (Permanent Redirect)
+// 301 (Moved Permanently)
+// 302 (Found)
+// 303 (See Other)
+// 307 (Temporary Redirect)
+// 308 (Permanent Redirect)
//
// An error is returned if there were too many redirects or if there
// was an HTTP protocol error. A non-2xx response doesn't cause an
// following redirect codes, Get follows the redirect after calling the
// Client's CheckRedirect function:
//
-// 301 (Moved Permanently)
-// 302 (Found)
-// 303 (See Other)
-// 307 (Temporary Redirect)
-// 308 (Permanent Redirect)
+// 301 (Moved Permanently)
+// 302 (Found)
+// 303 (See Other)
+// 307 (Temporary Redirect)
+// 308 (Permanent Redirect)
//
// An error is returned if the Client's CheckRedirect function fails
// or if there was an HTTP protocol error. A non-2xx response doesn't
// the following redirect codes, Head follows the redirect, up to a
// maximum of 10 redirects:
//
-// 301 (Moved Permanently)
-// 302 (Found)
-// 303 (See Other)
-// 307 (Temporary Redirect)
-// 308 (Permanent Redirect)
+// 301 (Moved Permanently)
+// 302 (Found)
+// 303 (See Other)
+// 307 (Temporary Redirect)
+// 308 (Permanent Redirect)
//
-// Head is a wrapper around DefaultClient.Head
+// Head is a wrapper around DefaultClient.Head.
//
// To make a request with a specified context.Context, use NewRequestWithContext
// and DefaultClient.Do.
// following redirect codes, Head follows the redirect after calling the
// Client's CheckRedirect function:
//
-// 301 (Moved Permanently)
-// 302 (Found)
-// 303 (See Other)
-// 307 (Temporary Redirect)
-// 308 (Permanent Redirect)
+// 301 (Moved Permanently)
+// 302 (Found)
+// 303 (See Other)
+// 307 (Temporary Redirect)
+// 308 (Permanent Redirect)
//
// To make a request with a specified context.Context, use NewRequestWithContext
// and Client.Do.
}
// cancelTimerBody is an io.ReadCloser that wraps rc with two features:
-// 1) On Read error or close, the stop func is called.
-// 2) On Read failure, if reqDidTimeout is true, the error is wrapped and
+// 1. On Read error or close, the stop func is called.
+// 2. On Read failure, if reqDidTimeout is true, the error is wrapped and
// marked as net.Error that hit its timeout.
type cancelTimerBody struct {
stop func() // stops the time.Timer waiting to cancel the request
// sanitizeCookieValue produces a suitable cookie-value from v.
// https://tools.ietf.org/html/rfc6265#section-4.1.1
-// cookie-value = *cookie-octet / ( DQUOTE *cookie-octet DQUOTE )
-// cookie-octet = %x21 / %x23-2B / %x2D-3A / %x3C-5B / %x5D-7E
-// ; US-ASCII characters excluding CTLs,
-// ; whitespace DQUOTE, comma, semicolon,
-// ; and backslash
+//
+// cookie-value = *cookie-octet / ( DQUOTE *cookie-octet DQUOTE )
+// cookie-octet = %x21 / %x23-2B / %x2D-3A / %x3C-5B / %x5D-7E
+// ; US-ASCII characters excluding CTLs,
+// ; whitespace DQUOTE, comma, semicolon,
+// ; and backslash
+//
// We loosen this as spaces and commas are common in cookie values
// but we produce a quoted cookie-value if and only if v contains
// commas or spaces.
)
// PublicSuffixList provides the public suffix of a domain. For example:
-// - the public suffix of "example.com" is "com",
-// - the public suffix of "foo1.foo2.foo3.co.uk" is "co.uk", and
-// - the public suffix of "bar.pvt.k12.ma.us" is "pvt.k12.ma.us".
+// - the public suffix of "example.com" is "com",
+// - the public suffix of "foo1.foo2.foo3.co.uk" is "co.uk", and
+// - the public suffix of "bar.pvt.k12.ma.us" is "pvt.k12.ma.us".
//
// Implementations of PublicSuffixList must be safe for concurrent use by
// multiple goroutines.
// testPSL implements PublicSuffixList with just two rules: "co.uk"
// and the default rule "*".
// The implementation has two intentional bugs:
-// PublicSuffix("www.buggy.psl") == "xy"
-// PublicSuffix("www2.buggy.psl") == "com"
+//
+// PublicSuffix("www.buggy.psl") == "xy"
+// PublicSuffix("www2.buggy.psl") == "com"
type testPSL struct{}
func (testPSL) String() string {
}
// jarTest encapsulates the following actions on a jar:
-// 1. Perform SetCookies with fromURL and the cookies from setCookies.
-// (Done at time tNow + 0 ms.)
-// 2. Check that the entries in the jar matches content.
-// (Done at time tNow + 1001 ms.)
-// 3. For each query in tests: Check that Cookies with toURL yields the
-// cookies in want.
-// (Query n done at tNow + (n+2)*1001 ms.)
+// 1. Perform SetCookies with fromURL and the cookies from setCookies.
+// (Done at time tNow + 0 ms.)
+// 2. Check that the entries in the jar matches content.
+// (Done at time tNow + 1001 ms.)
+// 3. For each query in tests: Check that Cookies with toURL yields the
+// cookies in want.
+// (Query n done at tNow + (n+2)*1001 ms.)
type jarTest struct {
description string // The description of what this test is supposed to test
fromURL string // The full URL of the request from which Set-Cookie headers where received
functions. Manually configuring HTTP/2 via the golang.org/x/net/http2
package takes precedence over the net/http package's built-in HTTP/2
support.
-
*/
package http
// The typical use case for NewFileTransport is to register the "file"
// protocol with a Transport, as in:
//
-// t := &http.Transport{}
-// t.RegisterProtocol("file", http.NewFileTransport(http.Dir("/")))
-// c := &http.Client{Transport: t}
-// res, err := c.Get("file:///etc/passwd")
-// ...
+// t := &http.Transport{}
+// t.RegisterProtocol("file", http.NewFileTransport(http.Dir("/")))
+// c := &http.Client{Transport: t}
+// res, err := c.Get("file:///etc/passwd")
+// ...
func NewFileTransport(fs FileSystem) RoundTripper {
return fileTransport{fileHandler{fs}}
}
// To use the operating system's file system implementation,
// use http.Dir:
//
-// http.Handle("/", http.FileServer(http.Dir("/tmp")))
+// http.Handle("/", http.FileServer(http.Dir("/tmp")))
//
// To use an fs.FS implementation, use http.FS to convert it:
//
// name (key). See httpguts.ValidHeaderName for the base rules.
//
// Further, http2 says:
-// "Just as in HTTP/1.x, header field names are strings of ASCII
-// characters that are compared in a case-insensitive
-// fashion. However, header field names MUST be converted to
-// lowercase prior to their encoding in HTTP/2. "
+//
+// "Just as in HTTP/1.x, header field names are strings of ASCII
+// characters that are compared in a case-insensitive
+// fashion. However, header field names MUST be converted to
+// lowercase prior to their encoding in HTTP/2. "
func http2validWireHeaderFieldName(v string) bool {
if len(v) == 0 {
return false
// validPseudoPath reports whether v is a valid :path pseudo-header
// value. It must be either:
//
-// *) a non-empty string starting with '/'
-// *) the string '*', for OPTIONS requests.
+// *) a non-empty string starting with '/'
+// *) the string '*', for OPTIONS requests.
//
// For now this is only used a quick check for deciding when to clean
// up Opaque URLs before sending requests from the Transport.
// prior to the headers being written. If the set of trailers is fixed
// or known before the header is written, the normal Go trailers mechanism
// is preferred:
-// https://golang.org/pkg/net/http/#ResponseWriter
-// https://golang.org/pkg/net/http/#example_ResponseWriter_trailers
+//
+// https://golang.org/pkg/net/http/#ResponseWriter
+// https://golang.org/pkg/net/http/#example_ResponseWriter_trailers
const http2TrailerPrefix = "Trailer:"
// promoteUndeclaredTrailers permits http.Handlers to set trailers
// When debugging a particular http server-based test,
// this flag lets you run
+//
// go test -run=BrokenTest -httptest.serve=127.0.0.1:8000
+//
// to start the broken server so you can interact with it manually.
// We only register this flag if it looks like the caller knows about it
// and is trying to use it as we don't want to pollute flags and this
// removeChunkExtension removes any chunk-extension from p.
// For example,
-// "0" => "0"
-// "0;token" => "0"
-// "0;token=val" => "0"
-// `0;token="quoted string"` => "0"
+//
+// "0" => "0"
+// "0;token" => "0"
+// "0;token=val" => "0"
+// `0;token="quoted string"` => "0"
func removeChunkExtension(p []byte) ([]byte, error) {
p, _, _ = bytes.Cut(p, semi)
// TODO: care about exact syntax of chunk extensions? We're
// The handled paths all begin with /debug/pprof/.
//
// To use pprof, link this package into your program:
+//
// import _ "net/http/pprof"
//
// If your application is not already running an http server, you
// need to start one. Add "net/http" and "log" to your imports and
// the following code to your main function:
//
-// go func() {
-// log.Println(http.ListenAndServe("localhost:6060", nil))
-// }()
+// go func() {
+// log.Println(http.ListenAndServe("localhost:6060", nil))
+// }()
//
// If you are not using DefaultServeMux, you will have to register handlers
// with the mux you are using.
// For a study of the facility in action, visit
//
// https://blog.golang.org/2011/06/profiling-go-programs.html
-//
package pprof
import (
// Write writes an HTTP/1.1 request, which is the header and body, in wire format.
// This method consults the following fields of the request:
+//
// Host
// URL
// Method (defaults to "GET")
// into Punycode form, if necessary.
//
// Ideally we'd clean the Host header according to the spec:
-// https://tools.ietf.org/html/rfc7230#section-5.4 (Host = uri-host [ ":" port ]")
-// https://tools.ietf.org/html/rfc7230#section-2.7 (uri-host -> rfc3986's host)
-// https://tools.ietf.org/html/rfc3986#section-3.2.2 (definition of host)
+//
+// https://tools.ietf.org/html/rfc7230#section-5.4 (Host = uri-host [ ":" port ]")
+// https://tools.ietf.org/html/rfc7230#section-2.7 (uri-host -> rfc3986's host)
+// https://tools.ietf.org/html/rfc3986#section-3.2.2 (definition of host)
+//
// But practically, what we are trying to avoid is the situation in
// issue 11206, where a malformed Host header used in the proxy context
// would create a bad request. So it is enough to just truncate at the
}
// RFC 7234, section 5.4: Should treat
+//
// Pragma: no-cache
+//
// like
+//
// Cache-Control: no-cache
func fixPragmaCacheControl(header Header) {
if hp, ok := header["Pragma"]; ok && len(hp) > 0 && hp[0] == "no-cache" {
//
// This method consults the following fields of the response r:
//
-// StatusCode
-// ProtoMajor
-// ProtoMinor
-// Request.Method
-// TransferEncoding
-// Trailer
-// Body
-// ContentLength
-// Header, values for non-canonical keys will have unpredictable behavior
+// StatusCode
+// ProtoMajor
+// ProtoMinor
+// Request.Method
+// TransferEncoding
+// Trailer
+// Body
+// ContentLength
+// Header, values for non-canonical keys will have unpredictable behavior
//
// The Response Body is closed after it is sent.
func (r *Response) Write(w io.Writer) error {
// The client code runs in a subprocess.
//
// For use like:
-// $ go test -c
-// $ ./http.test -test.run=XX -test.bench=BenchmarkServer -test.benchtime=15s -test.cpuprofile=http.prof
-// $ go tool pprof http.test http.prof
-// (pprof) web
+//
+// $ go test -c
+// $ ./http.test -test.run=XX -test.bench=BenchmarkServer -test.benchtime=15s -test.cpuprofile=http.prof
+// $ go tool pprof http.test http.prof
+// (pprof) web
func BenchmarkServer(b *testing.B) {
b.ReportAllocs()
// Child process mode;
// prior to the headers being written. If the set of trailers is fixed
// or known before the header is written, the normal Go trailers mechanism
// is preferred:
-// https://pkg.go.dev/net/http#ResponseWriter
-// https://pkg.go.dev/net/http#example-ResponseWriter-Trailers
+//
+// https://pkg.go.dev/net/http#ResponseWriter
+// https://pkg.go.dev/net/http#example-ResponseWriter-Trailers
const TrailerPrefix = "Trailer:"
// finalTrailers is called after the Handler exits and returns a non-nil
// headers before the pipe is fed data), we need to be careful and bound how
// long we wait for it. This delay will only affect users if all the following
// are true:
-// * the request body blocks
-// * the content length is not set (or set to -1)
-// * the method doesn't usually have a body (GET, HEAD, DELETE, ...)
-// * there is no transfer-encoding=chunked already set.
+// - the request body blocks
+// - the content length is not set (or set to -1)
+// - the method doesn't usually have a body (GET, HEAD, DELETE, ...)
+// - there is no transfer-encoding=chunked already set.
+//
// In other words, this delay will not normally affect anybody, and there
// are workarounds if it does.
func (t *transferWriter) probeRequestBody() {
// - we reused a keep-alive connection
// - we haven't yet received any header data
// - either we wrote no bytes to the server, or the request is idempotent
+//
// This automatically prevents an infinite resend loop because we'll run out of
// the cached keep-alive connections eventually.
func TestRetryRequestsOnError(t *testing.T) {
// address family, both AF_INET and AF_INET6, and a wildcard address
// like the following:
//
-// - A listen for a wildcard communication domain, "tcp" or
-// "udp", with a wildcard address: If the platform supports
-// both IPv6 and IPv4-mapped IPv6 communication capabilities,
-// or does not support IPv4, we use a dual stack, AF_INET6 and
-// IPV6_V6ONLY=0, wildcard address listen. The dual stack
-// wildcard address listen may fall back to an IPv6-only,
-// AF_INET6 and IPV6_V6ONLY=1, wildcard address listen.
-// Otherwise we prefer an IPv4-only, AF_INET, wildcard address
-// listen.
+// - A listen for a wildcard communication domain, "tcp" or
+// "udp", with a wildcard address: If the platform supports
+// both IPv6 and IPv4-mapped IPv6 communication capabilities,
+// or does not support IPv4, we use a dual stack, AF_INET6 and
+// IPV6_V6ONLY=0, wildcard address listen. The dual stack
+// wildcard address listen may fall back to an IPv6-only,
+// AF_INET6 and IPV6_V6ONLY=1, wildcard address listen.
+// Otherwise we prefer an IPv4-only, AF_INET, wildcard address
+// listen.
//
-// - A listen for a wildcard communication domain, "tcp" or
-// "udp", with an IPv4 wildcard address: same as above.
+// - A listen for a wildcard communication domain, "tcp" or
+// "udp", with an IPv4 wildcard address: same as above.
//
-// - A listen for a wildcard communication domain, "tcp" or
-// "udp", with an IPv6 wildcard address: same as above.
+// - A listen for a wildcard communication domain, "tcp" or
+// "udp", with an IPv6 wildcard address: same as above.
//
-// - A listen for an IPv4 communication domain, "tcp4" or "udp4",
-// with an IPv4 wildcard address: We use an IPv4-only, AF_INET,
-// wildcard address listen.
+// - A listen for an IPv4 communication domain, "tcp4" or "udp4",
+// with an IPv4 wildcard address: We use an IPv4-only, AF_INET,
+// wildcard address listen.
//
-// - A listen for an IPv6 communication domain, "tcp6" or "udp6",
-// with an IPv6 wildcard address: We use an IPv6-only, AF_INET6
-// and IPV6_V6ONLY=1, wildcard address listen.
+// - A listen for an IPv6 communication domain, "tcp6" or "udp6",
+// with an IPv6 wildcard address: We use an IPv6-only, AF_INET6
+// and IPV6_V6ONLY=1, wildcard address listen.
//
// Otherwise guess: If the addresses are IPv4 then returns AF_INET,
// or else returns AF_INET6. It also returns a boolean value what
// ParseMAC parses s as an IEEE 802 MAC-48, EUI-48, EUI-64, or a 20-octet
// IP over InfiniBand link-layer address using one of the following formats:
+//
// 00:00:5e:00:53:01
// 02:00:5e:10:00:00:00:01
// 00:00:00:00:fe:80:00:00:00:00:00:00:02:00:5e:10:00:00:00:01
For the most part, this package follows the syntax as specified by RFC 5322 and
extended by RFC 6532.
Notable divergences:
- * Obsolete address formats are not parsed, including addresses with
- embedded route information.
- * The full range of spacing (the CFWS syntax element) is not supported,
- such as breaking addresses across lines.
- * No unicode normalization is performed.
- * The special characters ()[]:;@\, are allowed to appear unquoted in names.
+ - Obsolete address formats are not parsed, including addresses with
+ embedded route information.
+ - The full range of spacing (the CFWS syntax element) is not supported,
+ such as breaking addresses across lines.
+ - No unicode normalization is performed.
+ - The special characters ()[]:;@\, are allowed to appear unquoted in names.
*/
package mail
go handleConnection(conn)
}
-Name Resolution
+# Name Resolution
The method for resolving domain names, whether indirectly with functions like Dial
or directly with functions like LookupHost and LookupAddr, varies by operating system.
On Plan 9, the resolver always accesses /net/cs and /net/dns.
On Windows, the resolver always uses C library functions, such as GetAddrInfo and DnsQuery.
-
*/
package net
// and against which we measure optimized parsers.
//
// parseIPSlow understands the following forms of IP addresses:
-// - Regular IPv4: 1.2.3.4
-// - IPv4 with many leading zeros: 0000001.0000002.0000003.0000004
-// - Regular IPv6: 1111:2222:3333:4444:5555:6666:7777:8888
-// - IPv6 with many leading zeros: 00000001:0000002:0000003:0000004:0000005:0000006:0000007:0000008
-// - IPv6 with zero blocks elided: 1111:2222::7777:8888
-// - IPv6 with trailing 32 bits expressed as IPv4: 1111:2222:3333:4444:5555:6666:77.77.88.88
+// - Regular IPv4: 1.2.3.4
+// - IPv4 with many leading zeros: 0000001.0000002.0000003.0000004
+// - Regular IPv6: 1111:2222:3333:4444:5555:6666:7777:8888
+// - IPv6 with many leading zeros: 00000001:0000002:0000003:0000004:0000005:0000006:0000007:0000008
+// - IPv6 with zero blocks elided: 1111:2222::7777:8888
+// - IPv6 with trailing 32 bits expressed as IPv4: 1111:2222:3333:4444:5555:6666:77.77.88.88
//
// It does not process the following IP address forms, which have been
// varyingly accepted by some programs due to an under-specification
// of the shapes of IPv4 addresses:
//
-// - IPv4 as a single 32-bit uint: 4660 (same as "1.2.3.4")
-// - IPv4 with octal numbers: 0300.0250.0.01 (same as "192.168.0.1")
-// - IPv4 with hex numbers: 0xc0.0xa8.0x0.0x1 (same as "192.168.0.1")
-// - IPv4 in "class-B style": 1.2.52 (same as "1.2.3.4")
-// - IPv4 in "class-A style": 1.564 (same as "1.2.3.4")
+// - IPv4 as a single 32-bit uint: 4660 (same as "1.2.3.4")
+// - IPv4 with octal numbers: 0300.0250.0.01 (same as "192.168.0.1")
+// - IPv4 with hex numbers: 0xc0.0xa8.0x0.0x1 (same as "192.168.0.1")
+// - IPv4 in "class-B style": 1.2.52 (same as "1.2.3.4")
+// - IPv4 in "class-A style": 1.564 (same as "1.2.3.4")
func parseIPSlow(s string) (Addr, error) {
// Identify and strip out the zone, if any. There should be 0 or 1
// '%' in the string.
// function does not verify the contents of each field.
//
// This function performs two transformations:
-// - The last 32 bits of an IPv6 address may be represented in
-// IPv4-style dotted quad form, as in 1:2:3:4:5:6:7.8.9.10. That
-// address is transformed to its hex equivalent,
-// e.g. 1:2:3:4:5:6:708:90a.
-// - An address may contain one "::", which expands into as many
-// 16-bit blocks of zeros as needed to make the address its correct
-// full size. For example, fe80::1:2 expands to fe80:0:0:0:0:0:1:2.
+// - The last 32 bits of an IPv6 address may be represented in
+// IPv4-style dotted quad form, as in 1:2:3:4:5:6:7.8.9.10. That
+// address is transformed to its hex equivalent,
+// e.g. 1:2:3:4:5:6:708:90a.
+// - An address may contain one "::", which expands into as many
+// 16-bit blocks of zeros as needed to make the address its correct
+// full size. For example, fe80::1:2 expands to fe80:0:0:0:0:0:1:2.
//
// Both short forms may be present in a single address,
// e.g. fe80::1.2.3.4.
Only methods that satisfy these criteria will be made available for remote access;
other methods will be ignored:
- - the method's type is exported.
- - the method is exported.
- - the method has two arguments, both exported (or builtin) types.
- - the method's second argument is a pointer.
- - the method has return type error.
+ - the method's type is exported.
+ - the method is exported.
+ - the method has two arguments, both exported (or builtin) types.
+ - the method's second argument is a pointer.
+ - the method has return type error.
In effect, the method must look schematically like
// Register publishes in the server the set of methods of the
// receiver value that satisfy the following conditions:
-// - exported method of exported type
-// - two arguments, both of exported type
-// - the second argument is a pointer
-// - one return value, of type error
+// - exported method of exported type
+// - two arguments, both of exported type
+// - the second argument is a pointer
+// - one return value, of type error
+//
// It returns an error if the receiver is not an exported type or has
// no suitable methods. It also logs the error using package log.
// The client accesses each method using a string of the form "Type.Method",
// Package smtp implements the Simple Mail Transfer Protocol as defined in RFC 5321.
// It also implements the following extensions:
+//
// 8BITMIME RFC 1652
// AUTH RFC 2554
// STARTTLS RFC 3207
+//
// Additional extensions may be handled by clients.
//
// The smtp package is frozen and is not accepting new features.
// Some external packages provide more functionality. See:
//
-// https://godoc.org/?q=smtp
+// https://godoc.org/?q=smtp
package smtp
import (
// Linux stores the backlog as:
//
-// - uint16 in kernel version < 4.1,
-// - uint32 in kernel version >= 4.1
+// - uint16 in kernel version < 4.1,
+// - uint32 in kernel version >= 4.1
//
// Truncate number to avoid wrapping.
//
}
// ReadCodeLine reads a response code line of the form
+//
// code message
+//
// where code is a three-digit status code and the message
// extends to the rest of the line. An example of such a line is:
+//
// 220 plan9.bell-labs.com ESMTP
//
// If the prefix of the status does not match the digits in expectCode,
// See page 36 of RFC 959 (https://www.ietf.org/rfc/rfc959.txt) for
// details of another form of response accepted:
//
-// code-message line 1
-// message line 2
-// ...
-// code message line n
+// code-message line 1
+// message line 2
+// ...
+// code message line n
//
// If the prefix of the status does not match the digits in expectCode,
// ReadResponse returns with err set to &Error{code, message}.
// validHeaderFieldByte reports whether b is a valid byte in a header
// field name. RFC 7230 says:
-// header-field = field-name ":" OWS field-value OWS
-// field-name = token
-// tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
-// "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
-// token = 1*tchar
+//
+// header-field = field-name ":" OWS field-value OWS
+// field-name = token
+// tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
+// "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
+// token = 1*tchar
func validHeaderFieldByte(b byte) bool {
return int(b) < len(isTokenTable) && isTokenTable[b]
}
//
// Conn, a convenient packaging of Reader, Writer, and Pipeline for use
// with a single network connection.
-//
package textproto
import (
// To obtain the path, String uses u.EscapedPath().
//
// In the second form, the following rules apply:
-// - if u.Scheme is empty, scheme: is omitted.
-// - if u.User is nil, userinfo@ is omitted.
-// - if u.Host is empty, host/ is omitted.
-// - if u.Scheme and u.Host are empty and u.User is nil,
-// the entire scheme://userinfo@host/ is omitted.
-// - if u.Host is non-empty and u.Path begins with a /,
-// the form host/path does not add its own /.
-// - if u.RawQuery is empty, ?query is omitted.
-// - if u.Fragment is empty, #fragment is omitted.
+// - if u.Scheme is empty, scheme: is omitted.
+// - if u.User is nil, userinfo@ is omitted.
+// - if u.Host is empty, host/ is omitted.
+// - if u.Scheme and u.Host are empty and u.User is nil,
+// the entire scheme://userinfo@host/ is omitted.
+// - if u.Host is non-empty and u.Path begins with a /,
+// the form host/path does not add its own /.
+// - if u.RawQuery is empty, ?query is omitted.
+// - if u.Fragment is empty, #fragment is omitted.
func (u *URL) String() string {
var buf strings.Builder
if u.Scheme != "" {
// validUserinfo reports whether s is a valid userinfo string per RFC 3986
// Section 3.2.1:
-// userinfo = *( unreserved / pct-encoded / sub-delims / ":" )
-// unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
-// sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
-// / "*" / "+" / "," / ";" / "="
+//
+// userinfo = *( unreserved / pct-encoded / sub-delims / ":" )
+// unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
+// sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
+// / "*" / "+" / "," / ";" / "="
//
// It doesn't validate pct-encoded. The caller does that via func unescape.
func validUserinfo(s string) bool {
// Note: The maximum number of concurrent operations on a File may be limited by
// the OS or the system. The number should be high, but exceeding it may degrade
// performance or cause other issues.
-//
package os
import (
// DeviceIoControl(h, FSCTL_GET_REPARSE_POINT, ...)
// into paths acceptable by all Windows APIs.
// For example, it converts
-// \??\C:\foo\bar into C:\foo\bar
-// \??\UNC\foo\bar into \\foo\bar
-// \??\Volume{abc}\ into C:\
+//
+// \??\C:\foo\bar into C:\foo\bar
+// \??\UNC\foo\bar into \\foo\bar
+// \??\Volume{abc}\ into C:\
func normaliseLinkPath(path string) (string, error) {
if len(path) < 4 || path[:4] != `\??\` {
// unexpected path, return it as is
Signals are primarily used on Unix-like systems. For the use of this
package on Windows and Plan 9, see below.
-Types of signals
+# Types of signals
The signals SIGKILL and SIGSTOP may not be caught by a program, and
therefore cannot be affected by this package.
program to simply exit by pressing ^C, and you can cause it to exit
with a stack dump by pressing ^\.
-Default behavior of signals in Go programs
+# Default behavior of signals in Go programs
By default, a synchronous signal is converted into a run-time panic. A
SIGHUP, SIGINT, or SIGTERM signal causes the program to exit. A
started by os.Exec, or by the os/exec package, will inherit the
modified signal mask.
-Changing the behavior of signals in Go programs
+# Changing the behavior of signals in Go programs
The functions in this package allow a program to change the way Go
programs handle signals.
called for that signal, or Stop is called on all channels passed to
Notify for that signal, the signal will once again be blocked.
-SIGPIPE
+# SIGPIPE
When a Go program writes to a broken pipe, the kernel will raise a
SIGPIPE signal.
typical Unix command line programs, while other programs will not
crash with SIGPIPE when writing to a closed network connection.
-Go programs that use cgo or SWIG
+# Go programs that use cgo or SWIG
In a Go program that includes non-Go code, typically C/C++ code
accessed using cgo or SWIG, Go's startup code normally runs first. It
system handler. If the program does not exit, the Go handler then
reinstalls itself and continues execution of the program.
-Non-Go programs that call Go code
+# Non-Go programs that call Go code
When Go code is built with options like -buildmode=c-shared, it will
be run as part of an existing non-Go program. The non-Go code may
an existing non-Go signal handler, that handler will be installed
before raising the signal.
-Windows
+# Windows
On Windows a ^C (Control-C) or ^BREAK (Control-Break) normally cause
the program to exit. If Notify is called for os.Interrupt, ^C or ^BREAK
still get terminated unless it exits. But receiving syscall.SIGTERM will
give the process an opportunity to clean up before termination.
-Plan 9
+# Plan 9
On Plan 9, signals have type syscall.Note, which is a string. Calling
Notify with a syscall.Note will cause that value to be sent on the
channel when that string is posted as a note.
-
*/
package signal
// by purely lexical processing. It applies the following rules
// iteratively until no further processing can be done:
//
-// 1. Replace multiple Separator elements with a single one.
-// 2. Eliminate each . path name element (the current directory).
-// 3. Eliminate each inner .. path name element (the parent directory)
-// along with the non-.. element that precedes it.
-// 4. Eliminate .. elements that begin a rooted path:
-// that is, replace "/.." by "/" at the beginning of a path,
-// assuming Separator is '/'.
+// 1. Replace multiple Separator elements with a single one.
+// 2. Eliminate each . path name element (the current directory).
+// 3. Eliminate each inner .. path name element (the parent directory)
+// along with the non-.. element that precedes it.
+// 4. Eliminate .. elements that begin a rooted path:
+// that is, replace "/.." by "/" at the beginning of a path,
+// assuming Separator is '/'.
//
// The returned path ends in a slash only if it represents a root directory,
// such as "/" on Unix or `C:\` on Windows.
// (where c: is vol parameter) to discover "8dot3 name creation state".
// The state is combination of 2 flags. The global flag controls if it
// is per volume or global setting:
-// 0 - Enable 8dot3 name creation on all volumes on the system
-// 1 - Disable 8dot3 name creation on all volumes on the system
-// 2 - Set 8dot3 name creation on a per volume basis
-// 3 - Disable 8dot3 name creation on all volumes except the system volume
+//
+// 0 - Enable 8dot3 name creation on all volumes on the system
+// 1 - Disable 8dot3 name creation on all volumes on the system
+// 2 - Set 8dot3 name creation on a per volume basis
+// 3 - Disable 8dot3 name creation on all volumes except the system volume
+//
// If global flag is set to 2, then per-volume flag needs to be examined:
-// 0 - Enable 8dot3 name creation on this volume
-// 1 - Disable 8dot3 name creation on this volume
+//
+// 0 - Enable 8dot3 name creation on this volume
+// 1 - Disable 8dot3 name creation on this volume
+//
// checkVolume8dot3Setting verifies that "8dot3 name creation" flags
// are set to 2 and 0, if enabled parameter is true, or 2 and 1, if enabled
// is false. Otherwise checkVolume8dot3Setting returns error.
// toNorm returns the normalized path that is guaranteed to be unique.
// It should accept the following formats:
-// * UNC paths (e.g \\server\share\foo\bar)
-// * absolute paths (e.g C:\foo\bar)
-// * relative paths begin with drive letter (e.g C:foo\bar, C:..\foo\bar, C:.., C:.)
-// * relative paths begin with '\' (e.g \foo\bar)
-// * relative paths begin without '\' (e.g foo\bar, ..\foo\bar, .., .)
+// - UNC paths (e.g \\server\share\foo\bar)
+// - absolute paths (e.g C:\foo\bar)
+// - relative paths begin with drive letter (e.g C:foo\bar, C:..\foo\bar, C:.., C:.)
+// - relative paths begin with '\' (e.g \foo\bar)
+// - relative paths begin without '\' (e.g foo\bar, ..\foo\bar, .., .)
+//
// The returned normalized path will be in the same form (of 5 listed above) as the input path.
// If two paths A and B are indicating the same file with the same format, toNorm(A) should be equal to toNorm(B).
// The normBase parameter should be equal to the normBase func, except for in tests. See docs on the normBase func.
// by purely lexical processing. It applies the following rules
// iteratively until no further processing can be done:
//
-// 1. Replace multiple slashes with a single slash.
-// 2. Eliminate each . path name element (the current directory).
-// 3. Eliminate each inner .. path name element (the parent directory)
-// along with the non-.. element that precedes it.
-// 4. Eliminate .. elements that begin a rooted path:
-// that is, replace "/.." by "/" at the beginning of a path.
+// 1. Replace multiple slashes with a single slash.
+// 2. Eliminate each . path name element (the current directory).
+// 3. Eliminate each inner .. path name element (the parent directory)
+// along with the non-.. element that precedes it.
+// 4. Eliminate .. elements that begin a rooted path:
+// that is, replace "/.." by "/" at the beginning of a path.
//
// The returned path ends in a slash only if it is the root "/".
//
// that wraps the function fn. When called, that new function
// does the following:
//
-// - converts its arguments to a slice of Values.
-// - runs results := fn(args).
-// - returns the results as a slice of Values, one per formal result.
+// - converts its arguments to a slice of Values.
+// - runs results := fn(args).
+// - returns the results as a slice of Values, one per formal result.
//
// The implementation fn can assume that the argument Value slice
// has the number and type of arguments given by typ.
// available in the memory directly following the rtype value.
//
// tflag values must be kept in sync with copies in:
+//
// cmd/compile/internal/reflectdata/reflect.go
// cmd/link/internal/ld/decodesym.go
// runtime/type.go
// Interface returns v's current value as an interface{}.
// It is equivalent to:
+//
// var i interface{} = (v's underlying value)
+//
// It panics if the Value was obtained by accessing
// unexported struct fields.
func (v Value) Interface() (i any) {
// Example:
//
// iter := reflect.ValueOf(m).MapRange()
-// for iter.Next() {
+// for iter.Next() {
// k := iter.Key()
// v := iter.Value()
// ...
// More precisely, it is the syntax accepted by RE2 and described at
// https://golang.org/s/re2syntax, except for \C.
// For an overview of the syntax, run
-// go doc regexp/syntax
+//
+// go doc regexp/syntax
//
// The regexp implementation provided by this package is
// guaranteed to run in time linear in the size of the input.
// (This is a property not guaranteed by most open source
// implementations of regular expressions.) For more information
// about this property, see
+//
// https://swtch.com/~rsc/regexp/regexp1.html
+//
// or any book about automata theory.
//
// All characters are UTF-8-encoded code points.
// before returning.
//
// (There are a few other methods that do not match this pattern.)
-//
package regexp
import (
// that contains no metacharacters, it is equivalent to strings.SplitN.
//
// Example:
-// s := regexp.MustCompile("a*").Split("abaabaccadaaae", 5)
-// // s: ["", "b", "b", "c", "cadaaae"]
+//
+// s := regexp.MustCompile("a*").Split("abaabaccadaaae", 5)
+// // s: ["", "b", "b", "c", "cadaaae"]
//
// The count determines the number of substrings to return:
-// n > 0: at most n substrings; the last substring will be the unsplit remainder.
-// n == 0: the result is nil (zero substrings)
-// n < 0: all substrings
+//
+// n > 0: at most n substrings; the last substring will be the unsplit remainder.
+// n == 0: the result is nil (zero substrings)
+// n < 0: all substrings
func (re *Regexp) Split(s string, n int) []string {
if n == 0 {
parse trees into programs. Most clients of regular expressions will use the
facilities of package regexp (such as Compile and Match) instead of this package.
-Syntax
+# Syntax
The regular expression syntax understood by this package when parsing with the Perl flag is as follows.
Parts of the syntax can be disabled by passing alternate flags to Parse.
-
Single characters:
- . any character, possibly including newline (flag s=true)
- [xyz] character class
- [^xyz] negated character class
- \d Perl character class
- \D negated Perl character class
- [[:alpha:]] ASCII character class
- [[:^alpha:]] negated ASCII character class
- \pN Unicode character class (one-letter name)
- \p{Greek} Unicode character class
- \PN negated Unicode character class (one-letter name)
- \P{Greek} negated Unicode character class
+
+ . any character, possibly including newline (flag s=true)
+ [xyz] character class
+ [^xyz] negated character class
+ \d Perl character class
+ \D negated Perl character class
+ [[:alpha:]] ASCII character class
+ [[:^alpha:]] negated ASCII character class
+ \pN Unicode character class (one-letter name)
+ \p{Greek} Unicode character class
+ \PN negated Unicode character class (one-letter name)
+ \P{Greek} negated Unicode character class
Composites:
- xy x followed by y
- x|y x or y (prefer x)
+
+ xy x followed by y
+ x|y x or y (prefer x)
Repetitions:
- x* zero or more x, prefer more
- x+ one or more x, prefer more
- x? zero or one x, prefer one
- x{n,m} n or n+1 or ... or m x, prefer more
- x{n,} n or more x, prefer more
- x{n} exactly n x
- x*? zero or more x, prefer fewer
- x+? one or more x, prefer fewer
- x?? zero or one x, prefer zero
- x{n,m}? n or n+1 or ... or m x, prefer fewer
- x{n,}? n or more x, prefer fewer
- x{n}? exactly n x
+
+ x* zero or more x, prefer more
+ x+ one or more x, prefer more
+ x? zero or one x, prefer one
+ x{n,m} n or n+1 or ... or m x, prefer more
+ x{n,} n or more x, prefer more
+ x{n} exactly n x
+ x*? zero or more x, prefer fewer
+ x+? one or more x, prefer fewer
+ x?? zero or one x, prefer zero
+ x{n,m}? n or n+1 or ... or m x, prefer fewer
+ x{n,}? n or more x, prefer fewer
+ x{n}? exactly n x
Implementation restriction: The counting forms x{n,m}, x{n,}, and x{n}
reject forms that create a minimum or maximum repetition count above 1000.
Unlimited repetitions are not subject to this restriction.
Grouping:
- (re) numbered capturing group (submatch)
- (?P<name>re) named & numbered capturing group (submatch)
- (?:re) non-capturing group
- (?flags) set flags within current group; non-capturing
- (?flags:re) set flags during re; non-capturing
- Flag syntax is xyz (set) or -xyz (clear) or xy-z (set xy, clear z). The flags are:
+
+ (re) numbered capturing group (submatch)
+ (?P<name>re) named & numbered capturing group (submatch)
+ (?:re) non-capturing group
+ (?flags) set flags within current group; non-capturing
+ (?flags:re) set flags during re; non-capturing
+
+ Flag syntax is xyz (set) or -xyz (clear) or xy-z (set xy, clear z). The flags are:
- i case-insensitive (default false)
- m multi-line mode: ^ and $ match begin/end line in addition to begin/end text (default false)
- s let . match \n (default false)
- U ungreedy: swap meaning of x* and x*?, x+ and x+?, etc (default false)
+
+ i case-insensitive (default false)
+ m multi-line mode: ^ and $ match begin/end line in addition to begin/end text (default false)
+ s let . match \n (default false)
+ U ungreedy: swap meaning of x* and x*?, x+ and x+?, etc (default false)
Empty strings:
- ^ at beginning of text or line (flag m=true)
- $ at end of text (like \z not \Z) or line (flag m=true)
- \A at beginning of text
- \b at ASCII word boundary (\w on one side and \W, \A, or \z on the other)
- \B not at ASCII word boundary
- \z at end of text
+
+ ^ at beginning of text or line (flag m=true)
+ $ at end of text (like \z not \Z) or line (flag m=true)
+ \A at beginning of text
+ \b at ASCII word boundary (\w on one side and \W, \A, or \z on the other)
+ \B not at ASCII word boundary
+ \z at end of text
Escape sequences:
- \a bell (== \007)
- \f form feed (== \014)
- \t horizontal tab (== \011)
- \n newline (== \012)
- \r carriage return (== \015)
- \v vertical tab character (== \013)
- \* literal *, for any punctuation character *
- \123 octal character code (up to three digits)
- \x7F hex character code (exactly two digits)
- \x{10FFFF} hex character code
- \Q...\E literal text ... even if ... has punctuation
+
+ \a bell (== \007)
+ \f form feed (== \014)
+ \t horizontal tab (== \011)
+ \n newline (== \012)
+ \r carriage return (== \015)
+ \v vertical tab character (== \013)
+ \* literal *, for any punctuation character *
+ \123 octal character code (up to three digits)
+ \x7F hex character code (exactly two digits)
+ \x{10FFFF} hex character code
+ \Q...\E literal text ... even if ... has punctuation
Character class elements:
- x single character
- A-Z character range (inclusive)
- \d Perl character class
- [:foo:] ASCII character class foo
- \p{Foo} Unicode character class Foo
- \pF Unicode character class F (one-letter name)
+
+ x single character
+ A-Z character range (inclusive)
+ \d Perl character class
+ [:foo:] ASCII character class foo
+ \p{Foo} Unicode character class Foo
+ \pF Unicode character class F (one-letter name)
Named character classes as character class elements:
- [\d] digits (== \d)
- [^\d] not digits (== \D)
- [\D] not digits (== \D)
- [^\D] not not digits (== \d)
- [[:name:]] named ASCII class inside character class (== [:name:])
- [^[:name:]] named ASCII class inside negated character class (== [:^name:])
- [\p{Name}] named Unicode property inside character class (== \p{Name})
- [^\p{Name}] named Unicode property inside negated character class (== \P{Name})
+
+ [\d] digits (== \d)
+ [^\d] not digits (== \D)
+ [\D] not digits (== \D)
+ [^\D] not not digits (== \d)
+ [[:name:]] named ASCII class inside character class (== [:name:])
+ [^[:name:]] named ASCII class inside negated character class (== [:^name:])
+ [\p{Name}] named Unicode property inside character class (== \p{Name})
+ [^\p{Name}] named Unicode property inside negated character class (== \P{Name})
Perl character classes (all ASCII-only):
- \d digits (== [0-9])
- \D not digits (== [^0-9])
- \s whitespace (== [\t\n\f\r ])
- \S not whitespace (== [^\t\n\f\r ])
- \w word characters (== [0-9A-Za-z_])
- \W not word characters (== [^0-9A-Za-z_])
+
+ \d digits (== [0-9])
+ \D not digits (== [^0-9])
+ \s whitespace (== [\t\n\f\r ])
+ \S not whitespace (== [^\t\n\f\r ])
+ \w word characters (== [0-9A-Za-z_])
+ \W not word characters (== [^0-9A-Za-z_])
ASCII character classes:
- [[:alnum:]] alphanumeric (== [0-9A-Za-z])
- [[:alpha:]] alphabetic (== [A-Za-z])
- [[:ascii:]] ASCII (== [\x00-\x7F])
- [[:blank:]] blank (== [\t ])
- [[:cntrl:]] control (== [\x00-\x1F\x7F])
- [[:digit:]] digits (== [0-9])
- [[:graph:]] graphical (== [!-~] == [A-Za-z0-9!"#$%&'()*+,\-./:;<=>?@[\\\]^_`{|}~])
- [[:lower:]] lower case (== [a-z])
- [[:print:]] printable (== [ -~] == [ [:graph:]])
- [[:punct:]] punctuation (== [!-/:-@[-`{-~])
- [[:space:]] whitespace (== [\t\n\v\f\r ])
- [[:upper:]] upper case (== [A-Z])
- [[:word:]] word characters (== [0-9A-Za-z_])
- [[:xdigit:]] hex digit (== [0-9A-Fa-f])
+
+ [[:alnum:]] alphanumeric (== [0-9A-Za-z])
+ [[:alpha:]] alphabetic (== [A-Za-z])
+ [[:ascii:]] ASCII (== [\x00-\x7F])
+ [[:blank:]] blank (== [\t ])
+ [[:cntrl:]] control (== [\x00-\x1F\x7F])
+ [[:digit:]] digits (== [0-9])
+ [[:graph:]] graphical (== [!-~] == [A-Za-z0-9!"#$%&'()*+,\-./:;<=>?@[\\\]^_`{|}~])
+ [[:lower:]] lower case (== [a-z])
+ [[:print:]] printable (== [ -~] == [ [:graph:]])
+ [[:punct:]] punctuation (== [!-/:-@[-`{-~])
+ [[:space:]] whitespace (== [\t\n\v\f\r ])
+ [[:upper:]] upper case (== [A-Z])
+ [[:word:]] word characters (== [0-9A-Za-z_])
+ [[:xdigit:]] hex digit (== [0-9A-Fa-f])
Unicode character classes are those in unicode.Categories and unicode.Scripts.
*/
// frees (passes to p.reuse) any removed *Regexps.
//
// For example,
-// ABC|ABD|AEF|BCX|BCY
+//
+// ABC|ABD|AEF|BCX|BCY
+//
// simplifies by literal prefix extraction to
-// A(B(C|D)|EF)|BC(X|Y)
+//
+// A(B(C|D)|EF)|BC(X|Y)
+//
// which simplifies by character class introduction to
-// A(B[CD]|EF)|BC[XY]
+//
+// A(B[CD]|EF)|BC[XY]
func (p *parser) factor(sub []*Regexp) []*Regexp {
if len(sub) < 2 {
return sub
// recv processes a receive operation on a full channel c.
// There are 2 parts:
-// 1) The value sent by the sender sg is put into the channel
+// 1. The value sent by the sender sg is put into the channel
// and the sender is woken up to go on its merry way.
-// 2) The value received by the receiver (the current G) is
+// 2. The value received by the receiver (the current G) is
// written to ep.
+//
// For synchronous channels, both values are the same.
// For asynchronous channels, the receiver gets its data from
// the channel buffer and the sender's data is put in the
// This test checks that select acts on the state of the channels at one
// moment in the execution, not over a smeared time window.
// In the test, one goroutine does:
+//
// create c1, c2
// make c1 ready for receiving
// create second goroutine
// make c2 ready for receiving
// make c1 no longer ready for receiving (if possible)
+//
// The second goroutine does a non-blocking select receiving from c1 and c2.
// From the time the second goroutine is created, at least one of c1 and c2
// is always ready for receiving, so the select in the second goroutine must
// preemption points. To apply this to all preemption points in the
// runtime and runtime-like code, use the following in bash or zsh:
//
-// X=(-{gc,asm}flags={runtime/...,reflect,sync}=-d=maymorestack=runtime.mayMoreStackPreempt) GOFLAGS=${X[@]}
+// X=(-{gc,asm}flags={runtime/...,reflect,sync}=-d=maymorestack=runtime.mayMoreStackPreempt) GOFLAGS=${X[@]}
//
// This must be deeply nosplit because it is called from a function
// prologue before the stack is set up and because the compiler will
// Ideally it should also use very little stack because the linker
// doesn't currently account for this in nosplit stack depth checking.
//
-//go:nosplit
-//
// Ensure mayMoreStackPreempt can be called for all ABIs.
//
+//go:nosplit
//go:linkname mayMoreStackPreempt
func mayMoreStackPreempt() {
// Don't do anything on the g0 or gsignal stack.
// dramatic situations; SetPanicOnFault allows such programs to request
// that the runtime trigger only a panic, not a crash.
// The runtime.Error that the runtime panics with may have an additional method:
-// Addr() uintptr
+//
+// Addr() uintptr
+//
// If that method exists, it returns the memory address which triggered the fault.
// The results of Addr are best-effort and the veracity of the result
// may depend on the platform.
": missing method " + e.missingMethod
}
-//go:nosplit
// itoa converts val to a decimal representation. The result is
// written somewhere within buf and the location of the result is returned.
// buf must be at least 20 bytes.
+//
+//go:nosplit
func itoa(buf []byte, val uint64) []byte {
i := len(buf) - 1
for val >= 10 {
used by the reflect package; see reflect's documentation for the programmable
interface to the run-time type system.
-Environment Variables
+# Environment Variables
The following environment variables ($name or %name%, depending on the host
operating system) control the run-time behavior of Go programs. The meanings
GOTRACEBACK=none omits the goroutine stack traces entirely.
GOTRACEBACK=single (the default) behaves as described above.
GOTRACEBACK=all adds stack traces for all user-created goroutines.
-GOTRACEBACK=system is like ``all'' but adds stack frames for run-time functions
+GOTRACEBACK=system is like “all” but adds stack frames for run-time functions
and shows goroutines created internally by the run-time.
-GOTRACEBACK=crash is like ``system'' but crashes in an operating system-specific
+GOTRACEBACK=crash is like “system” but crashes in an operating system-specific
manner instead of exiting. For example, on Unix systems, the crash raises
SIGABRT to trigger a core dump.
For historical reasons, the GOTRACEBACK settings 0, 1, and 2 are synonyms for
// Abs returns the absolute value of x.
//
// Special cases are:
+//
// Abs(±Inf) = +Inf
// Abs(NaN) = NaN
func abs(x float64) float64 {
unlockWithRank(l)
}
-//go:nowritebarrier
// We might not be holding a p in this code.
+//
+//go:nowritebarrier
func unlock2(l *mutex) {
gp := getg()
var mp *m
}
// negative zero is a good test because:
-// 1) 0 and -0 are equal, yet have distinct representations.
-// 2) 0 is represented as all zeros, -0 isn't.
+// 1. 0 and -0 are equal, yet have distinct representations.
+// 2. 0 is represented as all zeros, -0 isn't.
+//
// I'm not sure the language spec actually requires this behavior,
// but it's what the current map implementation does.
func TestNegativeZero(t *testing.T) {
// subtract1 returns the byte pointer p-1.
//
-//go:nowritebarrier
-//
// nosplit because it is used during write barriers and must not be preempted.
//
+//go:nowritebarrier
//go:nosplit
func subtract1(p *byte) *byte {
// Note: wrote out full expression instead of calling subtractb(p, 1)
evolves, and also enables variation across Go implementations, whose relevant
metric sets may not intersect.
-Interface
+# Interface
Metrics are designated by a string key, rather than, for example, a field name in
a struct. The full list of supported metrics is always available in the slice of
is guaranteed not to change. If it must change, then a new metric will be introduced
with a new key and a new "kind."
-Metric key format
+# Metric key format
As mentioned earlier, metric keys are strings. Their format is simple and well-defined,
designed to be both human and machine readable. It is split into two components,
For more details on the precise definition of the metric key's path and unit formats, see
the documentation of the Name field of the Description struct.
-A note about floats
+# A note about floats
This package supports metrics whose values have a floating-point representation. In
order to improve ease-of-use, this package promises to never produce the following
classes of floating-point values: NaN, infinity.
-Supported metrics
+# Supported metrics
Below is the full list of supported metrics, ordered lexicographically.
// before the point in the program where KeepAlive is called.
//
// A very simplified example showing where KeepAlive is required:
-// type File struct { d int }
-// d, err := syscall.Open("/file/path", syscall.O_RDONLY, 0)
-// // ... do something if err != nil ...
-// p := &File{d}
-// runtime.SetFinalizer(p, func(p *File) { syscall.Close(p.d) })
-// var buf [10]byte
-// n, err := syscall.Read(p.d, buf[:])
-// // Ensure p is not finalized until Read returns.
-// runtime.KeepAlive(p)
-// // No more uses of p after this point.
+//
+// type File struct { d int }
+// d, err := syscall.Open("/file/path", syscall.O_RDONLY, 0)
+// // ... do something if err != nil ...
+// p := &File{d}
+// runtime.SetFinalizer(p, func(p *File) { syscall.Close(p.d) })
+// var buf [10]byte
+// n, err := syscall.Read(p.d, buf[:])
+// // Ensure p is not finalized until Read returns.
+// runtime.KeepAlive(p)
+// // No more uses of p after this point.
//
// Without the KeepAlive call, the finalizer could run at the start of
// syscall.Read, closing the file descriptor before syscall.Read makes
// This should be called when all local mark work has been drained and
// there are no remaining workers. Specifically, when
//
-// work.nwait == work.nproc && !gcMarkWorkAvailable(p)
+// work.nwait == work.nproc && !gcMarkWorkAvailable(p)
//
// The calling context must be preemptible.
//
//
// A gcWork can be used on the stack as follows:
//
-// (preemption must be disabled)
-// gcw := &getg().m.p.ptr().gcw
-// .. call gcw.put() to produce and gcw.tryGet() to consume ..
+// (preemption must be disabled)
+// gcw := &getg().m.p.ptr().gcw
+// .. call gcw.put() to produce and gcw.tryGet() to consume ..
//
// It's important that any use of gcWork during the mark phase prevent
// the garbage collector from transitioning to mark termination since
// mSpanManual, or mSpanFree. Transitions between these states are
// constrained as follows:
//
-// * A span may transition from free to in-use or manual during any GC
-// phase.
+// - A span may transition from free to in-use or manual during any GC
+// phase.
//
-// * During sweeping (gcphase == _GCoff), a span may transition from
-// in-use to free (as a result of sweeping) or manual to free (as a
-// result of stacks being freed).
+// - During sweeping (gcphase == _GCoff), a span may transition from
+// in-use to free (as a result of sweeping) or manual to free (as a
+// result of stacks being freed).
//
-// * During GC (gcphase != _GCoff), a span *must not* transition from
-// manual or in-use to free. Because concurrent GC may read a pointer
-// and then look up its span, the span state must be monotonic.
+// - During GC (gcphase != _GCoff), a span *must not* transition from
+// manual or in-use to free. Because concurrent GC may read a pointer
+// and then look up its span, the span state must be monotonic.
//
// Setting mspan.state to mSpanInUse or mSpanManual must be done
// atomically and only after all other span fields are valid.
//
// With levelShift, one can compute the index of the summary at level l related to a
// pointer p by doing:
-// p >> levelShift[l]
+//
+// p >> levelShift[l]
var levelShift = [summaryLevels]uint{
heapAddrBits - summaryL0Bits,
heapAddrBits - summaryL0Bits - 1*summaryLevelBits,
// putFast adds old and new to the write barrier buffer and returns
// false if a flush is necessary. Callers should use this as:
//
-// buf := &getg().m.p.ptr().wbBuf
-// if !buf.putFast(old, new) {
-// wbBufFlush(...)
-// }
-// ... actual memory write ...
+// buf := &getg().m.p.ptr().wbBuf
+// if !buf.putFast(old, new) {
+// wbBufFlush(...)
+// }
+// ... actual memory write ...
//
// The arguments to wbBufFlush depend on whether the caller is doing
// its own cgo pointer checks. If it is, then this can be
// pollDesc contains 2 binary semaphores, rg and wg, to park reader and writer
// goroutines respectively. The semaphore can be in the following states:
-// pdReady - io readiness notification is pending;
-// a goroutine consumes the notification by changing the state to nil.
-// pdWait - a goroutine prepares to park on the semaphore, but not yet parked;
-// the goroutine commits to park by changing the state to G pointer,
-// or, alternatively, concurrent io notification changes the state to pdReady,
-// or, alternatively, concurrent timeout/close changes the state to nil.
-// G pointer - the goroutine is blocked on the semaphore;
-// io notification or timeout/close changes the state to pdReady or nil respectively
-// and unparks the goroutine.
-// nil - none of the above.
+//
+// pdReady - io readiness notification is pending;
+// a goroutine consumes the notification by changing the state to nil.
+// pdWait - a goroutine prepares to park on the semaphore, but not yet parked;
+// the goroutine commits to park by changing the state to G pointer,
+// or, alternatively, concurrent io notification changes the state to pdReady,
+// or, alternatively, concurrent timeout/close changes the state to nil.
+// G pointer - the goroutine is blocked on the semaphore;
+// io notification or timeout/close changes the state to pdReady or nil respectively
+// and unparks the goroutine.
+// nil - none of the above.
const (
pdReady uintptr = 1
pdWait uintptr = 2
)
// Atomically,
+//
// if(*addr == val) sleep
+//
// Might be woken up spuriously; that's allowed.
// Don't sleep longer than ns; ns < 0 means forever.
//
// Package pprof writes runtime profiling data in the format expected
// by the pprof visualization tool.
//
-// Profiling a Go program
+// # Profiling a Go program
//
// The first step to profiling a Go program is to enable profiling.
// Support for profiling benchmarks built with the standard testing
// runs benchmarks in the current directory and writes the CPU and
// memory profiles to cpu.prof and mem.prof:
//
-// go test -cpuprofile cpu.prof -memprofile mem.prof -bench .
+// go test -cpuprofile cpu.prof -memprofile mem.prof -bench .
//
// To add equivalent profiling support to a standalone program, add
// code like the following to your main function:
//
-// var cpuprofile = flag.String("cpuprofile", "", "write cpu profile to `file`")
-// var memprofile = flag.String("memprofile", "", "write memory profile to `file`")
+// var cpuprofile = flag.String("cpuprofile", "", "write cpu profile to `file`")
+// var memprofile = flag.String("memprofile", "", "write memory profile to `file`")
//
-// func main() {
-// flag.Parse()
-// if *cpuprofile != "" {
-// f, err := os.Create(*cpuprofile)
-// if err != nil {
-// log.Fatal("could not create CPU profile: ", err)
-// }
-// defer f.Close() // error handling omitted for example
-// if err := pprof.StartCPUProfile(f); err != nil {
-// log.Fatal("could not start CPU profile: ", err)
-// }
-// defer pprof.StopCPUProfile()
-// }
+// func main() {
+// flag.Parse()
+// if *cpuprofile != "" {
+// f, err := os.Create(*cpuprofile)
+// if err != nil {
+// log.Fatal("could not create CPU profile: ", err)
+// }
+// defer f.Close() // error handling omitted for example
+// if err := pprof.StartCPUProfile(f); err != nil {
+// log.Fatal("could not start CPU profile: ", err)
+// }
+// defer pprof.StopCPUProfile()
+// }
//
-// // ... rest of the program ...
+// // ... rest of the program ...
//
-// if *memprofile != "" {
-// f, err := os.Create(*memprofile)
-// if err != nil {
-// log.Fatal("could not create memory profile: ", err)
-// }
-// defer f.Close() // error handling omitted for example
-// runtime.GC() // get up-to-date statistics
-// if err := pprof.WriteHeapProfile(f); err != nil {
-// log.Fatal("could not write memory profile: ", err)
-// }
-// }
-// }
+// if *memprofile != "" {
+// f, err := os.Create(*memprofile)
+// if err != nil {
+// log.Fatal("could not create memory profile: ", err)
+// }
+// defer f.Close() // error handling omitted for example
+// runtime.GC() // get up-to-date statistics
+// if err := pprof.WriteHeapProfile(f); err != nil {
+// log.Fatal("could not write memory profile: ", err)
+// }
+// }
+// }
//
// There is also a standard HTTP interface to profiling data. Adding
// the following line will install handlers under the /debug/pprof/
// URL to download live profiles:
//
-// import _ "net/http/pprof"
+// import _ "net/http/pprof"
//
// See the net/http/pprof package for more details.
//
// Profiles can then be visualized with the pprof tool:
//
-// go tool pprof cpu.prof
+// go tool pprof cpu.prof
//
// There are many commands available from the pprof command line.
// Commonly used commands include "top", which prints a summary of the
}
// symbolizeFlag keeps track of symbolization result.
-// 0 : no symbol lookup was performed
-// 1<<0 (lookupTried) : symbol lookup was performed
-// 1<<1 (lookupFailed): symbol lookup was performed but failed
+//
+// 0 : no symbol lookup was performed
+// 1<<0 (lookupTried) : symbol lookup was performed
+// 1<<1 (lookupFailed): symbol lookup was performed but failed
type symbolizeFlag uint8
const (
// and looking up debug info is not ideal, so we use a heuristic to filter
// the fake pcs and restore the inlined and entry functions. Inlined functions
// have the following properties:
-// Frame's Func is nil (note: also true for non-Go functions), and
-// Frame's Entry matches its entry function frame's Entry (note: could also be true for recursive calls and non-Go functions), and
-// Frame's Name does not match its entry function frame's name (note: inlined functions cannot be directly recursive).
+//
+// Frame's Func is nil (note: also true for non-Go functions), and
+// Frame's Entry matches its entry function frame's Entry (note: could also be true for recursive calls and non-Go functions), and
+// Frame's Name does not match its entry function frame's name (note: inlined functions cannot be directly recursive).
//
// As reading and processing the pcs in a stack trace one by one (from leaf to the root),
// we use pcDeck to temporarily hold the observed pcs and their expanded frames
// runSafePointFn runs the safe point function, if any, for this P.
// This should be called like
//
-// if getg().m.p.runSafePointFn != 0 {
-// runSafePointFn()
-// }
+// if getg().m.p.runSafePointFn != 0 {
+// runSafePointFn()
+// }
//
// runSafePointFn must be checked on any transition in to _Pidle or
// _Psyscall to avoid a race where forEachP sees that the P is running
//
// Thus, we get the following effects on timer-stealing in findrunnable:
//
-// * Idle Ps with no timers when they go idle are never checked in findrunnable
-// (for work- or timer-stealing; this is the ideal case).
-// * Running Ps must always be checked.
-// * Idle Ps whose timers are stolen must continue to be checked until they run
-// again, even after timer expiration.
+// - Idle Ps with no timers when they go idle are never checked in findrunnable
+// (for work- or timer-stealing; this is the ideal case).
+// - Running Ps must always be checked.
+// - Idle Ps whose timers are stolen must continue to be checked until they run
+// again, even after timer expiration.
//
// When the P starts running again, the mask should be set, as a timer may be
// added at any time.
// Global pool of spans that have free stacks.
// Stacks are assigned an order according to size.
-// order = log_2(size/FixedStack)
+//
+// order = log_2(size/FixedStack)
+//
// There is a free list for each order.
var stackpool [_NumStackOrders]struct {
item stackpoolItem
// and otherwise intrinsified by the compiler.
//
// Some internal compiler optimizations use this function.
-// - Used for m[T1{... Tn{..., string(k), ...} ...}] and m[string(k)]
-// where k is []byte, T1 to Tn is a nesting of struct and array literals.
-// - Used for "<"+string(b)+">" concatenation where b is []byte.
-// - Used for string(b)=="foo" comparison where b is []byte.
+// - Used for m[T1{... Tn{..., string(k), ...} ...}] and m[string(k)]
+// where k is []byte, T1 to Tn is a nesting of struct and array literals.
+// - Used for "<"+string(b)+">" concatenation where b is []byte.
+// - Used for string(b)=="foo" comparison where b is []byte.
func slicebytetostringtmp(ptr *byte, n int) (str string) {
if raceenabled && n > 0 {
racereadrangepc(unsafe.Pointer(ptr),
// Go obviously doesn't easily expose the problematic PCs to running programs,
// so this test is a bit fragile. Some details:
//
-// * tracebackFunc is our target function. We want to get a PC in the
-// alignment region following this function. This function also has other
-// functions inlined into it to ensure it has an InlTree (this was the source
-// of the bug in issue 44971).
+// - tracebackFunc is our target function. We want to get a PC in the
+// alignment region following this function. This function also has other
+// functions inlined into it to ensure it has an InlTree (this was the source
+// of the bug in issue 44971).
//
-// * We acquire a PC in tracebackFunc, walking forwards until FuncForPC says
-// we're in a new function. The last PC of the function according to FuncForPC
-// should be in the alignment region (assuming the function isn't already
-// perfectly aligned).
+// - We acquire a PC in tracebackFunc, walking forwards until FuncForPC says
+// we're in a new function. The last PC of the function according to FuncForPC
+// should be in the alignment region (assuming the function isn't already
+// perfectly aligned).
//
// This is a regression test for issue 44971.
func TestFunctionAlignmentTraceback(t *testing.T) {
}
func close_trampoline()
-//go:nosplit
-//go:cgo_unsafe_args
-//
// This is exported via linkname to assembly in runtime/cgo.
//
+//go:nosplit
+//go:cgo_unsafe_args
//go:linkname exit
func exit(code int32) {
libcCall(unsafe.Pointer(abi.FuncPCABI0(exit_trampoline)), unsafe.Pointer(&code))
// resetTimer resets an inactive timer, adding it to the heap.
//
-//go:linkname resetTimer time.resetTimer
// Reports whether the timer was modified before it was run.
+//
+//go:linkname resetTimer time.resetTimer
func resetTimer(t *timer, when int64) bool {
if raceenabled {
racerelease(unsafe.Pointer(t))
// If the end function is called multiple times, only the first
// call is used in the latency measurement.
//
-// ctx, task := trace.NewTask(ctx, "awesomeTask")
-// trace.WithRegion(ctx, "preparation", prepWork)
-// // preparation of the task
-// go func() { // continue processing the task in a separate goroutine.
-// defer task.End()
-// trace.WithRegion(ctx, "remainingWork", remainingWork)
-// }()
+// ctx, task := trace.NewTask(ctx, "awesomeTask")
+// trace.WithRegion(ctx, "preparation", prepWork)
+// // preparation of the task
+// go func() { // continue processing the task in a separate goroutine.
+// defer task.End()
+// trace.WithRegion(ctx, "remainingWork", remainingWork)
+// }()
func NewTask(pctx context.Context, taskType string) (ctx context.Context, task *Task) {
pid := fromContext(pctx).id
id := newID()
// after this region must be ended before this region can be ended.
// Recommended usage is
//
-// defer trace.StartRegion(ctx, "myTracedRegion").End()
+// defer trace.StartRegion(ctx, "myTracedRegion").End()
func StartRegion(ctx context.Context, regionType string) *Region {
if !IsEnabled() {
return noopRegion
// Package trace contains facilities for programs to generate traces
// for the Go execution tracer.
//
-// Tracing runtime activities
+// # Tracing runtime activities
//
// The execution trace captures a wide range of execution events such as
// goroutine creation/blocking/unblocking, syscall enter/exit/block,
// command runs the test in the current directory and writes the trace
// file (trace.out).
//
-// go test -trace=trace.out
+// go test -trace=trace.out
//
// This runtime/trace package provides APIs to add equivalent tracing
// support to a standalone program. See the Example that demonstrates
// following line will install a handler under the /debug/pprof/trace URL
// to download a live trace:
//
-// import _ "net/http/pprof"
+// import _ "net/http/pprof"
//
// See the net/http/pprof package for more details about all of the
// debug endpoints installed by this import.
//
-// User annotation
+// # User annotation
//
// Package trace provides user annotation APIs that can be used to
// log interesting events during execution.
// trace to trace the durations of sequential steps in a cappuccino making
// operation.
//
-// trace.WithRegion(ctx, "makeCappuccino", func() {
+// trace.WithRegion(ctx, "makeCappuccino", func() {
//
-// // orderID allows to identify a specific order
-// // among many cappuccino order region records.
-// trace.Log(ctx, "orderID", orderID)
+// // orderID allows to identify a specific order
+// // among many cappuccino order region records.
+// trace.Log(ctx, "orderID", orderID)
//
-// trace.WithRegion(ctx, "steamMilk", steamMilk)
-// trace.WithRegion(ctx, "extractCoffee", extractCoffee)
-// trace.WithRegion(ctx, "mixMilkCoffee", mixMilkCoffee)
-// })
+// trace.WithRegion(ctx, "steamMilk", steamMilk)
+// trace.WithRegion(ctx, "extractCoffee", extractCoffee)
+// trace.WithRegion(ctx, "mixMilkCoffee", mixMilkCoffee)
+// })
//
// A task is a higher-level component that aids tracing of logical
// operations such as an RPC request, an HTTP request, or an
// the trace tool can identify the goroutines involved in a specific
// cappuccino order.
//
-// ctx, task := trace.NewTask(ctx, "makeCappuccino")
-// trace.Log(ctx, "orderID", orderID)
-//
-// milk := make(chan bool)
-// espresso := make(chan bool)
-//
-// go func() {
-// trace.WithRegion(ctx, "steamMilk", steamMilk)
-// milk <- true
-// }()
-// go func() {
-// trace.WithRegion(ctx, "extractCoffee", extractCoffee)
-// espresso <- true
-// }()
-// go func() {
-// defer task.End() // When assemble is done, the order is complete.
-// <-espresso
-// <-milk
-// trace.WithRegion(ctx, "mixMilkCoffee", mixMilkCoffee)
-// }()
-//
+// ctx, task := trace.NewTask(ctx, "makeCappuccino")
+// trace.Log(ctx, "orderID", orderID)
+//
+// milk := make(chan bool)
+// espresso := make(chan bool)
+//
+// go func() {
+// trace.WithRegion(ctx, "steamMilk", steamMilk)
+// milk <- true
+// }()
+// go func() {
+// trace.WithRegion(ctx, "extractCoffee", extractCoffee)
+// espresso <- true
+// }()
+// go func() {
+// defer task.End() // When assemble is done, the order is complete.
+// <-espresso
+// <-milk
+// trace.WithRegion(ctx, "mixMilkCoffee", mixMilkCoffee)
+// }()
//
// The trace tool computes the latency of a task by measuring the
// time between the task creation and the task end and provides
// tflag is documented in reflect/type.go.
//
// tflag values must be kept in sync with copies in:
+//
// cmd/compile/internal/reflectdata/reflect.go
// cmd/link/internal/ld/decodesym.go
// reflect/type.go
// Floating point control word values.
// Bits 0-5 are bits to disable floating-point exceptions.
// Bits 8-9 are the precision control:
-// 0 = single precision a.k.a. float32
-// 2 = double precision a.k.a. float64
+//
+// 0 = single precision a.k.a. float32
+// 2 = double precision a.k.a. float64
+//
// Bits 10-11 are the rounding mode:
-// 0 = round to nearest (even on a tie)
-// 3 = round toward zero
+//
+// 0 = round to nearest (even on a tie)
+// 3 = round toward zero
var (
controlWord64 uint16 = 0x3f + 2<<8 + 0<<10
controlWord64trunc uint16 = 0x3f + 2<<8 + 3<<10
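The bit layout above can be checked with plain integer arithmetic; a small sketch (assumption-free, just recomputing the two constants):

```go
package main

import "fmt"

func main() {
	// Bits 0-5 mask all six floating-point exceptions.
	const maskAll = 0x3f
	// Bits 8-9: precision control (2 = double precision, i.e. float64).
	const double = 2 << 8
	// Bits 10-11: rounding mode (0 = round to nearest, 3 = round toward zero).
	const nearest, trunc = 0 << 10, 3 << 10

	fmt.Printf("%#x\n", maskAll|double|nearest) // 0x23f (controlWord64)
	fmt.Printf("%#x\n", maskAll|double|trunc)   // 0xe3f (controlWord64trunc)
}
```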
// If possible to convert decimal representation to 64-bit float f exactly,
// entirely in floating-point math, do so, avoiding the expense of decimalToFloatBits.
// Three common cases:
+//
// value is exact integer
// value is exact integer * exact power of ten
// value is exact integer / exact power of ten
+//
// These all produce potentially inexact but correctly rounded answers.
func atof64exact(mantissa uint64, exp int, neg bool) (f float64, ok bool) {
if mantissa>>float64info.mantbits != 0 {
// Package strconv implements conversions to and from string representations
// of basic data types.
//
-// Numeric Conversions
+// # Numeric Conversions
//
// The most common numeric conversions are Atoi (string to int) and Itoa (int to string).
//
// AppendBool, AppendFloat, AppendInt, and AppendUint are similar but
// append the formatted value to a destination slice.
//
-// String Conversions
+// # String Conversions
//
// Quote and QuoteToASCII convert strings to quoted Go string literals.
// The latter guarantees that the result is an ASCII string, by escaping
// return quoted Go rune literals.
//
// Unquote and UnquoteChar unquote Go string and rune literals.
-//
package strconv
// detailedPowersOfTen contains 128-bit mantissa approximations (rounded down)
// to the powers of 10. For example:
//
-// - 1e43 ≈ (0xE596B7B0_C643C719 * (2 ** 79))
-// - 1e43 = (0xE596B7B0_C643C719_6D9CCD05_D0000000 * (2 ** 15))
+// - 1e43 ≈ (0xE596B7B0_C643C719 * (2 ** 79))
+// - 1e43 = (0xE596B7B0_C643C719_6D9CCD05_D0000000 * (2 ** 15))
//
// The mantissas are explicitly listed. The exponents are implied by a linear
// expression with slope 217706.0/65536.0 ≈ log(10)/log(2).
// The returned boolean is true if all trimmed bits were zero.
//
// That is:
-// m*2^e2 * round(10^q) = resM * 2^resE + ε
-// exact = ε == 0
+//
+// m*2^e2 * round(10^q) = resM * 2^resE + ε
+// exact = ε == 0
func mult64bitPow10(m uint32, e2, q int) (resM uint32, resE int, exact bool) {
if q == 0 {
// P == 1<<63
// The returned boolean is true if all trimmed bits were zero.
//
// That is:
-// m*2^e2 * round(10^q) = resM * 2^resE + ε
-// exact = ε == 0
+//
+// m*2^e2 * round(10^q) = resM * 2^resE + ε
+// exact = ε == 0
func mult128bitPow10(m uint64, e2, q int) (resM uint64, resE int, exact bool) {
if q == 0 {
// P == 1<<127
// or character literal represented by the string s.
// It returns four values:
//
-// 1) value, the decoded Unicode code point or byte value;
-// 2) multibyte, a boolean indicating whether the decoded character requires a multibyte UTF-8 representation;
-// 3) tail, the remainder of the string after the character; and
-// 4) an error that will be nil if the character is syntactically valid.
+// 1. value, the decoded Unicode code point or byte value;
+// 2. multibyte, a boolean indicating whether the decoded character requires a multibyte UTF-8 representation;
+// 3. tail, the remainder of the string after the character; and
+// 4. an error that will be nil if the character is syntactically valid.
//
// The second argument, quote, specifies the type of literal being parsed
// and therefore which escaped quote character is permitted.
// and values may be empty. For example, the trie containing keys "ax", "ay",
// "bcbc", "x" and "xy" could have eight nodes:
//
-// n0 -
-// n1 a-
-// n2 .x+
-// n3 .y+
-// n4 b-
-// n5 .cbc+
-// n6 x+
-// n7 .y+
+// n0 -
+// n1 a-
+// n2 .x+
+// n3 .y+
+// n4 b-
+// n5 .cbc+
+// n6 x+
+// n7 .y+
//
// n0 is the root node, and its children are n1, n4 and n6; n1's children are
// n2 and n3; n4's child is n5; n6's child is n7. Nodes n0, n1 and n4 (marked
// the substrings between those separators.
//
// The count determines the number of substrings to return:
-// n > 0: at most n substrings; the last substring will be the unsplit remainder.
-// n == 0: the result is nil (zero substrings)
-// n < 0: all substrings
+//
+// n > 0: at most n substrings; the last substring will be the unsplit remainder.
+// n == 0: the result is nil (zero substrings)
+// n < 0: all substrings
//
// Edge cases for s and sep (for example, empty strings) are handled
// as described in the documentation for Split.
// returns a slice of those substrings.
//
// The count determines the number of substrings to return:
-// n > 0: at most n substrings; the last substring will be the unsplit remainder.
-// n == 0: the result is nil (zero substrings)
-// n < 0: all substrings
+//
+// n > 0: at most n substrings; the last substring will be the unsplit remainder.
+// n == 0: the result is nil (zero substrings)
+// n < 0: all substrings
//
// Edge cases for s and sep (for example, empty strings) are handled
// as described in the documentation for SplitAfter.
StoreUintptr(addr, new)
}
-//go:nocheckptr
// This code is just testing that LoadPointer/StorePointer operate
// atomically; it's not actually calculating pointers.
+//
+//go:nocheckptr
func hammerStoreLoadPointer(t *testing.T, paddr unsafe.Pointer) {
addr := (*unsafe.Pointer)(paddr)
v := uintptr(LoadPointer(addr))
// typically cannot assume that the condition is true when
// Wait returns. Instead, the caller should Wait in a loop:
//
-// c.L.Lock()
-// for !condition() {
-// c.Wait()
-// }
-// ... make use of condition ...
-// c.L.Unlock()
+// c.L.Lock()
+// for !condition() {
+// c.Wait()
+// }
+// ... make use of condition ...
+// c.L.Unlock()
func (c *Cond) Wait() {
c.checker.check()
t := runtime_notifyListAdd(&c.notify)
// Do calls the function f if and only if Do is being called for the
// first time for this instance of Once. In other words, given
-// var once Once
+//
+// var once Once
+//
// if once.Do(f) is called multiple times, only the first call will invoke f,
// even if f has a different value in each invocation. A new instance of
// Once is required for each function to execute.
// Do is intended for initialization that must be run exactly once. Since f
// is niladic, it may be necessary to use a function literal to capture the
// arguments to a function to be invoked by Do:
-// config.once.Do(func() { config.init(filename) })
+//
+// config.once.Do(func() { config.init(filename) })
//
// Because no call to Do returns until the one call to f returns, if f causes
// Do to be called, it will deadlock.
// in https://msdn.microsoft.com/en-us/library/ms880421.
// This function returns "" (2 double quotes) if s is empty.
// Alternatively, these transformations are done:
-// - every back slash (\) is doubled, but only if immediately
-// followed by double quote (");
-// - every double quote (") is escaped by back slash (\);
-// - finally, s is wrapped with double quotes (arg -> "arg"),
-// but only if there is space or tab inside s.
+// - every backslash (\) is doubled, but only if immediately
+// followed by double quote (");
+// - every double quote (") is escaped by backslash (\);
+// - finally, s is wrapped with double quotes (arg -> "arg"),
+// but only if there is space or tab inside s.
func EscapeArg(s string) string {
if len(s) == 0 {
return `""`
// ValueOf returns x as a JavaScript value:
//
-// | Go | JavaScript |
-// | ---------------------- | ---------------------- |
-// | js.Value | [its value] |
-// | js.Func | function |
-// | nil | null |
-// | bool | boolean |
-// | integers and floats | number |
-// | string | string |
-// | []interface{} | new array |
-// | map[string]interface{} | new object |
+// | Go | JavaScript |
+// | ---------------------- | ---------------------- |
+// | js.Value | [its value] |
+// | js.Func | function |
+// | nil | null |
+// | bool | boolean |
+// | integers and floats | number |
+// | string | string |
+// | []interface{} | new array |
+// | map[string]interface{} | new object |
//
// Panics if x is not one of the expected types.
func ValueOf(x any) Value {
// That is also where updates required by new systems or versions
// should be applied. See https://golang.org/s/go1.4-syscall for more
// information.
-//
package syscall
import "internal/bytealg"
// An Errno is an unsigned number describing an error condition.
// It implements the error interface. The zero Errno is by convention
// a non-error, so code to convert from Errno to error should use:
+//
// err = nil
// if errno != 0 {
// err = errno
// An Errno is an unsigned number describing an error condition.
// It implements the error interface. The zero Errno is by convention
// a non-error, so code to convert from Errno to error should use:
+//
// err = nil
// if errno != 0 {
// err = errno
// whose remaining arguments are the types to be fuzzed.
// For example:
//
-// f.Fuzz(func(t *testing.T, b []byte, i int) { ... })
+// f.Fuzz(func(t *testing.T, b []byte, i int) { ... })
//
// The following types are allowed: []byte, string, bool, byte, rune, float32,
// float64, int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64.
// Check returns that input as a *CheckError.
// For example:
//
-// func TestOddMultipleOfThree(t *testing.T) {
-// f := func(x int) bool {
-// y := OddMultipleOfThree(x)
-// return y%2 == 1 && y%3 == 0
-// }
-// if err := quick.Check(f, nil); err != nil {
-// t.Error(err)
-// }
-// }
+// func TestOddMultipleOfThree(t *testing.T) {
+// f := func(x int) bool {
+// y := OddMultipleOfThree(x)
+// return y%2 == 1 && y%3 == 0
+// }
+// if err := quick.Check(f, nil); err != nil {
+// t.Error(err)
+// }
+// }
func Check(f any, config *Config) error {
if config == nil {
config = &defaultConfig
// Package testing provides support for automated testing of Go packages.
// It is intended to be used in concert with the "go test" command, which automates
// execution of any function of the form
-// func TestXxx(*testing.T)
+//
+// func TestXxx(*testing.T)
+//
// where Xxx does not start with a lowercase letter. The function name
// serves to identify the test routine.
//
//
// A simple test function looks like this:
//
-// func TestAbs(t *testing.T) {
-// got := Abs(-1)
-// if got != 1 {
-// t.Errorf("Abs(-1) = %d; want 1", got)
-// }
-// }
+// func TestAbs(t *testing.T) {
+// got := Abs(-1)
+// if got != 1 {
+// t.Errorf("Abs(-1) = %d; want 1", got)
+// }
+// }
//
-// Benchmarks
+// # Benchmarks
//
// Functions of the form
-// func BenchmarkXxx(*testing.B)
+//
+// func BenchmarkXxx(*testing.B)
+//
// are considered benchmarks, and are executed by the "go test" command when
// its -bench flag is provided. Benchmarks are run sequentially.
//
// https://golang.org/cmd/go/#hdr-Testing_flags.
//
// A sample benchmark function looks like this:
-// func BenchmarkRandInt(b *testing.B) {
-// for i := 0; i < b.N; i++ {
-// rand.Int()
-// }
-// }
+//
+// func BenchmarkRandInt(b *testing.B) {
+// for i := 0; i < b.N; i++ {
+// rand.Int()
+// }
+// }
//
// The benchmark function must run the target code b.N times.
// During benchmark execution, b.N is adjusted until the benchmark function lasts
// long enough to be timed reliably. The output
-// BenchmarkRandInt-8 68453040 17.8 ns/op
+//
+// BenchmarkRandInt-8 68453040 17.8 ns/op
+//
// means that the loop ran 68453040 times at a speed of 17.8 ns per loop.
//
// If a benchmark needs some expensive setup before running, the timer
// may be reset:
//
-// func BenchmarkBigLen(b *testing.B) {
-// big := NewBig()
-// b.ResetTimer()
-// for i := 0; i < b.N; i++ {
-// big.Len()
-// }
-// }
+// func BenchmarkBigLen(b *testing.B) {
+// big := NewBig()
+// b.ResetTimer()
+// for i := 0; i < b.N; i++ {
+// big.Len()
+// }
+// }
//
// If a benchmark needs to test performance in a parallel setting, it may use
// the RunParallel helper function; such benchmarks are intended to be used with
// the go test -cpu flag:
//
-// func BenchmarkTemplateParallel(b *testing.B) {
-// templ := template.Must(template.New("test").Parse("Hello, {{.}}!"))
-// b.RunParallel(func(pb *testing.PB) {
-// var buf bytes.Buffer
-// for pb.Next() {
-// buf.Reset()
-// templ.Execute(&buf, "World")
-// }
-// })
-// }
+// func BenchmarkTemplateParallel(b *testing.B) {
+// templ := template.Must(template.New("test").Parse("Hello, {{.}}!"))
+// b.RunParallel(func(pb *testing.PB) {
+// var buf bytes.Buffer
+// for pb.Next() {
+// buf.Reset()
+// templ.Execute(&buf, "World")
+// }
+// })
+// }
//
// A detailed specification of the benchmark results format is given
// in https://golang.org/design/14313-benchmark-format.
// In particular, https://golang.org/x/perf/cmd/benchstat performs
// statistically robust A/B comparisons.
//
-// Examples
+// # Examples
//
// The package also runs and verifies example code. Example functions may
// include a concluding line comment that begins with "Output:" and is compared with
// the standard output of the function when the tests are run. (The comparison
// ignores leading and trailing space.) These are examples of an example:
//
-// func ExampleHello() {
-// fmt.Println("hello")
-// // Output: hello
-// }
+// func ExampleHello() {
+// fmt.Println("hello")
+// // Output: hello
+// }
//
-// func ExampleSalutations() {
-// fmt.Println("hello, and")
-// fmt.Println("goodbye")
-// // Output:
-// // hello, and
-// // goodbye
-// }
+// func ExampleSalutations() {
+// fmt.Println("hello, and")
+// fmt.Println("goodbye")
+// // Output:
+// // hello, and
+// // goodbye
+// }
//
// The comment prefix "Unordered output:" is like "Output:", but matches any
// line order:
//
-// func ExamplePerm() {
-// for _, value := range Perm(5) {
-// fmt.Println(value)
-// }
-// // Unordered output: 4
-// // 2
-// // 1
-// // 3
-// // 0
-// }
+// func ExamplePerm() {
+// for _, value := range Perm(5) {
+// fmt.Println(value)
+// }
+// // Unordered output: 4
+// // 2
+// // 1
+// // 3
+// // 0
+// }
//
// Example functions without output comments are compiled but not executed.
//
// The naming convention to declare examples for the package, a function F, a type T and
// method M on type T are:
//
-// func Example() { ... }
-// func ExampleF() { ... }
-// func ExampleT() { ... }
-// func ExampleT_M() { ... }
+// func Example() { ... }
+// func ExampleF() { ... }
+// func ExampleT() { ... }
+// func ExampleT_M() { ... }
//
// Multiple example functions for a package/type/function/method may be provided by
// appending a distinct suffix to the name. The suffix must start with a
// lower-case letter.
//
-// func Example_suffix() { ... }
-// func ExampleF_suffix() { ... }
-// func ExampleT_suffix() { ... }
-// func ExampleT_M_suffix() { ... }
+// func Example_suffix() { ... }
+// func ExampleF_suffix() { ... }
+// func ExampleT_suffix() { ... }
+// func ExampleT_M_suffix() { ... }
//
// The entire test file is presented as the example when it contains a single
// example function, at least one other function, type, variable, or constant
// declaration, and no test or benchmark functions.
//
-// Fuzzing
+// # Fuzzing
//
// 'go test' and the testing package support fuzzing, a testing technique where
// a function is called with randomly generated inputs to find bugs not
// anticipated by unit tests.
//
// Functions of the form
-// func FuzzXxx(*testing.F)
+//
+// func FuzzXxx(*testing.F)
+//
// are considered fuzz tests.
//
// For example:
//
-// func FuzzHex(f *testing.F) {
-// for _, seed := range [][]byte{{}, {0}, {9}, {0xa}, {0xf}, {1, 2, 3, 4}} {
-// f.Add(seed)
-// }
-// f.Fuzz(func(t *testing.T, in []byte) {
-// enc := hex.EncodeToString(in)
-// out, err := hex.DecodeString(enc)
-// if err != nil {
-// t.Fatalf("%v: decode: %v", in, err)
-// }
-// if !bytes.Equal(in, out) {
-// t.Fatalf("%v: not equal after round trip: %v", in, out)
-// }
-// })
-// }
+// func FuzzHex(f *testing.F) {
+// for _, seed := range [][]byte{{}, {0}, {9}, {0xa}, {0xf}, {1, 2, 3, 4}} {
+// f.Add(seed)
+// }
+// f.Fuzz(func(t *testing.T, in []byte) {
+// enc := hex.EncodeToString(in)
+// out, err := hex.DecodeString(enc)
+// if err != nil {
+// t.Fatalf("%v: decode: %v", in, err)
+// }
+// if !bytes.Equal(in, out) {
+// t.Fatalf("%v: not equal after round trip: %v", in, out)
+// }
+// })
+// }
//
// A fuzz test maintains a seed corpus, or a set of inputs which are run by
// default, and can seed input generation. Seed inputs may be registered by
//
// See https://go.dev/doc/fuzz for documentation about fuzzing.
//
-// Skipping
+// # Skipping
//
// Tests or benchmarks may be skipped at run time with a call to
// the Skip method of *T or *B:
//
-// func TestTimeConsuming(t *testing.T) {
-// if testing.Short() {
-// t.Skip("skipping test in short mode.")
-// }
-// ...
-// }
+// func TestTimeConsuming(t *testing.T) {
+// if testing.Short() {
+// t.Skip("skipping test in short mode.")
+// }
+// ...
+// }
//
// The Skip method of *T can be used in a fuzz target if the input is invalid,
// but should not be considered a failing input. For example:
//
-// func FuzzJSONMarshaling(f *testing.F) {
-// f.Fuzz(func(t *testing.T, b []byte) {
-// var v interface{}
-// if err := json.Unmarshal(b, &v); err != nil {
-// t.Skip()
-// }
-// if _, err := json.Marshal(v); err != nil {
-// t.Error("Marshal: %v", err)
-// }
-// })
-// }
+// func FuzzJSONMarshaling(f *testing.F) {
+// f.Fuzz(func(t *testing.T, b []byte) {
+// var v interface{}
+// if err := json.Unmarshal(b, &v); err != nil {
+// t.Skip()
+// }
+// if _, err := json.Marshal(v); err != nil {
+// t.Errorf("Marshal: %v", err)
+// }
+// })
+// }
//
-// Subtests and Sub-benchmarks
+// # Subtests and Sub-benchmarks
//
// The Run methods of T and B allow defining subtests and sub-benchmarks,
// without having to define separate functions for each. This enables uses
// like table-driven benchmarks and creating hierarchical tests.
// It also provides a way to share common setup and tear-down code:
//
-// func TestFoo(t *testing.T) {
-// // <setup code>
-// t.Run("A=1", func(t *testing.T) { ... })
-// t.Run("A=2", func(t *testing.T) { ... })
-// t.Run("B=1", func(t *testing.T) { ... })
-// // <tear-down code>
-// }
+// func TestFoo(t *testing.T) {
+// // <setup code>
+// t.Run("A=1", func(t *testing.T) { ... })
+// t.Run("A=2", func(t *testing.T) { ... })
+// t.Run("B=1", func(t *testing.T) { ... })
+// // <tear-down code>
+// }
//
// Each subtest and sub-benchmark has a unique name: the combination of the name
// of the top-level test and the sequence of names passed to Run, separated by
// empty expression matches any string.
// For example, using "matching" to mean "whose name contains":
//
-// go test -run '' # Run all tests.
-// go test -run Foo # Run top-level tests matching "Foo", such as "TestFooBar".
-// go test -run Foo/A= # For top-level tests matching "Foo", run subtests matching "A=".
-// go test -run /A=1 # For all top-level tests, run subtests matching "A=1".
-// go test -fuzz FuzzFoo # Fuzz the target matching "FuzzFoo"
+// go test -run '' # Run all tests.
+// go test -run Foo # Run top-level tests matching "Foo", such as "TestFooBar".
+// go test -run Foo/A= # For top-level tests matching "Foo", run subtests matching "A=".
+// go test -run /A=1 # For all top-level tests, run subtests matching "A=1".
+// go test -fuzz FuzzFoo # Fuzz the target matching "FuzzFoo"
//
// The -run argument can also be used to run a specific value in the seed
// corpus, for debugging. For example:
-// go test -run=FuzzFoo/9ddb952d9814
+//
+// go test -run=FuzzFoo/9ddb952d9814
//
// The -fuzz and -run flags can both be set, in order to fuzz a target but
// skip the execution of all other tests.
// run in parallel with each other, and only with each other, regardless of
// other top-level tests that may be defined:
//
-// func TestGroupedParallel(t *testing.T) {
-// for _, tc := range tests {
-// tc := tc // capture range variable
-// t.Run(tc.Name, func(t *testing.T) {
-// t.Parallel()
-// ...
-// })
-// }
-// }
+// func TestGroupedParallel(t *testing.T) {
+// for _, tc := range tests {
+// tc := tc // capture range variable
+// t.Run(tc.Name, func(t *testing.T) {
+// t.Parallel()
+// ...
+// })
+// }
+// }
//
// The race detector kills the program if it exceeds 8128 concurrent goroutines,
// so use care when running parallel tests with the -race flag set.
// Run does not return until parallel subtests have completed, providing a way
// to clean up after a group of parallel tests:
//
-// func TestTeardownParallel(t *testing.T) {
-// // This Run will not return until the parallel tests finish.
-// t.Run("group", func(t *testing.T) {
-// t.Run("Test1", parallelTest1)
-// t.Run("Test2", parallelTest2)
-// t.Run("Test3", parallelTest3)
-// })
-// // <tear-down code>
-// }
-//
-// Main
+// func TestTeardownParallel(t *testing.T) {
+// // This Run will not return until the parallel tests finish.
+// t.Run("group", func(t *testing.T) {
+// t.Run("Test1", parallelTest1)
+// t.Run("Test2", parallelTest2)
+// t.Run("Test3", parallelTest3)
+// })
+// // <tear-down code>
+// }
+//
+// # Main
//
// It is sometimes necessary for a test or benchmark program to do extra setup or teardown
// before or after it executes. It is also sometimes necessary to control
}
// evalArgs formats the list of arguments into a string. It is therefore equivalent to
+//
// fmt.Sprint(args...)
+//
// except that each argument is indirected (if a pointer), as required,
// using the same rules as the default string evaluation during template
// execution.
// Must is a helper that wraps a call to a function returning (*Template, error)
// and panics if the error is non-nil. It is intended for use in variable
// initializations such as
+//
// var t = template.Must(template.New("name").Parse("text"))
func Must(t *Template, err error) *Template {
if err != nil {
//
// missingkey: Control the behavior during execution if a map is
// indexed with a key that is not present in the map.
+//
// "missingkey=default" or "missingkey=invalid"
// The default behavior: Do nothing and continue execution.
// If printed, the result of the index operation is the string
}
// itemList:
+//
// textOrAction*
+//
// Terminates at {{end}} or {{else}}, returned separately.
func (t *Tree) itemList() (list *ListNode, next Node) {
list = t.newList(t.peekNonSpace().pos)
}
// textOrAction:
+//
// text | comment | action
func (t *Tree) textOrAction() Node {
switch token := t.nextNonSpace(); token.typ {
}
// Action:
+//
// control
// command ("|" command)*
+//
// Left delim is past. Now get actions.
// First word could be a keyword such as range.
func (t *Tree) action() (n Node) {
}
// Break:
+//
// {{break}}
+//
// Break keyword is past.
func (t *Tree) breakControl(pos Pos, line int) Node {
if token := t.nextNonSpace(); token.typ != itemRightDelim {
}
// Continue:
+//
// {{continue}}
+//
// Continue keyword is past.
func (t *Tree) continueControl(pos Pos, line int) Node {
if token := t.nextNonSpace(); token.typ != itemRightDelim {
}
// Pipeline:
+//
// declarations? command ('|' command)*
func (t *Tree) pipeline(context string, end itemType) (pipe *PipeNode) {
token := t.peekNonSpace()
}
// If:
+//
// {{if pipeline}} itemList {{end}}
// {{if pipeline}} itemList {{else}} itemList {{end}}
+//
// If keyword is past.
func (t *Tree) ifControl() Node {
return t.newIf(t.parseControl(true, "if"))
}
// Range:
+//
// {{range pipeline}} itemList {{end}}
// {{range pipeline}} itemList {{else}} itemList {{end}}
+//
// Range keyword is past.
func (t *Tree) rangeControl() Node {
r := t.newRange(t.parseControl(false, "range"))
}
// With:
+//
// {{with pipeline}} itemList {{end}}
// {{with pipeline}} itemList {{else}} itemList {{end}}
+//
// With keyword is past.
func (t *Tree) withControl() Node {
return t.newWith(t.parseControl(false, "with"))
}
// End:
+//
// {{end}}
+//
// End keyword is past.
func (t *Tree) endControl() Node {
return t.newEnd(t.expect(itemRightDelim, "end").pos)
}
// Else:
+//
// {{else}}
+//
// Else keyword is past.
func (t *Tree) elseControl() Node {
// Special case for "else if".
}
// Block:
+//
// {{block stringValue pipeline}}
+//
// Block keyword is past.
// The name must be something that can evaluate to a string.
// The pipeline is mandatory.
}
// Template:
+//
// {{template stringValue pipeline}}
+//
// Template keyword is past. The name must be something that can evaluate
// to a string.
func (t *Tree) templateControl() Node {
}
// command:
+//
// operand (space operand)*
+//
// space-separated arguments up to a pipeline character or right delimiter.
// we consume the pipe character but leave the right delim to terminate the action.
func (t *Tree) command() *CommandNode {
}
// operand:
+//
// term .Field*
+//
// An operand is a space-separated component of a command,
// a term possibly followed by field accesses.
// A nil return means the next item is not an operand.
}
// term:
+//
// literal (number, string, nil, boolean)
// function (identifier)
// .
// .Field
// $
// '(' pipeline ')'
+//
// A term is a simple "expression".
// A nil return means the next item is not a term.
func (t *Tree) term() Node {
// These are predefined layouts for use in Time.Format and time.Parse.
// The reference time used in these layouts is the specific time stamp:
+//
// 01/02 03:04:05PM '06 -0700
+//
// (January 2, 15:04:05, 2006, in time zone seven hours west of GMT).
// That value is recorded as the constant named Layout, listed below. As a Unix
// time, this is 1136239445. Since MST is GMT-0700, the reference would be
// printed by the Unix date command as:
+//
// Mon Jan 2 15:04:05 MST 2006
+//
// It is a regrettable historic error that the date uses the American convention
// of putting the numerical month before the day.
//
// AM/PM mark: "PM"
//
// Numeric time zone offsets format as follows:
+//
// "-0700" ±hhmm
// "-07:00" ±hh:mm
// "-07" ±hh
+//
// Replacing the sign in the format with a Z triggers
// the ISO 8601 behavior of printing Z instead of an
// offset for the UTC zone. Thus:
+//
// "Z0700" Z or ±hhmm
// "Z07:00" Z or ±hh:mm
// "Z07" Z or ±hh
}
// String returns the time formatted using the format string
+//
// "2006-01-02 15:04:05.999999999 -0700 MST"
//
// If the time has a monotonic clock reading, the returned string
// return value and drain the channel.
// For example, assuming the program has not received from t.C already:
//
-// if !t.Stop() {
-// <-t.C
-// }
+// if !t.Stop() {
+// <-t.C
+// }
//
// This cannot be done concurrent to other receives from the Timer's
// channel or other calls to the Timer's Stop method.
// the timer must be stopped and—if Stop reports that the timer expired
// before being stopped—the channel explicitly drained:
//
-// if !t.Stop() {
-// <-t.C
-// }
-// t.Reset(d)
+// if !t.Stop() {
+// <-t.C
+// }
+// t.Reset(d)
//
// This should not be done concurrent to other receives from the Timer's
// channel.
// The calendrical calculations always assume a Gregorian calendar, with
// no leap seconds.
//
-// Monotonic Clocks
+// # Monotonic Clocks
//
// Operating systems provide both a “wall clock,” which is subject to
// changes for clock synchronization, and a “monotonic clock,” which is
// For debugging, the result of t.String does include the monotonic
// clock reading if present. If t != u because of different monotonic clock readings,
// that difference will be visible when printing t.String() and u.String().
-//
package time
import (
// to avoid confusion across daylight savings time zone transitions.
//
// To count the number of units in a Duration, divide:
+//
// second := time.Second
// fmt.Print(int64(second/time.Millisecond)) // prints 1000
//
// To convert an integer number of units to a Duration, multiply:
+//
// seconds := 10
// fmt.Print(time.Duration(seconds)*time.Second) // prints 10s
const (
}
// norm returns nhi, nlo such that
+//
// hi * base + lo == nhi * base + nlo
// 0 <= nlo < base
func norm(hi, lo, base int) (nhi, nlo int) {
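A sketch consistent with the stated contract (this body is illustrative, not necessarily the actual implementation):

```go
package main

import "fmt"

// norm renormalizes a (hi, lo) pair so that 0 <= nlo < base
// while preserving the value hi*base + lo.
func norm(hi, lo, base int) (nhi, nlo int) {
	if lo < 0 {
		n := (-lo-1)/base + 1
		hi -= n
		lo += n * base
	}
	if lo >= base {
		n := lo / base
		hi += n
		lo -= n * base
	}
	return hi, lo
}

func main() {
	fmt.Println(norm(2, -1, 60)) // 1 59
	fmt.Println(norm(1, 75, 60)) // 2 15
}
```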
}
// Date returns the Time corresponding to
+//
// yyyy-mm-dd hh:mm:ss + nsec nanoseconds
+//
// in the appropriate zone for that time in the given location.
//
// The month, day, hour, min, sec, and nsec values may be outside
// The reference implementation in localtime.c from
// https://www.iana.org/time-zones/repository/releases/tzcode2013g.tar.gz
// implements the following algorithm for these cases:
-// 1) If the first zone is unused by the transitions, use it.
-// 2) Otherwise, if there are transition times, and the first
+// 1. If the first zone is unused by the transitions, use it.
+// 2. Otherwise, if there are transition times, and the first
// transition is to a zone in daylight time, find the first
// non-daylight-time zone before and closest to the first transition
// zone.
-// 3) Otherwise, use the first zone that is not daylight time, if
+// 3. Otherwise, use the first zone that is not daylight time, if
// there is one.
-// 4) Otherwise, use the first zone.
+// 4. Otherwise, use the first zone.
func (l *Location) lookupFirstZone() int {
// Case 1.
if !l.firstZoneUsed() {
// IsSpace reports whether the rune is a space character as defined
// by Unicode's White Space property; in the Latin-1 space
// this is
+//
// '\t', '\n', '\v', '\f', '\r', ' ', U+0085 (NEL), U+00A0 (NBSP).
+//
// Other definitions of spacing characters are set by category
// Z and property Pattern_White_Space.
func IsSpace(r rune) bool {
// means the character is in the corresponding case. There is a special
// case representing sequences of alternating corresponding Upper and Lower
// pairs. It appears with a fixed Delta of
+//
// {UpperLower, UpperLower, UpperLower}
+//
// The constant UpperLower has an otherwise impossible delta value.
type CaseRange struct {
Lo uint32
// If r is not a valid Unicode code point, SimpleFold(r) returns r.
//
// For example:
+//
// SimpleFold('A') = 'a'
// SimpleFold('a') = 'A'
//
// Pointer represents a pointer to an arbitrary type. There are four special operations
// available for type Pointer that are not available for other types:
-// - A pointer value of any type can be converted to a Pointer.
-// - A Pointer can be converted to a pointer value of any type.
-// - A uintptr can be converted to a Pointer.
-// - A Pointer can be converted to a uintptr.
+// - A pointer value of any type can be converted to a Pointer.
+// - A Pointer can be converted to a pointer value of any type.
+// - A uintptr can be converted to a Pointer.
+// - A Pointer can be converted to a uintptr.
+//
// Pointer therefore allows a program to defeat the type system and read and write
// arbitrary memory. It should be used with extreme care.
//