--- /dev/null
+Please answer these questions before submitting your issue. Thanks!
+
+1. What version of Go are you using (`go version`)?
+
+
+2. What operating system and processor architecture are you using (`go env`)?
+
+
+3. What did you do?
+(Use play.golang.org to provide a runnable example, if possible.)
+
+
+4. What did you expect to see?
+
+
+5. What did you see instead?
+
+
--- /dev/null
+Please do not send pull requests to the golang/* repositories.
+
+We do, however, take contributions gladly.
+
+See https://golang.org/doc/contribute.html
+
+Thanks!
src/cmd/cgo/zdefaultcc.go
src/cmd/go/zdefaultcc.go
src/cmd/internal/obj/zbootstrap.go
+src/go/build/zcgo.go
src/go/doc/headscan
src/runtime/internal/sys/zversion.go
src/unicode/maketables
<p>
The assembler is based on the input style of the Plan 9 assemblers, which is documented in detail
-<a href="http://plan9.bell-labs.com/sys/doc/asm.html">elsewhere</a>.
+<a href="https://9p.io/sys/doc/asm.html">elsewhere</a>.
If you plan to write assembly language, you should read that document, although much of it is Plan 9-specific.
The current document provides a summary of the syntax and the differences with
what is explained in that document, and
The most important thing to know about Go's assembler is that it is not a direct representation of the underlying machine.
Some of the details map precisely to the machine, but some do not.
This is because the compiler suite (see
-<a href="http://plan9.bell-labs.com/sys/doc/compiler.html">this description</a>)
+<a href="https://9p.io/sys/doc/compiler.html">this description</a>)
needs no assembler pass in the usual pipeline.
Instead, the compiler operates on a kind of semi-abstract instruction set,
and instruction selection occurs partly after code generation.
</ul>
+<p>
+When using the compiler and assembler's
+<code>-dynlink</code> or <code>-shared</code> modes,
+any load or store of a fixed memory location such as a global variable
+must be assumed to overwrite <code>CX</code>.
+Therefore, to be safe for use with these modes,
+assembly sources should typically avoid <code>CX</code> except between memory references.
+</p>
+
<h3 id="amd64">64-bit x86 (a.k.a. amd64)</h3>
<p>
The full address syntax is summarized in this table
(an excerpt of Table II from
- <a href="http://plan9.bell-labs.com/sys/doc/sam/sam.html">The text editor <code>sam</code></a>):
+ <a href="https://9p.io/sys/doc/sam/sam.html">The text editor <code>sam</code></a>):
<br/><br/>
<table>
</p>
<p>
-Note to Git aficionados: The <code>git-codereview</code> command is not required to
+<b>Note to Git aficionados:</b>
+The <code>git-codereview</code> command is not required to
upload and manage Gerrit code reviews. For those who prefer plain Git, the text
-below gives the Git equivalent of each git-codereview command. If you do use plain
+below gives the Git equivalent of each git-codereview command.
+</p>
+
+<p>If you do use plain
Git, note that you still need the commit hooks that the git-codereview command
configures; those hooks add a Gerrit <code>Change-Id</code> line to the commit
message and check that all Go source files have been formatted with gofmt. Even
checkout by running <code>git-codereview</code> <code>hooks</code>.
</p>
+<p>
+The workflow described below assumes a single change per branch.
+It is also possible to prepare a sequence of (usually related) changes in a single branch.
+See the <a href="https://golang.org/x/review/git-codereview">git-codereview documentation</a> for details.
+</p>
+
<h3 id="git-config">Set up git aliases</h3>
<p>
See the <a href="/security">security policy</a> for more details.
</p>
+<h2 id="go1.6">go1.6 (released 2016/02/17)</h2>
+
+<p>
+Go 1.6 is a major release of Go.
+Read the <a href="/doc/go1.6">Go 1.6 Release Notes</a> for more information.
+</p>
+
<h2 id="go1.5">go1.5 (released 2015/08/19)</h2>
<p>
</p>
<pre>
-// Compile parses a regular expression and returns, if successful, a Regexp
-// object that can be used to match against text.
-func Compile(str string) (regexp *Regexp, err error) {
+// Compile parses a regular expression and returns, if successful,
+// a Regexp that can be used to match against text.
+func Compile(str string) (*Regexp, error) {
</pre>
<p>
<!--{
- "Title": "Go 1.6 Release Notes DRAFT",
+ "Title": "Go 1.6 Release Notes",
"Path": "/doc/go1.6",
"Template": true
}-->
ul li { margin: 0.5em 0; }
</style>
-<p>
-<i>NOTE: This is a DRAFT of the Go 1.6 release notes, prepared for the Go 1.6 beta.
-Go 1.6 has NOT yet been released.
-By our regular schedule, it is expected some time in February 2016.
-</i>
-</p>
-
<h2 id="introduction">Introduction to Go 1.6</h2>
<p>
Go 1.6 adds support for later SDK versions.
</p>
-<pre>
-TODO: CX no longer available on 386 assembly? (https://golang.org/cl/16386)
-</pre>
+<p>
+On 32-bit x86 systems using the <code>-dynlink</code> or <code>-shared</code> compilation modes,
+the register <code>CX</code> is now overwritten by certain memory references and should
+be avoided in hand-written assembly.
+See the <a href="/doc/asm#x86">assembly documentation</a> for details.
+</p>
<h2 id="tools">Tools</h2>
On average the programs in the Go 1 benchmark suite run a few percent faster in Go 1.6
than they did in Go 1.5.
The garbage collector's pauses are even lower than in Go 1.5,
-although the effect is likely only noticeable for programs using
+especially for programs using
a large amount of memory.
</p>
adds support for general compressed ELF sections.
User code needs no updating: the sections are decompressed automatically when read.
However, compressed
-<a href="/pkg/debug/elf/#Section"><code>Section</code></a>'s do not support random access:
+<a href="/pkg/debug/elf/#Section"><code>Sections</code></a> do not support random access:
they have a nil <code>ReaderAt</code> field.
</li>
Also in the <a href="/pkg/fmt/"><code>fmt</code></a> package,
<a href="/pkg/fmt/#Scanf"><code>Scanf</code></a> can now scan hexadecimal strings using %X, as an alias for %x.
Both formats accept any mix of upper- and lower-case hexadecimal.
-<a href="https://golang.org/issues/13585">TODO: Keep?</a>
</li>
<li>
<a href="/pkg/net/#LookupAddr"><code>LookupAddr</code></a>
now returns rooted domain names (with a trailing dot)
on Plan 9 and Windows, to match the behavior of Go on Unix systems.
-TODO: Third, lookups satisfied from /etc/hosts now add a trailing dot as well,
-so that looking up 127.0.0.1 typically now returns “localhost.” not “localhost”.
-This is arguably a mistake but is not yet fixed. See https://golang.org/issue/13564.
</li>
<li>
<li>
The <a href="/pkg/strconv/"><code>strconv</code></a> package adds
<a href="/pkg/strconv/#IsGraphic"><code>IsGraphic</code></a>,
+similar to <a href="/pkg/strconv/#IsPrint"><code>IsPrint</code></a>.
+It also adds
<a href="/pkg/strconv/#QuoteToGraphic"><code>QuoteToGraphic</code></a>,
<a href="/pkg/strconv/#QuoteRuneToGraphic"><code>QuoteRuneToGraphic</code></a>,
<a href="/pkg/strconv/#AppendQuoteToGraphic"><code>AppendQuoteToGraphic</code></a>,
and
<a href="/pkg/strconv/#AppendQuoteRuneToGraphic"><code>AppendQuoteRuneToGraphic</code></a>,
analogous to
-<a href="/pkg/strconv/#IsPrint"><code>IsPrint</code></a>,
-<a href="/pkg/strconv/#QuoteToPrint"><code>QuoteToPrint</code></a>,
+<a href="/pkg/strconv/#QuoteToASCII"><code>QuoteToASCII</code></a>,
+<a href="/pkg/strconv/#QuoteRuneToASCII"><code>QuoteRuneToASCII</code></a>,
and so on.
-The <code>Print</code> family escapes all space characters except ASCII space (U+0020).
+The <code>ASCII</code> family escapes all space characters except ASCII space (U+0020).
In contrast, the <code>Graphic</code> family does not escape any Unicode space characters (category Zs).
</li>
--- /dev/null
+Tools:
+
+cmd/dist: add list subcommand to list all supported platforms (CL 19837)
+cmd/go: GO15VENDOREXPERIMENT gone, assumed on (CL 19615)
+cmd/link: "-X name value" form gone (CL 19614)
+cmd/compile: smaller binaries (many CLs)
+cmd/go, go/build: add support for Fortran (CL 19670, CL 4114)
+
+Ports:
+
+We now require OpenBSD 5.6+ (CL 18219, crypto/rand using getentropy)
+plan9/arm support? Start at least.
+
+API additions and behavior changes:
+
+runtime: add CallerFrames and Frames (CL 19869)
+testing/quick: now generates nil values (CL 16470)
+net/url: support query string without values (CL 19931)
+net/textproto: permit all valid token chars in CanonicalMIMEHeaderKey input (CL 18725)
<p>
The mascot and logo were designed by
<a href="http://reneefrench.blogspot.com">Renée French</a>, who also designed
-<a href="http://plan9.bell-labs.com/plan9/glenda.html">Glenda</a>,
+<a href="https://9p.io/plan9/glenda.html">Glenda</a>,
the Plan 9 bunny.
The <a href="https://blog.golang.org/gopher">gopher</a>
is derived from one she used for a <a href="http://wfmu.org/">WFMU</a>
<!--{
"Title": "The Go Programming Language Specification",
- "Subtitle": "Version of January 5, 2016",
+ "Subtitle": "Version of February 23, 2016",
"Path": "/ref/spec"
}-->
Selector = "." identifier .
Index = "[" Expression "]" .
-Slice = "[" ( [ Expression ] ":" [ Expression ] ) |
- ( [ Expression ] ":" Expression ":" Expression )
- "]" .
+Slice = "[" [ Expression ] ":" [ Expression ] "]" |
+ "[" [ Expression ] ":" Expression ":" Expression "]" .
TypeAssertion = "." "(" Type ")" .
Arguments = "(" [ ( ExpressionList | Type [ "," ExpressionList ] ) [ "..." ] [ "," ] ] ")" .
</pre>
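The two productions above describe simple and full slice expressions; a minimal sketch (values are arbitrary):

```go
package main

import "fmt"

func main() {
	s := []int{1, 2, 3, 4, 5}

	a := s[1:3]   // simple slice expression: "[" [ Expression ] ":" [ Expression ] "]"
	b := s[1:3:4] // full slice expression: the third index limits the capacity

	fmt.Println(len(a), cap(a)) // 2 4
	fmt.Println(len(b), cap(b)) // 2 3
}
```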
<pre>
$ git clone https://go.googlesource.com/go
$ cd go
-$ git checkout go1.5.3
+$ git checkout go1.6
</pre>
<h2 id="head">(Optional) Switch to the master branch</h2>
<a href="//groups.google.com/group/golang-announce">golang-announce</a>
mailing list.
Each announcement mentions the latest release tag, for instance,
-<code>go1.5.3</code>.
+<code>go1.6</code>.
</p>
<p>
<th width="50"></th><th align="left" width="100"><code>$GOOS</code></th> <th align="left" width="100"><code>$GOARCH</code></th>
</tr>
<tr>
+<td></td><td><code>android</code></td> <td><code>arm</code></td>
+</tr>
+<tr>
<td></td><td><code>darwin</code></td> <td><code>386</code></td>
</tr>
<tr>
body: `i := 0; p := &S{p:&i}; s := p.a[:]; C.f(unsafe.Pointer(&s[0]))`,
fail: false,
},
+ {
+ // Passing the address of a slice of an array that is
+ // an element in a struct, with a type conversion.
+ name: "slice-ok-4",
+ c: `typedef void* PV; void f(PV p) {}`,
+ imports: []string{"unsafe"},
+ support: `type S struct { p *int; a [4]byte }`,
+ body: `i := 0; p := &S{p:&i}; C.f(C.PV(unsafe.Pointer(&p.a[0])))`,
+ fail: false,
+ },
{
// Passing the address of a static variable with no
// pointers doesn't matter.
--- /dev/null
+! Copyright 2016 The Go Authors. All rights reserved.
+! Use of this source code is governed by a BSD-style
+! license that can be found in the LICENSE file.
+
+function the_answer() result(j) bind(C)
+ use iso_c_binding, only: c_int
+ integer(c_int) :: j ! output
+ j = 42
+end function the_answer
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package fortran
+
+// int the_answer();
+import "C"
+
+func TheAnswer() int {
+ return int(C.the_answer())
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package fortran
+
+import "testing"
+
+func TestFortran(t *testing.T) {
+ if a := TheAnswer(); a != 42 {
+ t.Errorf("Unexpected result for The Answer. Got: %d Want: 42", a)
+ }
+}
--- /dev/null
+ program HelloWorldF90
+ write(*,*) "Hello World!"
+ end program HelloWorldF90
--- /dev/null
+#!/usr/bin/env bash
+# Copyright 2016 The Go Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style
+# license that can be found in the LICENSE file.
+
+# This directory is intended to test the use of Fortran with cgo.
+
+set -e
+
+FC=$1
+
+status=0
+
+if ! $FC helloworld/helloworld.f90 -o main.exe >& /dev/null; then
+ echo "skipping Fortran test: could not build helloworld.f90 with $FC"
+ exit 0
+fi
+rm -f main.exe
+
+if ! go test; then
+ echo "FAIL: go test"
+ status=1
+fi
+
+exit $status
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Issue 13930. Test that cgo's multiple-value special form for
+// C function calls works in variable declaration statements.
+
+package cgotest
+
+// #include <stdlib.h>
+import "C"
+
+var _, _ = C.abs(0)
if [ ! -f src/libgo/libgo.go ]; then
cwd=$(pwd)
- echo 'misc/cgo/testcshared/test.bash is running in $cwd' 1>&2
+ echo "misc/cgo/testcshared/test.bash is running in $cwd" 1>&2
exit 1
fi
fi
export CC
+msan=yes
+
TMPDIR=${TMPDIR:-/tmp}
-echo > ${TMPDIR}/testsanitizers$$.c
+echo 'int main() { return 0; }' > ${TMPDIR}/testsanitizers$$.c
if $CC -fsanitize=memory -c ${TMPDIR}/testsanitizers$$.c -o ${TMPDIR}/testsanitizers$$.o 2>&1 | grep "unrecognized" >& /dev/null; then
- echo "skipping msan test: -fsanitize=memory not supported"
- rm -f ${TMPDIR}/testsanitizers$$.*
- exit 0
+ echo "skipping msan tests: -fsanitize=memory not supported"
+ msan=no
fi
rm -f ${TMPDIR}/testsanitizers$$.*
-# The memory sanitizer in versions of clang before 3.6 don't work with Go.
-if $CC --version | grep clang >& /dev/null; then
+tsan=yes
+
+# The memory and thread sanitizers in versions of clang before 3.6
+# don't work with Go.
+if test "$msan" = "yes" && $CC --version | grep clang >& /dev/null; then
ver=$($CC --version | sed -e 's/.* version \([0-9.-]*\).*/\1/')
major=$(echo $ver | sed -e 's/\([0-9]*\).*/\1/')
minor=$(echo $ver | sed -e 's/[0-9]*\.\([0-9]*\).*/\1/')
if test "$major" -lt 3 || test "$major" -eq 3 -a "$minor" -lt 6; then
- echo "skipping msan test; clang version $major.$minor (older than 3.6)"
- exit 0
+ echo "skipping msan/tsan tests: clang version $major.$minor (older than 3.6)"
+ msan=no
+ tsan=no
fi
# Clang before 3.8 does not work with Linux at or after 4.1.
# golang.org/issue/12898.
- if test "$major" -lt 3 || test "$major" -eq 3 -a "$minor" -lt 8; then
+ if test "$msan" = "yes" && { test "$major" -lt 3 || test "$major" -eq 3 -a "$minor" -lt 8; }; then
if test "$(uname)" = Linux; then
linuxver=$(uname -r)
linuxmajor=$(echo $linuxver | sed -e 's/\([0-9]*\).*/\1/')
linuxminor=$(echo $linuxver | sed -e 's/[0-9]*\.\([0-9]*\).*/\1/')
if test "$linuxmajor" -gt 4 || test "$linuxmajor" -eq 4 -a "$linuxminor" -ge 1; then
- echo "skipping msan test; clang version $major.$minor (older than 3.8) incompatible with linux version $linuxmajor.$linuxminor (4.1 or newer)"
- exit 0
+ echo "skipping msan/tsan tests: clang version $major.$minor (older than 3.8) incompatible with linux version $linuxmajor.$linuxminor (4.1 or newer)"
+ msan=no
+ tsan=no
fi
fi
fi
status=0
-if ! go build -msan std; then
- echo "FAIL: build -msan std"
- status=1
-fi
+if test "$msan" = "yes"; then
+ if ! go build -msan std; then
+ echo "FAIL: build -msan std"
+ status=1
+ fi
-if ! go run -msan msan.go; then
- echo "FAIL: msan"
- status=1
-fi
+ if ! go run -msan msan.go; then
+ echo "FAIL: msan"
+ status=1
+ fi
-if ! CGO_LDFLAGS="-fsanitize=memory" CGO_CPPFLAGS="-fsanitize=memory" go run -msan -a msan2.go; then
- echo "FAIL: msan2 with -fsanitize=memory"
- status=1
-fi
+ if ! CGO_LDFLAGS="-fsanitize=memory" CGO_CPPFLAGS="-fsanitize=memory" go run -msan -a msan2.go; then
+ echo "FAIL: msan2 with -fsanitize=memory"
+ status=1
+ fi
-if ! go run -msan -a msan2.go; then
- echo "FAIL: msan2"
- status=1
-fi
+ if ! go run -msan -a msan2.go; then
+ echo "FAIL: msan2"
+ status=1
+ fi
-if ! go run -msan msan3.go; then
- echo "FAIL: msan3"
- status=1
+ if ! go run -msan msan3.go; then
+ echo "FAIL: msan3"
+ status=1
+ fi
+
+ if ! go run -msan msan4.go; then
+ echo "FAIL: msan4"
+ status=1
+ fi
+
+ if go run -msan msan_fail.go 2>/dev/null; then
+ echo "FAIL: msan_fail"
+ status=1
+ fi
fi
-if ! go run -msan msan4.go; then
- echo "FAIL: msan4"
- status=1
+if test "$tsan" = "yes"; then
+ echo 'int main() { return 0; }' > ${TMPDIR}/testsanitizers$$.c
+ ok=yes
+ if ! $CC -fsanitize=thread ${TMPDIR}/testsanitizers$$.c -o ${TMPDIR}/testsanitizers$$ &> ${TMPDIR}/testsanitizers$$.err; then
+ ok=no
+ fi
+ if grep "unrecognized" ${TMPDIR}/testsanitizers$$.err >& /dev/null; then
+ echo "skipping tsan tests: -fsanitize=thread not supported"
+ tsan=no
+ elif test "$ok" != "yes"; then
+ cat ${TMPDIR}/testsanitizers$$.err
+    echo "skipping tsan tests: -fsanitize=thread build failed"
+ tsan=no
+ fi
+ rm -f ${TMPDIR}/testsanitizers$$*
fi
-if go run -msan msan_fail.go 2>/dev/null; then
- echo "FAIL: msan_fail"
- status=1
+if test "$tsan" = "yes"; then
+ err=${TMPDIR}/tsanerr$$.out
+
+ if ! go run tsan.go 2>$err; then
+ cat $err
+ echo "FAIL: tsan"
+ status=1
+ elif grep -i warning $err >/dev/null 2>&1; then
+ cat $err
+ echo "FAIL: tsan"
+ status=1
+ fi
+
+ if ! go run tsan2.go 2>$err; then
+ cat $err
+ echo "FAIL: tsan2"
+ status=1
+ elif grep -i warning $err >/dev/null 2>&1; then
+ cat $err
+ echo "FAIL: tsan2"
+ status=1
+ fi
+
+ rm -f $err
fi
exit $status
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+// This program produced false race reports when run under the C/C++
+// ThreadSanitizer, as it did not understand the synchronization in
+// the Go code.
+
+/*
+#cgo CFLAGS: -fsanitize=thread
+#cgo LDFLAGS: -fsanitize=thread
+
+int val;
+
+int getVal() {
+ return val;
+}
+
+void setVal(int i) {
+ val = i;
+}
+*/
+import "C"
+
+import (
+ "runtime"
+)
+
+func main() {
+ runtime.LockOSThread()
+ C.setVal(1)
+ c := make(chan bool)
+ go func() {
+ runtime.LockOSThread()
+ C.setVal(2)
+ c <- true
+ }()
+ <-c
+ if v := C.getVal(); v != 2 {
+ panic(v)
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+// This program produced false race reports when run under the C/C++
+// ThreadSanitizer, as it did not understand the synchronization in
+// the Go code.
+
+/*
+#cgo CFLAGS: -fsanitize=thread
+#cgo LDFLAGS: -fsanitize=thread
+
+extern void GoRun(void);
+
+// Yes, you can have definitions if you use //export, as long as they are weak.
+
+int val __attribute__ ((weak));
+
+int run(void) __attribute__ ((weak));
+
+int run() {
+ val = 1;
+ GoRun();
+ return val;
+}
+
+void setVal(int) __attribute__ ((weak));
+
+void setVal(int i) {
+ val = i;
+}
+*/
+import "C"
+
+import "runtime"
+
+//export GoRun
+func GoRun() {
+ runtime.LockOSThread()
+ c := make(chan bool)
+ go func() {
+ runtime.LockOSThread()
+ C.setVal(2)
+ c <- true
+ }()
+ <-c
+}
+
+func main() {
+ if v := C.run(); v != 2 {
+ panic(v)
+ }
+}
#
# This script does not handle file names that contain spaces.
-gofiles=$(git diff --cached --name-only --diff-filter=ACM | grep '.go$')
+gofiles=$(git diff --cached --name-only --diff-filter=ACM | grep '\.go$')
[ -z "$gofiles" ] && exit 0
unformatted=$(gofmt -l $gofiles)
internal
objfile
objfile.go
+ unvendor
+ golang.org
+ x
+ arch
+ arm
+ armasm
+ testdata
+ +
+ x86
+ x86asm
+ testdata
+ +
gofmt
gofmt.go
gofmt_test.go
newlink
testdata
+
- vendor
- golang.org
- x
- arch
- arm
- armasm
- testdata
- +
- x86
- x86asm
- testdata
- +
archive
tar
testdata
// succeed, and seems harmless enough.
ext.ModTime = hdr.ModTime
// The spec asks that we namespace our pseudo files
- // with the current pid. However, this results in differing outputs
- // for identical inputs. As such, the constant 0 is now used instead.
+ // with the current pid. However, this results in differing outputs
+ // for identical inputs. As such, the constant 0 is now used instead.
// golang.org/issue/12358
dir, file := path.Split(hdr.Name)
fullName := path.Join(dir, "PaxHeaders.0", file)
// Open returns a ReadCloser that provides access to the File's contents.
// Multiple files may be read concurrently.
-func (f *File) Open() (rc io.ReadCloser, err error) {
+func (f *File) Open() (io.ReadCloser, error) {
bodyOffset, err := f.findBodyOffset()
if err != nil {
- return
+ return nil, err
}
size := int64(f.CompressedSize64)
r := io.NewSectionReader(f.zipr, f.headerOffset+bodyOffset, size)
dcomp := f.zip.decompressor(f.Method)
if dcomp == nil {
- err = ErrAlgorithm
- return
+ return nil, ErrAlgorithm
}
- rc = dcomp(r)
+ var rc io.ReadCloser = dcomp(r)
var desr io.Reader
if f.hasDataDescriptor() {
desr = io.NewSectionReader(f.zipr, f.headerOffset+bodyOffset+size, dataDescriptorLen)
f: f,
desr: desr,
}
- return
+ return rc, nil
}
type checksumReader struct {
}
// Close finishes writing the zip file by writing the central directory.
-// It does not (and can not) close the underlying writer.
+// It does not (and cannot) close the underlying writer.
func (w *Writer) Close() error {
if w.last != nil && !w.last.closed {
if err := w.last.close(); err != nil {
// ReadByte reads and returns a single byte.
// If no byte is available, returns an error.
-func (b *Reader) ReadByte() (c byte, err error) {
+func (b *Reader) ReadByte() (byte, error) {
b.lastRuneSize = -1
for b.r == b.w {
if b.err != nil {
}
b.fill() // buffer is empty
}
- c = b.buf[b.r]
+ c := b.buf[b.r]
b.r++
b.lastByte = int(c)
return c, nil
}
-// UnreadByte unreads the last byte. Only the most recently read byte can be unread.
+// UnreadByte unreads the last byte. Only the most recently read byte can be unread.
func (b *Reader) UnreadByte() error {
if b.lastByte < 0 || b.r == 0 && b.w > 0 {
return ErrInvalidUnreadByte
return r, size, nil
}
-// UnreadRune unreads the last rune. If the most recent read operation on
+// UnreadRune unreads the last rune. If the most recent read operation on
// the buffer was not a ReadRune, UnreadRune returns an error. (In this
// regard it is stricter than UnreadByte, which will unread the last byte
// from any read operation.)
// ReadBytes returns err != nil if and only if the returned data does not end in
// delim.
// For simple uses, a Scanner may be more convenient.
-func (b *Reader) ReadBytes(delim byte) (line []byte, err error) {
+func (b *Reader) ReadBytes(delim byte) ([]byte, error) {
	// Use ReadSlice to look for delim,
// accumulating full buffers.
var frag []byte
var full [][]byte
-
+ var err error
for {
var e error
frag, e = b.ReadSlice(delim)
// ReadString returns err != nil if and only if the returned data does not end in
// delim.
// For simple uses, a Scanner may be more convenient.
-func (b *Reader) ReadString(delim byte) (line string, err error) {
+func (b *Reader) ReadString(delim byte) (string, error) {
bytes, err := b.ReadBytes(delim)
- line = string(bytes)
- return line, err
+ return string(bytes), err
}
// WriteTo implements io.WriterTo.
}
}
if err == io.EOF {
- // If we filled the buffer exactly, flush pre-emptively.
+ // If we filled the buffer exactly, flush preemptively.
if b.Available() == 0 {
err = b.flush()
} else {
// Test that an EOF is overridden by a user-generated scan error.
func TestErrAtEOF(t *testing.T) {
s := NewScanner(strings.NewReader("1 2 33"))
- // This spitter will fail on last entry, after s.err==EOF.
+ // This splitter will fail on last entry, after s.err==EOF.
split := func(data []byte, atEOF bool) (advance int, token []byte, err error) {
advance, token, err = ScanWords(data, atEOF)
if len(token) > 1 {
pattern=.
fi
-# put linux, nacl first in the target list to get all the architectures up front.
-targets="$((ls runtime | sed -n 's/^rt0_\(.*\)_\(.*\)\.s/\1-\2/p'; echo linux-386-387 linux-arm-arm5) | sort | sed -e 's|linux-mips64x|linux-mips64 linux-mips64le|' | egrep -v android-arm | egrep "$pattern" | egrep 'linux|nacl')
-$(ls runtime | sed -n 's/^rt0_\(.*\)_\(.*\)\.s/\1-\2/p' | egrep -v 'android-arm|darwin-arm' | egrep "$pattern" | egrep -v 'linux|nacl')"
-
./make.bash || exit 1
GOROOT="$(cd .. && pwd)"
+# put linux, nacl first in the target list to get all the architectures up front.
+targets="$((../bin/go tool dist list | sed -n 's/^\(.*\)\/\(.*\)/\1-\2/p'; echo linux-386-387 linux-arm-arm5) | sort | egrep -v android-arm | egrep "$pattern" | egrep 'linux|nacl')
+$(../bin/go tool dist list | sed -n 's/^\(.*\)\/\(.*\)/\1-\2/p' | egrep -v 'android-arm|darwin-arm' | egrep "$pattern" | egrep -v 'linux|nacl')"
+
failed=false
for target in $targets
do
type Buffer struct {
buf []byte // contents are the bytes buf[off : len(buf)]
off int // read at &buf[off], write at &buf[len(buf)]
- runeBytes [utf8.UTFMax]byte // avoid allocation of slice on each WriteByte or Rune
+ runeBytes [utf8.UTFMax]byte // avoid allocation of slice on each call to WriteRune
bootstrap [64]byte // memory to hold first slice; helps small buffers (Printf) avoid allocation.
lastRead readOp // last read operation, so that Unread* can work correctly.
}
func (b *Buffer) Bytes() []byte { return b.buf[b.off:] }
// String returns the contents of the unread portion of the buffer
-// as a string. If the Buffer is a nil pointer, it returns "<nil>".
+// as a string. If the Buffer is a nil pointer, it returns "<nil>".
func (b *Buffer) String() string {
if b == nil {
// Special case, useful in debugging.
}
// MinRead is the minimum slice size passed to a Read call by
-// Buffer.ReadFrom. As long as the Buffer has at least MinRead bytes beyond
+// Buffer.ReadFrom. As long as the Buffer has at least MinRead bytes beyond
// what is required to hold the contents of r, ReadFrom will not grow the
// underlying buffer.
const MinRead = 512
}
// Read reads the next len(p) bytes from the buffer or until the buffer
-// is drained. The return value n is the number of bytes read. If the
+// is drained. The return value n is the number of bytes read. If the
// buffer has no data to return, err is io.EOF (unless len(p) is zero);
// otherwise it is nil.
func (b *Buffer) Read(p []byte) (n int, err error) {
// ReadByte reads and returns the next byte from the buffer.
// If no byte is available, it returns error io.EOF.
-func (b *Buffer) ReadByte() (c byte, err error) {
+func (b *Buffer) ReadByte() (byte, error) {
b.lastRead = opInvalid
if b.off >= len(b.buf) {
// Buffer is empty, reset to recover space.
b.Truncate(0)
return 0, io.EOF
}
- c = b.buf[b.off]
+ c := b.buf[b.off]
b.off++
b.lastRead = opRead
return c, nil
}
// UnreadByte unreads the last byte returned by the most recent
-// read operation. If write has happened since the last read, UnreadByte
+// read operation. If write has happened since the last read, UnreadByte
// returns an error.
func (b *Buffer) UnreadByte() error {
if b.lastRead != opReadRune && b.lastRead != opRead {
}
// NewBuffer creates and initializes a new Buffer using buf as its initial
-// contents. It is intended to prepare a Buffer to read existing data. It
+// contents. It is intended to prepare a Buffer to read existing data. It
// can also be used to size the internal buffer for writing. To do that,
// buf should have the desired capacity but a length of zero.
//
// IndexAny interprets s as a sequence of UTF-8-encoded Unicode code points.
// It returns the byte index of the first occurrence in s of any of the Unicode
-// code points in chars. It returns -1 if chars is empty or if there is no code
+// code points in chars. It returns -1 if chars is empty or if there is no code
// point in common.
func IndexAny(s []byte, chars string) int {
if len(chars) > 0 {
}
// LastIndexAny interprets s as a sequence of UTF-8-encoded Unicode code
-// points. It returns the byte index of the last occurrence in s of any of
-// the Unicode code points in chars. It returns -1 if chars is empty or if
+// points. It returns the byte index of the last occurrence in s of any of
+// the Unicode code points in chars. It returns -1 if chars is empty or if
// there is no code point in common.
func LastIndexAny(s []byte, chars string) int {
if len(chars) > 0 {
// FieldsFunc interprets s as a sequence of UTF-8-encoded Unicode code points.
// It splits the slice s at each run of code points c satisfying f(c) and
-// returns a slice of subslices of s. If all code points in s satisfy f(c), or
+// returns a slice of subslices of s. If all code points in s satisfy f(c), or
// len(s) == 0, an empty slice is returned.
// FieldsFunc makes no guarantees about the order in which it calls f(c).
// If f does not return consistent results for a given c, FieldsFunc may crash.
// Map returns a copy of the byte slice s with all its characters modified
// according to the mapping function. If mapping returns a negative value, the character is
-// dropped from the string with no replacement. The characters in s and the
+// dropped from the string with no replacement. The characters in s and the
// output are interpreted as UTF-8-encoded Unicode code points.
func Map(mapping func(r rune) rune, s []byte) []byte {
// In the worst case, the slice can grow when mapped, making
- // things unpleasant. But it's so rare we barge in assuming it's
- // fine. It could also shrink but that falls out naturally.
+ // things unpleasant. But it's so rare we barge in assuming it's
+ // fine. It could also shrink but that falls out naturally.
maxbytes := len(s) // length of b
nbytes := 0 // number of bytes encoded in b
b := make([]byte, maxbytes)
return false
}
- // General case. SimpleFold(x) returns the next equivalent rune > x
+ // General case. SimpleFold(x) returns the next equivalent rune > x
// or wraps around to smaller values.
r := unicode.SimpleFold(sr)
for r != sr && r < tr {
return false
}
- // One string is empty. Are both?
+ // One string is empty. Are both?
return len(s) == len(t)
}
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
}
-// make sure Equal returns false for minimally different strings. The data
+// make sure Equal returns false for minimally different strings. The data
// is all zeros except for a single one in one location.
func TestNotEqual(t *testing.T) {
var size = 128
}
}
+// test a small index across all page offsets
+func TestIndexByteSmall(t *testing.T) {
+ b := make([]byte, 5015) // bigger than a page
+ // Make sure we find the correct byte even when straddling a page.
+ for i := 0; i <= len(b)-15; i++ {
+ for j := 0; j < 15; j++ {
+ b[i+j] = byte(100 + j)
+ }
+ for j := 0; j < 15; j++ {
+ p := IndexByte(b[i:i+15], byte(100+j))
+ if p != j {
+ t.Errorf("IndexByte(%q, %d) = %d", b[i:i+15], 100+j, p)
+ }
+ }
+ for j := 0; j < 15; j++ {
+ b[i+j] = 0
+ }
+ }
+ // Make sure matches outside the slice never trigger.
+ for i := 0; i <= len(b)-15; i++ {
+ for j := 0; j < 15; j++ {
+ b[i+j] = 1
+ }
+ for j := 0; j < 15; j++ {
+ p := IndexByte(b[i:i+15], byte(0))
+ if p != -1 {
+ t.Errorf("IndexByte(%q, %d) = %d", b[i:i+15], 0, p)
+ }
+ }
+ for j := 0; j < 15; j++ {
+ b[i+j] = 0
+ }
+ }
+}
+
func TestIndexRune(t *testing.T) {
for _, tt := range indexRuneTests {
a := []byte(tt.a)
var bmbuf []byte
+func BenchmarkIndexByte10(b *testing.B) { bmIndexByte(b, IndexByte, 10) }
func BenchmarkIndexByte32(b *testing.B) { bmIndexByte(b, IndexByte, 32) }
func BenchmarkIndexByte4K(b *testing.B) { bmIndexByte(b, IndexByte, 4<<10) }
func BenchmarkIndexByte4M(b *testing.B) { bmIndexByte(b, IndexByte, 4<<20) }
func BenchmarkIndexByte64M(b *testing.B) { bmIndexByte(b, IndexByte, 64<<20) }
+func BenchmarkIndexBytePortable10(b *testing.B) { bmIndexByte(b, IndexBytePortable, 10) }
func BenchmarkIndexBytePortable32(b *testing.B) { bmIndexByte(b, IndexBytePortable, 32) }
func BenchmarkIndexBytePortable4K(b *testing.B) { bmIndexByte(b, IndexBytePortable, 4<<10) }
func BenchmarkIndexBytePortable4M(b *testing.B) { bmIndexByte(b, IndexBytePortable, 4<<20) }
// Run a couple of awful growth/shrinkage tests
a := tenRunes('a')
- // 1. Grow. This triggers two reallocations in Map.
+ // 1. Grow. This triggers two reallocations in Map.
maxRune := func(r rune) rune { return unicode.MaxRune }
m := Map(maxRune, []byte(a))
expect := tenRunes(unicode.MaxRune)
a := make([]byte, n+1)
b := make([]byte, n+1)
for len := 0; len < 128; len++ {
- // randomish but deterministic data. No 0 or 255.
+ // randomish but deterministic data. No 0 or 255.
for i := 0; i < len; i++ {
a[i] = byte(1 + 31*i%254)
b[i] = byte(1 + 31*i%254)
)
// This file tests the situation where memeq is checking
-// data very near to a page boundary. We want to make sure
+// data very near to a page boundary. We want to make sure
// equal does not read across the boundary and cause a page
// fault where it shouldn't.
-// This test runs only on linux. The code being tested is
+// This test runs only on linux. The code being tested is
// not OS-specific, so it does not need to be tested on all
// operating systems.
return
}
-func (r *Reader) ReadByte() (b byte, err error) {
+func (r *Reader) ReadByte() (byte, error) {
r.prevRune = -1
if r.i >= int64(len(r.s)) {
return 0, io.EOF
}
- b = r.s[r.i]
+ b := r.s[r.i]
r.i++
- return
+ return b, nil
}
func (r *Reader) UnreadByte() error {
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
for _, context := range contexts {
w := NewWalker(context, filepath.Join(build.Default.GOROOT, "src"))
for _, name := range pkgNames {
- if name != "unsafe" && !strings.HasPrefix(name, "cmd/") {
+ if name != "unsafe" && !strings.HasPrefix(name, "cmd/") && !internalPkg.MatchString(name) {
pkg, _ := w.Import(name)
w.export(pkg)
}
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Flags:
-D value
- predefined symbol with optional simple value -D=identifer=value;
+ predefined symbol with optional simple value -D=identifier=value;
can be set multiple times
-I value
include directory; can be set multiple times
instructions["MOVDQ2Q"] = x86.AMOVQ
instructions["MOVNTDQ"] = x86.AMOVNTO
instructions["MOVOA"] = x86.AMOVO
- instructions["PF2ID"] = x86.APF2IL
- instructions["PI2FD"] = x86.API2FL
instructions["PSLLDQ"] = x86.APSLLO
instructions["PSRLDQ"] = x86.APSRLO
instructions["PADDD"] = x86.APADDL
t.Errorf(format, args...)
ok = false
}
- obj.Flushplist(ctxt)
+ obj.FlushplistNoFree(ctxt)
for p := top; p != nil; p = p.Link {
if p.As == obj.ATEXT {
{"(SI)(BX*1)", "(SI)(BX*1)"},
{"(SI)(DX*1)", "(SI)(DX*1)"},
{"(SP)", "(SP)"},
+ {"(SP)(AX*4)", "(SP)(AX*4)"},
+ {"32(SP)(BX*2)", "32(SP)(BX*2)"},
+ {"32323(SP)(R8*4)", "32323(SP)(R8*4)"},
{"+3(PC)", "3(PC)"},
{"-1(DI)(BX*1)", "-1(DI)(BX*1)"},
{"-3(PC)", "-3(PC)"},
p.errorf("illegal use of register list")
}
p.registerList(a)
- p.expect(scanner.EOF)
+ p.expectOperandEnd()
return true
}
}
}
// fmt.Printf("REG %s\n", obj.Dconv(&emptyProg, 0, a))
- p.expect(scanner.EOF)
+ p.expectOperandEnd()
return true
}
a.Type = obj.TYPE_FCONST
a.Val = p.floatExpr()
// fmt.Printf("FCONST %s\n", obj.Dconv(&emptyProg, 0, a))
- p.expect(scanner.EOF)
+ p.expectOperandEnd()
return true
}
if p.have(scanner.String) {
a.Type = obj.TYPE_SCONST
a.Val = str
// fmt.Printf("SCONST %s\n", obj.Dconv(&emptyProg, 0, a))
- p.expect(scanner.EOF)
+ p.expectOperandEnd()
return true
}
a.Offset = int64(p.expr())
a.Type = obj.TYPE_MEM
}
// fmt.Printf("CONST %d %s\n", a.Offset, obj.Dconv(&emptyProg, 0, a))
- p.expect(scanner.EOF)
+ p.expectOperandEnd()
return true
}
// fmt.Printf("offset %d \n", a.Offset)
p.registerIndirect(a, prefix)
// fmt.Printf("DONE %s\n", p.arch.Dconv(&emptyProg, 0, a))
- p.expect(scanner.EOF)
+ p.expectOperandEnd()
return true
}
// get verifies that the next item has the expected type and returns it.
func (p *Parser) get(expected lex.ScanToken) lex.Token {
- p.expect(expected)
+ p.expect(expected, expected.String())
return p.next()
}
+// expectOperandEnd verifies that the parsing state is properly at the end of an operand.
+func (p *Parser) expectOperandEnd() {
+ p.expect(scanner.EOF, "end of operand")
+}
+
// expect verifies that the next item has the expected type. It does not consume it.
-func (p *Parser) expect(expected lex.ScanToken) {
- if p.peek() != expected {
- p.errorf("expected %s, found %s", expected, p.next())
+func (p *Parser) expect(expectedToken lex.ScanToken, expectedMessage string) {
+ if p.peek() != expectedToken {
+ p.errorf("expected %s, found %s", expectedMessage, p.next())
}
}
{"TEXT", "%", "expect two or three operands for TEXT"},
{"TEXT", "1, 1", "TEXT symbol \"<erroneous symbol>\" must be a symbol(SB)"},
{"TEXT", "$\"foo\", 0, $1", "TEXT symbol \"<erroneous symbol>\" must be a symbol(SB)"},
- {"TEXT", "$0É:0, 0, $1", "expected EOF, found É"}, // Issue #12467.
- {"TEXT", "$:0:(SB, 0, $1", "expected '(', found 0"}, // Issue 12468.
+ {"TEXT", "$0É:0, 0, $1", "expected end of operand, found É"}, // Issue #12467.
+ {"TEXT", "$:0:(SB, 0, $1", "expected '(', found 0"}, // Issue 12468.
{"FUNCDATA", "", "expect two operands for FUNCDATA"},
{"FUNCDATA", "(SB ", "expect two operands for FUNCDATA"},
{"DATA", "", "expect two operands for DATA"},
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
MOVNTDQ X1, (AX) // MOVNTO X1, (AX)
MOVOA (AX), X1 // MOVO (AX), X1
+// Tests for SP indexed addresses.
+ MOVQ foo(SP)(AX*1), BX // 488b1c04
+ MOVQ foo+32(SP)(CX*2), DX // 488b544c20
+ MOVQ foo+32323(SP)(R8*4), R9 // 4e8b8c84437e0000
+ MOVL foo(SP)(SI*8), DI // 8b3cf4
+ MOVL foo+32(SP)(R10*1), R11 // 468b5c1420
+ MOVL foo+32323(SP)(R12*2), R13 // 468bac64437e0000
+ MOVW foo(SP)(AX*4), R8 // 66448b0484
+ MOVW foo+32(SP)(R9*8), CX // 66428b4ccc20
+ MOVW foo+32323(SP)(AX*1), DX // 668b9404437e0000
+ MOVB foo(SP)(AX*2), AL // 8a0444
+ MOVB foo+32(SP)(CX*4), AH // 8a648c20
+ MOVB foo+32323(SP)(CX*8), R9 // 448a8ccc437e0000
+
// LTYPE0 nonnon { outcode($1, &$2); }
RET // c3
-// Copyright 2016 The Go Authors. All rights reserved.
+// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
TEXT errors(SB),$0
- MOVL foo<>(SB)(AX), AX // ERROR "invalid instruction"
+ MOVL foo<>(SB)(AX), AX // ERROR "invalid instruction"
+ MOVL (AX)(SP*1), AX // ERROR "invalid instruction"
RET
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// LSTXR reg ',' addr ',' reg
// {
-// outtcode($1, &$2, &$4, &$6);
+// outcode($1, &$2, &$4, &$6);
// }
LDAXRW (R0), R2
STLXRW R1, (R0), R3
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
TrimPath = flag.String("trimpath", "", "remove prefix from recorded source file paths")
Shared = flag.Bool("shared", false, "generate code that can be linked into a shared library")
Dynlink = flag.Bool("dynlink", false, "support references to Go symbols defined in other shared libraries")
- AllErrors = flag.Bool("e", false, "no limit on number of errors reported")
+ AllErrors = flag.Bool("e", false, "no limit on number of errors reported")
)
var (
)
func init() {
- flag.Var(&D, "D", "predefined symbol with optional simple value -D=identifer=value; can be set multiple times")
+ flag.Var(&D, "D", "predefined symbol with optional simple value -D=identifier=value; can be set multiple times")
flag.Var(&I, "I", "include directory; can be set multiple times")
}
peekText string
}
-// NewInput returns a
+// NewInput returns an Input from the given path.
func NewInput(name string) *Input {
return &Input{
// include directories: look in source dir, then -I directories.
input := NewInput(name)
fd, err := os.Open(name)
if err != nil {
- log.Fatalf("asm: %s\n", err)
+ log.Fatalf("%s\n", err)
}
input.Push(NewTokenizer(name, fd, fd))
return input
architecture := arch.Set(GOARCH)
if architecture == nil {
- log.Fatalf("asm: unrecognized architecture %s", GOARCH)
+ log.Fatalf("unrecognized architecture %s", GOARCH)
}
flags.Parse()
obj.Writeobjdirect(ctxt, output)
}
if !ok || diag {
- log.Printf("asm: assembly of %s failed", flag.Arg(0))
+ log.Printf("assembly of %s failed", flag.Arg(0))
os.Remove(*flags.OutputFile)
os.Exit(1)
}
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// ReadGo populates f with information learned from reading the
-// Go source file with the given file name. It gathers the C preamble
+// Go source file with the given file name. It gathers the C preamble
// attached to the import "C" comment, a list of references to C.xxx,
// a list of exported functions, and the actual AST, to be rewritten and
// printed.
case *ast.ImportSpec:
case *ast.ValueSpec:
f.walk(&n.Type, "type", visit)
- f.walk(n.Values, "expr", visit)
+ if len(n.Names) == 2 && len(n.Values) == 1 {
+ f.walk(&n.Values[0], "as2", visit)
+ } else {
+ f.walk(n.Values, "expr", visit)
+ }
case *ast.TypeSpec:
f.walk(&n.Type, "type", visit)
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
"C? Go? Cgo!" for an introduction to using cgo:
https://golang.org/doc/articles/c_go_cgo.html.
-CFLAGS, CPPFLAGS, CXXFLAGS and LDFLAGS may be defined with pseudo #cgo
-directives within these comments to tweak the behavior of the C or C++
-compiler. Values defined in multiple directives are concatenated
+CFLAGS, CPPFLAGS, CXXFLAGS, FFLAGS and LDFLAGS may be defined with pseudo
+#cgo directives within these comments to tweak the behavior of the C, C++
+or Fortran compiler. Values defined in multiple directives are concatenated
together. The directive can include a list of build constraints limiting its
effect to systems satisfying one of the constraints
(see https://golang.org/pkg/go/build/#hdr-Build_Constraints for details about the constraint syntax).
// #include <png.h>
import "C"
-When building, the CGO_CFLAGS, CGO_CPPFLAGS, CGO_CXXFLAGS and
+When building, the CGO_CFLAGS, CGO_CPPFLAGS, CGO_CXXFLAGS, CGO_FFLAGS and
CGO_LDFLAGS environment variables are added to the flags derived from
these directives. Package-specific flags should be set using the
directives, not the environment variables, so that builds work in
All the cgo CPPFLAGS and CFLAGS directives in a package are concatenated and
used to compile C files in that package. All the CPPFLAGS and CXXFLAGS
directives in a package are concatenated and used to compile C++ files in that
-package. All the LDFLAGS directives in any package in the program are
-concatenated and used at link time. All the pkg-config directives are
-concatenated and sent to pkg-config simultaneously to add to each appropriate
-set of command-line flags.
+package. All the CPPFLAGS and FFLAGS directives in a package are concatenated
+and used to compile Fortran files in that package. All the LDFLAGS directives
+in any package in the program are concatenated and used at link time. All the
+pkg-config directives are concatenated and sent to pkg-config simultaneously
+to add to each appropriate set of command-line flags.
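Concretely, directives like the following (flag and library names purely illustrative) would feed the flag sets described above, with CPPFLAGS shared by the C, C++, and Fortran compiles:

```go
/*
#cgo CPPFLAGS: -DILLUSTRATIVE=1
#cgo CXXFLAGS: -std=c++11
#cgo FFLAGS: -ffree-form
#cgo LDFLAGS: -lillustrative
#cgo pkg-config: illustrative
*/
import "C"
```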
When the cgo directives are parsed, any occurrence of the string ${SRCDIR}
will be replaced by the absolute path to the directory containing the source
"C", it will look for other non-Go files in the directory and compile
them as part of the Go package. Any .c, .s, or .S files will be
compiled with the C compiler. Any .cc, .cpp, or .cxx files will be
-compiled with the C++ compiler. Any .h, .hh, .hpp, or .hxx files will
+compiled with the C++ compiler. Any .f, .F, .for, or .f90 files will be
+compiled with the Fortran compiler. Any .h, .hh, .hpp, or .hxx files will
not be compiled separately, but, if these header files are changed,
the C and C++ files will be recompiled. The default C and C++
compilers may be changed by the CC and CXX environment variables,
Go structs cannot embed fields with C types.
-Go code can not refer to zero-sized fields that occur at the end of
+Go code cannot refer to zero-sized fields that occur at the end of
non-empty C structs. To get the address of such a field (which is the
only operation you can do with a zero-sized field) you must take the
address of the struct and add the size of the struct.
C errno variable as an error (use _ to skip the result value if the
function returns void). For example:
- n, err := C.sqrt(-1)
+ n, err = C.sqrt(-1)
_, err := C.voidFunc()
+ var n, err = C.sqrt(1)
Calling C function pointers is currently not supported, however you can
declare Go variables which hold C function pointers and pass them
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
f.Preamble = strings.Join(linesOut, "\n")
}
-// addToFlag appends args to flag. All flags are later written out onto the
+// addToFlag appends args to flag. All flags are later written out onto the
// _cgo_flags file for the build system to use.
func (p *Package) addToFlag(flag string, args []string) {
p.CgoFlags[flag] = append(p.CgoFlags[flag], args...)
// Single quotes and double quotes are recognized to prevent splitting within the
// quoted region, and are removed from the resulting substrings. If a quote in s
// isn't closed err will be set and r will have the unclosed argument as the
-// last element. The backslash is used for escaping.
+// last element. The backslash is used for escaping.
//
// For example, the following string:
//
if isConst {
n.Kind = "const"
// Turn decimal into hex, just for consistency
- // with enum-derived constants. Otherwise
+ // with enum-derived constants. Otherwise
// in the cgo -godefs output half the constants
// are in hex and half are in whatever the #define used.
i, err := strconv.ParseInt(n.Define, 0, 64)
if nerrors > 0 {
// Check if compiling the preamble by itself causes any errors,
// because the messages we've printed out so far aren't helpful
- // to users debugging preamble mistakes. See issue 8442.
+ // to users debugging preamble mistakes. See issue 8442.
preambleErrors := p.gccErrors([]byte(f.Preamble))
if len(preambleErrors) > 0 {
error_(token.NoPos, "\n%s errors for preamble:\n%s", p.gccBaseCmd()[0], preambleErrors)
// being referred to as C.xxx.
func (p *Package) loadDWARF(f *File, names []*Name) {
// Extract the types from the DWARF section of an object
- // from a well-formed C program. Gcc only generates DWARF info
+ // from a well-formed C program. Gcc only generates DWARF info
// for symbols in the object file, so it is not enough to print the
// preamble and hope the symbols we care about will be there.
// Instead, emit
}
// Apple's LLVM-based gcc does not include the enumeration
- // names and values in its DWARF debug output. In case we're
+ // names and values in its DWARF debug output. In case we're
// using such a gcc, create a data block initialized with the values.
// We can read them out of the object file.
fmt.Fprintf(&b, "long long __cgodebug_data[] = {\n")
fmt.Fprintf(&b, "\t0,\n")
}
}
- // for the last entry, we can not use 0, otherwise
+ // for the last entry, we cannot use 0, otherwise
// in case all __cgodebug_data is zero initialized,
// LLVM-based gcc will place it in the __DATA.__common
// zero-filled section (our debug/macho doesn't support
}
}
-// rewriteCall rewrites one call to add pointer checks. We replace
+// rewriteCall rewrites one call to add pointer checks. We replace
// each pointer argument x with _cgoCheckPointer(x).(T).
func (p *Package) rewriteCall(f *File, call *ast.CallExpr, name *Name) {
for i, param := range name.FuncType.Params {
} else {
// In order for the type assertion to succeed,
// we need it to match the actual type of the
- // argument. The only type we have is the
- // type of the function parameter. We know
+ // argument. The only type we have is the
+ // type of the function parameter. We know
// that the argument type must be assignable
// to the function parameter type, or the code
// would not compile, but there is nothing
// requiring that the types be exactly the
- // same. Add a type conversion to the
+ // same. Add a type conversion to the
// argument so that the type assertion will
// succeed.
c.Args[0] = &ast.CallExpr{
return p.hasPointer(f, t, true)
}
-// hasPointer is used by needsPointerCheck. If top is true it returns
+// hasPointer is used by needsPointerCheck. If top is true it returns
// whether t is or contains a pointer that might point to a pointer.
// If top is false it returns whether t is or contains a pointer.
// f may be nil.
if goTypes[t.Name] != nil {
return false
}
- // We can't figure out the type. Conservative
+ // We can't figure out the type. Conservative
// approach is to assume it has a pointer.
return true
case *ast.SelectorExpr:
if name != nil && name.Kind == "type" && name.Type != nil && name.Type.Go != nil {
return p.hasPointer(f, name.Type.Go, top)
}
- // We can't figure out the type. Conservative
+ // We can't figure out the type. Conservative
// approach is to assume it has a pointer.
return true
default:
}
// checkAddrArgs tries to add arguments to the call of
-// _cgoCheckPointer when the argument is an address expression. We
+// _cgoCheckPointer when the argument is an address expression. We
// pass true to mean that the argument is an address operation of
// something other than a slice index, which means that it's only
// necessary to check the specific element pointed to, not the entire
-// object. This is for &s.f, where f is a field in a struct. We can
+// object. This is for &s.f, where f is a field in a struct. We can
// pass a slice or array, meaning that we should check the entire
// slice or array but need not check any other part of the object.
-// This is for &s.a[i], where we need to check all of a. However, we
+// This is for &s.a[i], where we need to check all of a. However, we
// only pass the slice or array if we can refer to it without side
// effects.
func (p *Package) checkAddrArgs(f *File, args []ast.Expr, x ast.Expr) []ast.Expr {
index, ok := u.X.(*ast.IndexExpr)
if !ok {
// This is the address of something that is not an
- // index expression. We only need to examine the
+ // index expression. We only need to examine the
// single value to which it points.
// TODO: what if true is shadowed?
return append(args, ast.NewIdent("true"))
func (p *Package) isType(t ast.Expr) bool {
switch t := t.(type) {
case *ast.SelectorExpr:
- if t.Sel.Name != "Pointer" {
- return false
- }
id, ok := t.X.(*ast.Ident)
if !ok {
return false
}
- return id.Name == "unsafe"
+ if id.Name == "unsafe" && t.Sel.Name == "Pointer" {
+ return true
+ }
+ if id.Name == "C" && typedef["_Ctype_"+t.Sel.Name] != nil {
+ return true
+ }
+ return false
case *ast.Ident:
// TODO: This ignores shadowing.
switch t.Name {
return false
}
-// unsafeCheckPointerName is given the Go version of a C type. If the
+// unsafeCheckPointerName is given the Go version of a C type. If the
// type uses unsafe.Pointer, we arrange to build a version of
-// _cgoCheckPointer that returns that type. This avoids using a type
-// assertion to unsafe.Pointer in our copy of user code. We return
+// _cgoCheckPointer that returns that type. This avoids using a type
+// assertion to unsafe.Pointer in our copy of user code. We return
// the name of the _cgoCheckPointer function we are going to build, or
// the empty string if the type does not use unsafe.Pointer.
func (p *Package) unsafeCheckPointerName(t ast.Expr) string {
// rewriteRef rewrites all the C.xxx references in f.AST to refer to the
// Go equivalents, now that we have figured out the meaning of all
-// the xxx. In *godefs mode, rewriteRef replaces the names
+// the xxx. In *godefs mode, rewriteRef replaces the names
// with full definitions instead of mangled names.
func (p *Package) rewriteRef(f *File) {
// Keep a list of all the functions, to remove the ones
// Now that we have all the name types filled in,
// scan through the Refs to identify the ones that
- // are trying to do a ,err call. Also check that
+ // are trying to do a ,err call. Also check that
// functions are only used in calls.
for _, r := range f.Ref {
if r.Name.Kind == "const" && r.Name.Const == "" {
f.Name[fpName] = name
}
r.Name = name
- // Rewrite into call to _Cgo_ptr to prevent assignments. The _Cgo_ptr
+ // Rewrite into call to _Cgo_ptr to prevent assignments. The _Cgo_ptr
// function is defined in out.go and simply returns its argument. See
// issue 7757.
expr = &ast.CallExpr{
for i := range f.Symtab.Syms {
s := &f.Symtab.Syms[i]
if isDebugData(s.Name) {
- // Found it. Now find data section.
+ // Found it. Now find data section.
if i := int(s.Sect) - 1; 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if sect.Addr <= s.Value && s.Value < sect.Addr+sect.Size {
for i := range symtab {
s := &symtab[i]
if isDebugData(s.Name) {
- // Found it. Now find data section.
+ // Found it. Now find data section.
if i := int(s.Section); 0 <= i && i < len(f.Sections) {
sect := f.Sections[i]
if sect.Addr <= s.Value && s.Value < sect.Addr+sect.Size {
}
// gccErrors runs gcc over the C program stdin and returns
-// the errors that gcc prints. That is, this function expects
+// the errors that gcc prints. That is, this function expects
// gcc to fail.
func (p *Package) gccErrors(stdin []byte) string {
// TODO(rsc): require failure
const signedDelta = 64
-// String returns the current type representation. Format arguments
+// String returns the current type representation. Format arguments
// are assembled within this method so that any changes in mutable
// values are taken into account.
func (tr *TypeRepr) String() string {
}
case *dwarf.TypedefType:
// C has much more relaxed rules than Go for
- // implicit type conversions. When the parameter
+ // implicit type conversions. When the parameter
// is type T defined as *X, simulate a little of the
// laxness of C by making the argument *X instead of T.
if ptr, ok := base(dt.Type).(*dwarf.PtrType); ok {
}
// Remember the C spelling, in case the struct
- // has __attribute__((unavailable)) on it. See issue 2888.
+ // has __attribute__((unavailable)) on it. See issue 2888.
t.Typedef = dt.Name
}
}
for i, f := range dtype.ParamType {
// gcc's DWARF generator outputs a single DotDotDotType parameter for
// function pointers that specify no parameters (e.g. void
- // (*__cgo_0)()). Treat this special case as void. This case is
+ // (*__cgo_0)()). Treat this special case as void. This case is
// invalid according to ISO C anyway (i.e. void (*__cgo_1)(...) is not
// legal).
if _, ok := f.(*dwarf.DotDotDotType); ok && i == 0 {
off := int64(0)
// Rename struct fields that happen to be named Go keywords into
- // _{keyword}. Create a map from C ident -> Go ident. The Go ident will
- // be mangled. Any existing identifier that already has the same name on
+ // _{keyword}. Create a map from C ident -> Go ident. The Go ident will
+ // be mangled. Any existing identifier that already has the same name on
// the C-side will cause the Go-mangled version to be prefixed with _.
// (e.g. in a struct with fields '_type' and 'type', the latter would be
// rendered as '__type' in Go).
// In godefs mode, if this field is a C11
// anonymous union then treat the first field in the
- // union as the field in the struct. This handles
+ // union as the field in the struct. This handles
// cases like the glibc <sys/resource.h> file; see
// issue 6677.
if *godefs {
// We can't permit that, because then the size of the Go
// struct will not be the same as the size of the C struct.
// Our only option in such a case is to remove the field,
- // which means that it can not be referenced from Go.
+ // which means that it cannot be referenced from Go.
for off > 0 && sizes[len(sizes)-1] == 0 {
n := len(sizes)
fld = fld[0 : n-1]
}
// fieldPrefix returns the prefix that should be removed from all the
-// field names when generating the C or Go code. For generated
+// field names when generating the C or Go code. For generated
// C, we leave the names as is (tv_sec, tv_usec), since that's what
// people are used to seeing in C. For generated Go code, such as
// package syscall's data structures, we drop a common prefix
for _, f := range fld {
for _, n := range f.Names {
// Ignore field names that don't have the prefix we're
- // looking for. It is common in C headers to have fields
+ // looking for. It is common in C headers to have fields
// named, say, _pad in an otherwise prefixed header.
// If the struct has 3 fields tv_sec, tv_usec, _pad1, then we
// still want to remove the tv_ prefix.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
if *dynobj != "" {
// cgo -dynimport is essentially a separate helper command
- // built into the cgo binary. It scans a gcc-produced executable
+ // built into the cgo binary. It scans a gcc-produced executable
// and dumps information about the imported symbols and the
- // imported libraries. The 'go build' rules for cgo prepare an
+ // imported libraries. The 'go build' rules for cgo prepare an
// appropriate executable and then use its import information
// instead of needing to make the linkers duplicate all the
// specialized knowledge gcc has about where to look for imported
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
fmt.Fprintf(fm, "char* _cgo_topofstack(void) { return (char*)0; }\n")
} else {
// If we're not importing runtime/cgo, we *are* runtime/cgo,
- // which provides these functions. We just need a prototype.
+ // which provides these functions. We just need a prototype.
fmt.Fprintf(fm, "void crosscall2(void(*fn)(void*, int), void *a, int c);\n")
fmt.Fprintf(fm, "void _cgo_wait_runtime_init_done();\n")
}
}
fmt.Fprint(fgo2, "\n")
+ fmt.Fprint(fgo2, "//go:cgo_unsafe_args\n")
conf.Fprint(fgo2, fset, d)
fmt.Fprint(fgo2, " {\n")
// Gcc output starts with the preamble.
fmt.Fprintf(fgcc, "%s\n", f.Preamble)
fmt.Fprintf(fgcc, "%s\n", gccProlog)
+ fmt.Fprintf(fgcc, "%s\n", tsanProlog)
for _, key := range nameKeys(f.Name) {
n := f.Name[key]
// Save the stack top for use below.
fmt.Fprintf(fgcc, "\tchar *stktop = _cgo_topofstack();\n")
}
+ fmt.Fprintf(fgcc, "\t_cgo_tsan_acquire();\n")
fmt.Fprintf(fgcc, "\t")
if t := n.FuncType.Result; t != nil {
fmt.Fprintf(fgcc, "__typeof__(a->r) r = ")
// the Go equivalents had good type params.
// However, our version of the type omits the magic
// words const and volatile, which can provoke
- // C compiler warnings. Silence them by casting
+ // C compiler warnings. Silence them by casting
// all pointers to void*. (Eventually that will produce
// other warnings.)
if c := t.C.String(); c[len(c)-1] == '*' {
fmt.Fprintf(fgcc, "a->p%d", i)
}
fmt.Fprintf(fgcc, ");\n")
+ fmt.Fprintf(fgcc, "\t_cgo_tsan_release();\n")
if n.FuncType.Result != nil {
// The cgo call may have caused a stack copy (via a callback).
// Adjust the return value pointer appropriately.
fmt.Fprintf(fgcc, "\n")
}
-// Write out a wrapper for a function when using gccgo. This is a
-// simple wrapper that just calls the real function. We only need a
+// Write out a wrapper for a function when using gccgo. This is a
+// simple wrapper that just calls the real function. We only need a
// wrapper to support static functions in the prologue--without a
// wrapper, we can't refer to the function, since the reference is in
// a different file.
}
fmt.Fprintf(fgcc, ")\n")
fmt.Fprintf(fgcc, "{\n")
+ if t := n.FuncType.Result; t != nil {
+ fmt.Fprintf(fgcc, "\t%s r;\n", t.C.String())
+ }
+ fmt.Fprintf(fgcc, "\t_cgo_tsan_acquire();\n")
fmt.Fprintf(fgcc, "\t")
if t := n.FuncType.Result; t != nil {
- fmt.Fprintf(fgcc, "return ")
+ fmt.Fprintf(fgcc, "r = ")
// Cast to void* to avoid warnings due to omitted qualifiers.
if c := t.C.String(); c[len(c)-1] == '*' {
fmt.Fprintf(fgcc, "(void*)")
fmt.Fprintf(fgcc, "p%d", i)
}
fmt.Fprintf(fgcc, ");\n")
+ fmt.Fprintf(fgcc, "\t_cgo_tsan_release();\n")
+ if t := n.FuncType.Result; t != nil {
+ fmt.Fprintf(fgcc, "\treturn ")
+ // Cast to void* to avoid warnings due to omitted qualifiers
+ // and explicit incompatible struct types.
+ if c := t.C.String(); c[len(c)-1] == '*' {
+ fmt.Fprintf(fgcc, "(void*)")
+ }
+ fmt.Fprintf(fgcc, "r;\n")
+ }
fmt.Fprintf(fgcc, "}\n")
fmt.Fprintf(fgcc, "\n")
}
fmt.Fprintf(fgcc, "extern void crosscall2(void (*fn)(void *, int), void *, int);\n")
fmt.Fprintf(fgcc, "extern void _cgo_wait_runtime_init_done();\n\n")
+ fmt.Fprintf(fgcc, "%s\n", tsanProlog)
for _, exp := range p.ExpFunc {
fn := exp.Func
// Construct a gcc struct matching the gc argument and
- // result frame. The gcc struct will be compiled with
+ // result frame. The gcc struct will be compiled with
// __attribute__((packed)) so all padding must be accounted
// for explicitly.
ctype := "struct {\n"
func(i int, aname string, atype ast.Expr) {
fmt.Fprintf(fgcc, "\ta.p%d = p%d;\n", i, i)
})
+ fmt.Fprintf(fgcc, "\t_cgo_tsan_release();\n")
fmt.Fprintf(fgcc, "\tcrosscall2(_cgoexp%s_%s, &a, %d);\n", cPrefix, exp.ExpName, off)
+ fmt.Fprintf(fgcc, "\t_cgo_tsan_acquire();\n")
if gccResult != "void" {
if len(fntype.Results.List) == 1 && len(fntype.Results.List[0].Names) <= 1 {
fmt.Fprintf(fgcc, "\treturn a.r0;\n")
fmt.Fprintf(fgcc, "#include \"_cgo_export.h\"\n")
fmt.Fprintf(fgcc, "%s\n", gccgoExportFileProlog)
+ fmt.Fprintf(fgcc, "%s\n", tsanProlog)
for _, exp := range p.ExpFunc {
fn := exp.Func
fmt.Fprint(fgcc, "\n")
fmt.Fprintf(fgcc, "%s %s %s {\n", cRet, exp.ExpName, cParams)
+ if resultCount > 0 {
+ fmt.Fprintf(fgcc, "\t%s r;\n", cRet)
+ }
fmt.Fprintf(fgcc, "\tif(_cgo_wait_runtime_init_done)\n")
fmt.Fprintf(fgcc, "\t\t_cgo_wait_runtime_init_done();\n")
+ fmt.Fprintf(fgcc, "\t_cgo_tsan_release();\n")
fmt.Fprint(fgcc, "\t")
if resultCount > 0 {
- fmt.Fprint(fgcc, "return ")
+ fmt.Fprint(fgcc, "r = ")
}
fmt.Fprintf(fgcc, "%s(", goName)
if fn.Recv != nil {
fmt.Fprintf(fgcc, "p%d", i)
})
fmt.Fprint(fgcc, ");\n")
+ fmt.Fprintf(fgcc, "\t_cgo_tsan_acquire();\n")
+ if resultCount > 0 {
+ fmt.Fprint(fgcc, "\treturn r;\n")
+ }
fmt.Fprint(fgcc, "}\n")
// Dummy declaration for _cgo_main.c
#include <string.h>
`
+// Prologue defining TSAN functions in C.
+const tsanProlog = `
+#define _cgo_tsan_acquire()
+#define _cgo_tsan_release()
+#if defined(__has_feature)
+#if __has_feature(thread_sanitizer)
+#undef _cgo_tsan_acquire
+#undef _cgo_tsan_release
+
+long long _cgo_sync __attribute__ ((common));
+
+extern void __tsan_acquire(void*);
+extern void __tsan_release(void*);
+
+static void _cgo_tsan_acquire() {
+ __tsan_acquire(&_cgo_sync);
+}
+
+static void _cgo_tsan_release() {
+ __tsan_release(&_cgo_sync);
+}
+#endif
+#endif
+`
+
const builtinProlog = `
#include <stddef.h> /* for ptrdiff_t and size_t below */
`
const gccgoGoProlog = `
+//extern runtime.cgoCheckPointer
func _cgoCheckPointer(interface{}, ...interface{}) interface{}
+//extern runtime.cgoCheckResult
func _cgoCheckResult(interface{})
`
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
"bytes"
"fmt"
"go/token"
+ "io/ioutil"
"os"
"os/exec"
)
// It returns the output to standard output and standard error.
// ok indicates whether the command exited successfully.
func run(stdin []byte, argv []string) (stdout, stderr []byte, ok bool) {
+ if i := find(argv, "-xc"); i >= 0 && argv[len(argv)-1] == "-" {
+ // Some compilers have trouble with standard input.
+ // Others have trouble with -xc.
+ // Avoid both problems by writing a file with a .c extension.
+ f, err := ioutil.TempFile("", "cgo-gcc-input-")
+ if err != nil {
+ fatalf("%s", err)
+ }
+ name := f.Name()
+ f.Close()
+ if err := ioutil.WriteFile(name+".c", stdin, 0666); err != nil {
+ os.Remove(name)
+ fatalf("%s", err)
+ }
+ defer os.Remove(name)
+ defer os.Remove(name + ".c")
+
+ // Build new argument list without -xc and trailing -.
+ new := append(argv[:i:i], argv[i+1:len(argv)-1]...)
+
+ // Since we are going to write the file to a temporary directory,
+ // we will need to add -I . explicitly to the command line:
+ // any #include "foo" before would have looked in the current
+ // directory as the directory "holding" standard input, but now
+ // the temporary directory holds the input.
+ // We've also run into compilers that reject "-I." but allow "-I", ".",
+ // so be sure to use two arguments.
+ // This matters mainly for people invoking cgo -godefs by hand.
+ new = append(new, "-I", ".")
+
+ // Finish argument list with path to C file.
+ new = append(new, name+".c")
+
+ argv = new
+ stdin = nil
+ }
+
p := exec.Command(argv[0], argv[1:]...)
p.Stdin = bytes.NewReader(stdin)
var bout, berr bytes.Buffer
return
}
+func find(argv []string, target string) int {
+ for i, arg := range argv {
+ if arg == target {
+ return i
+ }
+ }
+ return -1
+}
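The temporary-file workaround above can be exercised in isolation. The sketch below is a standalone program, not part of cgo; `rewriteArgs` and the file name are hypothetical stand-ins. It reproduces the argument rewriting: drop `-xc` and the trailing `-`, add `-I` and `.` as two arguments, and append the generated `.c` path.

```go
package main

import "fmt"

// find returns the index of target in argv, or -1 (mirrors the helper above).
func find(argv []string, target string) int {
	for i, arg := range argv {
		if arg == target {
			return i
		}
	}
	return -1
}

// rewriteArgs mimics the rewriting in run: given a compiler command that
// reads C from standard input via "-xc ... -", it strips those two flags
// and substitutes an explicit .c file plus "-I", ".".
func rewriteArgs(argv []string, cfile string) []string {
	i := find(argv, "-xc")
	if i < 0 || argv[len(argv)-1] != "-" {
		return argv
	}
	// The three-index slice caps argv[:i], so append copies rather
	// than clobbering the caller's slice.
	out := append(argv[:i:i], argv[i+1:len(argv)-1]...)
	out = append(out, "-I", ".")
	return append(out, cfile)
}

func main() {
	argv := []string{"gcc", "-xc", "-o", "out", "-"}
	fmt.Println(rewriteArgs(argv, "cgo-gcc-input-123.c"))
}
```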
+
func lineno(pos token.Pos) string {
return fset.Position(pos).String()
}
// Die with an error message.
func fatalf(msg string, args ...interface{}) {
// If we've already printed other errors, they might have
- // caused the fatal condition. Assume they're enough.
+ // caused the fatal condition. Assume they're enough.
if nerrors == 0 {
fmt.Fprintf(os.Stderr, msg+"\n", args...)
}
"cmd/internal/obj/x86"
)
-func defframe(ptxt *obj.Prog) {
- var n *gc.Node
+// no floating point in note handlers on Plan 9
+var isPlan9 = obj.Getgoos() == "plan9"
+func defframe(ptxt *obj.Prog) {
// fill in argument size, stack size
ptxt.To.Type = obj.TYPE_TEXTSIZE
x0 := uint32(0)
// iterate through declarations - they are sorted in decreasing xoffset order.
- for l := gc.Curfn.Func.Dcl; l != nil; l = l.Next {
- n = l.N
+ for _, n := range gc.Curfn.Func.Dcl {
if !n.Name.Needzero {
continue
}
*ax = 1
}
p = appendpp(p, x86.AMOVQ, obj.TYPE_REG, x86.REG_AX, 0, obj.TYPE_MEM, x86.REG_SP, frame+lo)
- } else if cnt <= int64(8*gc.Widthreg) {
+ } else if !isPlan9 && cnt <= int64(8*gc.Widthreg) {
if *x0 == 0 {
p = appendpp(p, x86.AXORPS, obj.TYPE_REG, x86.REG_X0, 0, obj.TYPE_REG, x86.REG_X0, 0)
*x0 = 1
if cnt%16 != 0 {
p = appendpp(p, x86.AMOVUPS, obj.TYPE_REG, x86.REG_X0, 0, obj.TYPE_MEM, x86.REG_SP, frame+lo+cnt-int64(16))
}
- } else if !gc.Nacl && (cnt <= int64(128*gc.Widthreg)) {
+ } else if !gc.Nacl && !isPlan9 && (cnt <= int64(128*gc.Widthreg)) {
if *x0 == 0 {
p = appendpp(p, x86.AXORPS, obj.TYPE_REG, x86.REG_X0, 0, obj.TYPE_REG, x86.REG_X0, 0)
*x0 = 1
}
-
p = appendpp(p, leaptr, obj.TYPE_MEM, x86.REG_SP, frame+lo+dzDI(cnt), obj.TYPE_REG, x86.REG_DI, 0)
p = appendpp(p, obj.ADUFFZERO, obj.TYPE_NONE, 0, 0, obj.TYPE_ADDR, 0, dzOff(cnt))
p.To.Sym = gc.Linksym(gc.Pkglookup("duffzero", gc.Runtimepkg))
w := nl.Type.Width
- if w > 1024 || (gc.Nacl && w >= 64) {
+ if w > 1024 || (w >= 64 && (gc.Nacl || isPlan9)) {
var oldn1 gc.Node
var n1 gc.Node
savex(x86.REG_DI, &n1, &oldn1, nil, gc.Types[gc.Tptr])
}
func clearfat_tail(n1 *gc.Node, b int64) {
+ if b >= 16 && isPlan9 {
+ var z gc.Node
+ gc.Nodconst(&z, gc.Types[gc.TUINT64], 0)
+ q := b / 8
+ for ; q > 0; q-- {
+ n1.Type = z.Type
+ gins(x86.AMOVQ, &z, n1)
+ n1.Xoffset += 8
+ b -= 8
+ }
+ if b != 0 {
+ n1.Xoffset -= 8 - b
+ gins(x86.AMOVQ, &z, n1)
+ }
+ return
+ }
if b >= 16 {
var vec_zero gc.Node
gc.Regalloc(&vec_zero, gc.Types[gc.TFLOAT64], nil)
// MOVLQZX removal.
// The MOVLQZX exists to avoid being confused for a
// MOVL that is just copying 32-bit data around during
- // copyprop. Now that copyprop is done, remov MOVLQZX R1, R2
+ // copyprop. Now that copyprop is done, remove MOVLQZX R1, R2
// if it is dominated by an earlier ADDL/MOVL/etc into R1 that
// will have already cleared the high bits.
//
// MOVSD removal.
// We never use packed registers, so a MOVSD between registers
// can be replaced by MOVAPD, which moves the pair of float64s
- // instead of just the lower one. We only use the lower one, but
+ // instead of just the lower one. We only use the lower one, but
// the processor can do better if we do moves using both.
for r := (*gc.Flow)(g.Start); r != nil; r = r.Link {
p = r.Prog
}
if regtyp(&p.From) || p.From.Type == obj.TYPE_CONST {
- // move or artihmetic into partial register.
+ // move or arithmetic into partial register.
// from another register or constant can be movl.
// we don't switch to 64-bit arithmetic if it can
// change how the carry bit is set (and the carry bit is needed).
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
x86.AJPL: {Flags: gc.Cjmp | gc.UseCarry},
x86.AJPS: {Flags: gc.Cjmp | gc.UseCarry},
obj.AJMP: {Flags: gc.Jump | gc.Break | gc.KillCarry},
+ x86.ALEAW: {Flags: gc.LeftAddr | gc.RightWrite},
x86.ALEAL: {Flags: gc.LeftAddr | gc.RightWrite},
x86.ALEAQ: {Flags: gc.LeftAddr | gc.RightWrite},
x86.AMOVBLSX: {Flags: gc.SizeL | gc.LeftRead | gc.RightWrite | gc.Conv},
x86.AORW: {Flags: gc.SizeW | gc.LeftRead | RightRdwr | gc.SetCarry},
x86.APOPQ: {Flags: gc.SizeQ | gc.RightWrite},
x86.APUSHQ: {Flags: gc.SizeQ | gc.LeftRead},
+ x86.APXOR: {Flags: gc.SizeD | gc.LeftRead | RightRdwr},
x86.ARCLB: {Flags: gc.SizeB | gc.LeftRead | RightRdwr | gc.ShiftCX | gc.SetCarry | gc.UseCarry},
x86.ARCLL: {Flags: gc.SizeL | gc.LeftRead | RightRdwr | gc.ShiftCX | gc.SetCarry | gc.UseCarry},
x86.ARCLQ: {Flags: gc.SizeQ | gc.LeftRead | RightRdwr | gc.ShiftCX | gc.SetCarry | gc.UseCarry},
var ah gc.Node
gc.Regalloc(&ah, hi1.Type, nil)
- // Do op. Leave result in ah:al.
+ // Do op. Leave result in ah:al.
switch n.Op {
default:
gc.Fatalf("cgen64: not implemented: %v\n", n)
)
func defframe(ptxt *obj.Prog) {
- var n *gc.Node
-
// fill in argument size, stack size
ptxt.To.Type = obj.TYPE_TEXTSIZE
hi := int64(0)
lo := hi
r0 := uint32(0)
- for l := gc.Curfn.Func.Dcl; l != nil; l = l.Next {
- n = l.N
+ for _, n := range gc.Curfn.Func.Dcl {
if !n.Name.Needzero {
continue
}
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
arm.AMOVH: {Flags: gc.SizeW | gc.LeftRead | gc.RightWrite | gc.Move},
arm.AMOVW: {Flags: gc.SizeL | gc.LeftRead | gc.RightWrite | gc.Move},
- // In addtion, duffzero reads R0,R1 and writes R1. This fact is
+ // In addition, duffzero reads R0,R1 and writes R1. This fact is
// encoded in peep.c
obj.ADUFFZERO: {Flags: gc.Call},
- // In addtion, duffcopy reads R1,R2 and writes R0,R1,R2. This fact is
+ // In addition, duffcopy reads R1,R2 and writes R0,R1,R2. This fact is
// encoded in peep.c
obj.ADUFFCOPY: {Flags: gc.Call},
// TODO(austin): Instead of generating ADD $-8,R8; ADD
// $-8,R7; n*(MOVDU 8(R8),R9; MOVDU R9,8(R7);) just
// generate the offsets directly and eliminate the
- // ADDs. That will produce shorter, more
+ // ADDs. That will produce shorter, more
// pipeline-able code.
var p *obj.Prog
for ; c > 0; c-- {
)
func defframe(ptxt *obj.Prog) {
- var n *gc.Node
-
// fill in argument size, stack size
ptxt.To.Type = obj.TYPE_TEXTSIZE
lo := hi
// iterate through declarations - they are sorted in decreasing xoffset order.
- for l := gc.Curfn.Func.Dcl; l != nil; l = l.Next {
- n = l.N
+ for _, n := range gc.Curfn.Func.Dcl {
if !n.Name.Needzero {
continue
}
// The hardware will generate undefined result.
// Also need to explicitly trap on division on zero,
// the hardware will silently generate undefined result.
- // DIVW will leave unpredicable result in higher 32-bit,
+ // DIVW will leave unpredictable result in higher 32-bit,
// so always use DIVD/DIVDU.
t := nl.Type
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
}
-// Individual bitLen tests. Numbers chosen to examine both sides
+// Individual bitLen tests. Numbers chosen to examine both sides
// of powers-of-two boundaries.
func BenchmarkBitLen0(b *testing.B) { benchmarkBitLenN(b, 0) }
func BenchmarkBitLen1(b *testing.B) { benchmarkBitLenN(b, 1) }
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package big_test
+
+import (
+ "cmd/compile/internal/big"
+ "fmt"
+)
+
+// Use the classic continued fraction for e
+// e = [1; 0, 1, 1, 2, 1, 1, ... 2n, 1, 1, ...]
+// i.e., for the nth term, use
+// 1 if n mod 3 != 1
+// (n-1)/3 * 2 if n mod 3 == 1
+func recur(n, lim int64) *big.Rat {
+ term := new(big.Rat)
+ if n%3 != 1 {
+ term.SetInt64(1)
+ } else {
+ term.SetInt64((n - 1) / 3 * 2)
+ }
+
+ if n > lim {
+ return term
+ }
+
+ // Directly initialize frac as the fractional
+ // inverse of the result of recur.
+ frac := new(big.Rat).Inv(recur(n+1, lim))
+
+ return term.Add(term, frac)
+}
+
+// This example demonstrates how to use big.Rat to compute the
+// first 15 terms in the sequence of rational convergents for
+// the constant e (base of natural logarithm).
+func Example_eConvergents() {
+ for i := 1; i <= 15; i++ {
+ r := recur(0, int64(i))
+
+ // Print r both as a fraction and as a floating-point number.
+ // Since big.Rat implements fmt.Formatter, we can use %-13s to
+ // get a left-aligned string representation of the fraction.
+ fmt.Printf("%-13s = %s\n", r, r.FloatString(8))
+ }
+
+ // Output:
+ // 2/1 = 2.00000000
+ // 3/1 = 3.00000000
+ // 8/3 = 2.66666667
+ // 11/4 = 2.75000000
+ // 19/7 = 2.71428571
+ // 87/32 = 2.71875000
+ // 106/39 = 2.71794872
+ // 193/71 = 2.71830986
+ // 1264/465 = 2.71827957
+ // 1457/536 = 2.71828358
+ // 2721/1001 = 2.71828172
+ // 23225/8544 = 2.71828184
+ // 25946/9545 = 2.71828182
+ // 49171/18089 = 2.71828183
+ // 517656/190435 = 2.71828183
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file implements encoding/decoding of Floats.
+
+package big
+
+import "fmt"
+
+// MarshalText implements the encoding.TextMarshaler interface.
+// Only the Float value is marshaled (in full precision), other
+// attributes such as precision or accuracy are ignored.
+func (x *Float) MarshalText() (text []byte, err error) {
+ if x == nil {
+ return []byte("<nil>"), nil
+ }
+ var buf []byte
+ return x.Append(buf, 'g', -1), nil
+}
+
+// UnmarshalText implements the encoding.TextUnmarshaler interface.
+// The result is rounded per the precision and rounding mode of z.
+// If z's precision is 0, it is changed to 64 before rounding takes
+// effect.
+func (z *Float) UnmarshalText(text []byte) error {
+ // TODO(gri): get rid of the []byte/string conversion
+ _, _, err := z.Parse(string(text), 0)
+ if err != nil {
+ err = fmt.Errorf("math/big: cannot unmarshal %q into a *big.Float (%v)", text, err)
+ }
+ return err
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package big
+
+import (
+ "encoding/json"
+ "testing"
+)
+
+var floatVals = []string{
+ "0",
+ "1",
+ "0.1",
+ "2.71828",
+ "1234567890",
+ "3.14e1234",
+ "3.14e-1234",
+ "0.738957395793475734757349579759957975985497e100",
+ "0.73895739579347546656564656573475734957975995797598589749859834759476745986795497e100",
+ "inf",
+ "Inf",
+}
+
+func TestFloatJSONEncoding(t *testing.T) {
+ for _, test := range floatVals {
+ for _, sign := range []string{"", "+", "-"} {
+ for _, prec := range []uint{0, 1, 2, 10, 53, 64, 100, 1000} {
+ x := sign + test
+ var tx Float
+ _, _, err := tx.SetPrec(prec).Parse(x, 0)
+ if err != nil {
+ t.Errorf("parsing of %s (prec = %d) failed (invalid test case): %v", x, prec, err)
+ continue
+ }
+ b, err := json.Marshal(&tx)
+ if err != nil {
+ t.Errorf("marshaling of %v (prec = %d) failed: %v", &tx, prec, err)
+ continue
+ }
+ var rx Float
+ rx.SetPrec(prec)
+ if err := json.Unmarshal(b, &rx); err != nil {
+ t.Errorf("unmarshaling of %v (prec = %d) failed: %v", &tx, prec, err)
+ continue
+ }
+ if rx.Cmp(&tx) != 0 {
+ t.Errorf("JSON encoding of %v (prec = %d) failed: got %v want %v", &tx, prec, &rx, &tx)
+ }
+ }
+ }
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file implements encoding/decoding of Ints.
+
+package big
+
+import "fmt"
+
+// Gob codec version. Permits backward-compatible changes to the encoding.
+const intGobVersion byte = 1
+
+// GobEncode implements the gob.GobEncoder interface.
+func (x *Int) GobEncode() ([]byte, error) {
+ if x == nil {
+ return nil, nil
+ }
+ buf := make([]byte, 1+len(x.abs)*_S) // extra byte for version and sign bit
+ i := x.abs.bytes(buf) - 1 // i >= 0
+ b := intGobVersion << 1 // make space for sign bit
+ if x.neg {
+ b |= 1
+ }
+ buf[i] = b
+ return buf[i:], nil
+}
+
+// GobDecode implements the gob.GobDecoder interface.
+func (z *Int) GobDecode(buf []byte) error {
+ if len(buf) == 0 {
+ // Other side sent a nil or default value.
+ *z = Int{}
+ return nil
+ }
+ b := buf[0]
+ if b>>1 != intGobVersion {
+ return fmt.Errorf("Int.GobDecode: encoding version %d not supported", b>>1)
+ }
+ z.neg = b&1 != 0
+ z.abs = z.abs.setBytes(buf[1:])
+ return nil
+}
+
+// MarshalText implements the encoding.TextMarshaler interface.
+func (x *Int) MarshalText() (text []byte, err error) {
+ if x == nil {
+ return []byte("<nil>"), nil
+ }
+ return x.abs.itoa(x.neg, 10), nil
+}
+
+// UnmarshalText implements the encoding.TextUnmarshaler interface.
+func (z *Int) UnmarshalText(text []byte) error {
+ // TODO(gri): get rid of the []byte/string conversion
+ if _, ok := z.SetString(string(text), 0); !ok {
+ return fmt.Errorf("math/big: cannot unmarshal %q into a *big.Int", text)
+ }
+ return nil
+}
+
+// The JSON marshallers are only here for API backward compatibility
+// (programs that explicitly look for these two methods). JSON works
+// fine with the TextMarshaler only.
+
+// MarshalJSON implements the json.Marshaler interface.
+func (x *Int) MarshalJSON() ([]byte, error) {
+ return x.MarshalText()
+}
+
+// UnmarshalJSON implements the json.Unmarshaler interface.
+func (z *Int) UnmarshalJSON(text []byte) error {
+ return z.UnmarshalText(text)
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package big
+
+import (
+ "bytes"
+ "encoding/gob"
+ "encoding/json"
+ "encoding/xml"
+ "testing"
+)
+
+var encodingTests = []string{
+ "0",
+ "1",
+ "2",
+ "10",
+ "1000",
+ "1234567890",
+ "298472983472983471903246121093472394872319615612417471234712061",
+}
+
+func TestIntGobEncoding(t *testing.T) {
+ var medium bytes.Buffer
+ enc := gob.NewEncoder(&medium)
+ dec := gob.NewDecoder(&medium)
+ for _, test := range encodingTests {
+ for _, sign := range []string{"", "+", "-"} {
+ x := sign + test
+ medium.Reset() // empty buffer for each test case (in case of failures)
+ var tx Int
+ tx.SetString(x, 10)
+ if err := enc.Encode(&tx); err != nil {
+ t.Errorf("encoding of %s failed: %s", &tx, err)
+ continue
+ }
+ var rx Int
+ if err := dec.Decode(&rx); err != nil {
+ t.Errorf("decoding of %s failed: %s", &tx, err)
+ continue
+ }
+ if rx.Cmp(&tx) != 0 {
+ t.Errorf("transmission of %s failed: got %s want %s", &tx, &rx, &tx)
+ }
+ }
+ }
+}
+
+// Sending a nil Int pointer (inside a slice) on a round trip through gob should yield a zero.
+// TODO: top-level nils.
+func TestGobEncodingNilIntInSlice(t *testing.T) {
+ buf := new(bytes.Buffer)
+ enc := gob.NewEncoder(buf)
+ dec := gob.NewDecoder(buf)
+
+ var in = make([]*Int, 1)
+ err := enc.Encode(&in)
+ if err != nil {
+ t.Errorf("gob encode failed: %q", err)
+ }
+ var out []*Int
+ err = dec.Decode(&out)
+ if err != nil {
+ t.Fatalf("gob decode failed: %q", err)
+ }
+ if len(out) != 1 {
+ t.Fatalf("wrong len; want 1 got %d", len(out))
+ }
+ var zero Int
+ if out[0].Cmp(&zero) != 0 {
+ t.Fatalf("transmission of (*Int)(nil) failed: got %s want 0", out)
+ }
+}
+
+func TestIntJSONEncoding(t *testing.T) {
+ for _, test := range encodingTests {
+ for _, sign := range []string{"", "+", "-"} {
+ x := sign + test
+ var tx Int
+ tx.SetString(x, 10)
+ b, err := json.Marshal(&tx)
+ if err != nil {
+ t.Errorf("marshaling of %s failed: %s", &tx, err)
+ continue
+ }
+ var rx Int
+ if err := json.Unmarshal(b, &rx); err != nil {
+ t.Errorf("unmarshaling of %s failed: %s", &tx, err)
+ continue
+ }
+ if rx.Cmp(&tx) != 0 {
+ t.Errorf("JSON encoding of %s failed: got %s want %s", &tx, &rx, &tx)
+ }
+ }
+ }
+}
+
+func TestIntXMLEncoding(t *testing.T) {
+ for _, test := range encodingTests {
+ for _, sign := range []string{"", "+", "-"} {
+ x := sign + test
+ var tx Int
+ tx.SetString(x, 0)
+ b, err := xml.Marshal(&tx)
+ if err != nil {
+ t.Errorf("marshaling of %s failed: %s", &tx, err)
+ continue
+ }
+ var rx Int
+ if err := xml.Unmarshal(b, &rx); err != nil {
+ t.Errorf("unmarshaling of %s failed: %s", &tx, err)
+ continue
+ }
+ if rx.Cmp(&tx) != 0 {
+ t.Errorf("XML encoding of %s failed: got %s want %s", &tx, &rx, &tx)
+ }
+ }
+ }
+}
// x & -x leaves only the right-most bit set in the word. Let k be the
// index of that bit. Since only a single bit is set, the value is two
// to the power of k. Multiplying by a power of two is equivalent to
- // left shifting, in this case by k bits. The de Bruijn constant is
+ // left shifting, in this case by k bits. The de Bruijn constant is
// such that all six bit, consecutive substrings are distinct.
// Therefore, if we have a left shifted version of this constant we can
// find by how many bits it was shifted by looking at which six bit
for j := 0; j < _W; j += n {
if i != len(y)-1 || j != 0 {
// Unrolled loop for significant performance
- // gain. Use go test -bench=".*" in crypto/rsa
+ // gain. Use go test -bench=".*" in crypto/rsa
// to check performance before making changes.
zz = zz.mul(z, z)
zz, z = z, zz
// quotToFloat32 returns the non-negative float32 value
// nearest to the quotient a/b, using round-to-even in
-// halfway cases. It does not mutate its arguments.
+// halfway cases. It does not mutate its arguments.
// Preconditions: b is non-zero; a and b have no common factors.
func quotToFloat32(a, b nat) (f float32, exact bool) {
const (
// quotToFloat64 returns the non-negative float64 value
// nearest to the quotient a/b, using round-to-even in
-// halfway cases. It does not mutate its arguments.
+// halfway cases. It does not mutate its arguments.
// Preconditions: b is non-zero; a and b have no common factors.
func quotToFloat64(a, b nat) (f float64, exact bool) {
const (
)
func ratTok(ch rune) bool {
- return strings.IndexRune("+-/0123456789.eE", ch) >= 0
+ return strings.ContainsRune("+-/0123456789.eE", ch)
}
// Scan is a support routine for fmt.Scanner. It accepts the formats
if err != nil {
return err
}
- if strings.IndexRune("efgEFGv", ch) < 0 {
+ if !strings.ContainsRune("efgEFGv", ch) {
return errors.New("Rat.Scan: invalid verb")
}
if _, ok := z.SetString(string(tok)); !ok {
}
}
-// Test inputs to Rat.SetString. The prefix "long:" causes the test
+// Test inputs to Rat.SetString. The prefix "long:" causes the test
// to be skipped in --test.short mode. (The threshold is about 500us.)
var float64inputs = []string{
// Constants plundered from strconv/testfp.txt.
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file implements encoding/decoding of Rats.
+
+package big
+
+import (
+ "encoding/binary"
+ "errors"
+ "fmt"
+)
+
+// Gob codec version. Permits backward-compatible changes to the encoding.
+const ratGobVersion byte = 1
+
+// GobEncode implements the gob.GobEncoder interface.
+func (x *Rat) GobEncode() ([]byte, error) {
+ if x == nil {
+ return nil, nil
+ }
+ buf := make([]byte, 1+4+(len(x.a.abs)+len(x.b.abs))*_S) // extra bytes for version and sign bit (1), and numerator length (4)
+ i := x.b.abs.bytes(buf)
+ j := x.a.abs.bytes(buf[:i])
+ n := i - j
+ if int(uint32(n)) != n {
+ // this should never happen
+ return nil, errors.New("Rat.GobEncode: numerator too large")
+ }
+ binary.BigEndian.PutUint32(buf[j-4:j], uint32(n))
+ j -= 1 + 4
+ b := ratGobVersion << 1 // make space for sign bit
+ if x.a.neg {
+ b |= 1
+ }
+ buf[j] = b
+ return buf[j:], nil
+}
+
+// GobDecode implements the gob.GobDecoder interface.
+func (z *Rat) GobDecode(buf []byte) error {
+ if len(buf) == 0 {
+ // Other side sent a nil or default value.
+ *z = Rat{}
+ return nil
+ }
+ b := buf[0]
+ if b>>1 != ratGobVersion {
+ return fmt.Errorf("Rat.GobDecode: encoding version %d not supported", b>>1)
+ }
+ const j = 1 + 4
+ i := j + binary.BigEndian.Uint32(buf[j-4:j])
+ z.a.neg = b&1 != 0
+ z.a.abs = z.a.abs.setBytes(buf[j:i])
+ z.b.abs = z.b.abs.setBytes(buf[i:])
+ return nil
+}
+
+// MarshalText implements the encoding.TextMarshaler interface.
+func (x *Rat) MarshalText() (text []byte, err error) {
+ // TODO(gri): get rid of the []byte/string conversion
+ return []byte(x.RatString()), nil
+}
+
+// UnmarshalText implements the encoding.TextUnmarshaler interface.
+func (z *Rat) UnmarshalText(text []byte) error {
+ // TODO(gri): get rid of the []byte/string conversion
+ if _, ok := z.SetString(string(text)); !ok {
+ return fmt.Errorf("math/big: cannot unmarshal %q into a *big.Rat", text)
+ }
+ return nil
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package big
+
+import (
+ "bytes"
+ "encoding/gob"
+ "encoding/json"
+ "encoding/xml"
+ "testing"
+)
+
+func TestRatGobEncoding(t *testing.T) {
+ var medium bytes.Buffer
+ enc := gob.NewEncoder(&medium)
+ dec := gob.NewDecoder(&medium)
+ for _, test := range encodingTests {
+ medium.Reset() // empty buffer for each test case (in case of failures)
+ var tx Rat
+ tx.SetString(test + ".14159265")
+ if err := enc.Encode(&tx); err != nil {
+ t.Errorf("encoding of %s failed: %s", &tx, err)
+ continue
+ }
+ var rx Rat
+ if err := dec.Decode(&rx); err != nil {
+ t.Errorf("decoding of %s failed: %s", &tx, err)
+ continue
+ }
+ if rx.Cmp(&tx) != 0 {
+ t.Errorf("transmission of %s failed: got %s want %s", &tx, &rx, &tx)
+ }
+ }
+}
+
+// Sending a nil Rat pointer (inside a slice) on a round trip through gob should yield a zero.
+// TODO: top-level nils.
+func TestGobEncodingNilRatInSlice(t *testing.T) {
+ buf := new(bytes.Buffer)
+ enc := gob.NewEncoder(buf)
+ dec := gob.NewDecoder(buf)
+
+ var in = make([]*Rat, 1)
+ err := enc.Encode(&in)
+ if err != nil {
+ t.Errorf("gob encode failed: %q", err)
+ }
+ var out []*Rat
+ err = dec.Decode(&out)
+ if err != nil {
+ t.Fatalf("gob decode failed: %q", err)
+ }
+ if len(out) != 1 {
+ t.Fatalf("wrong len; want 1 got %d", len(out))
+ }
+ var zero Rat
+ if out[0].Cmp(&zero) != 0 {
+ t.Fatalf("transmission of (*Rat)(nil) failed: got %s want 0", out)
+ }
+}
+
+var ratNums = []string{
+ "-141592653589793238462643383279502884197169399375105820974944592307816406286",
+ "-1415926535897932384626433832795028841971",
+ "-141592653589793",
+ "-1",
+ "0",
+ "1",
+ "141592653589793",
+ "1415926535897932384626433832795028841971",
+ "141592653589793238462643383279502884197169399375105820974944592307816406286",
+}
+
+var ratDenoms = []string{
+ "1",
+ "718281828459045",
+ "7182818284590452353602874713526624977572",
+ "718281828459045235360287471352662497757247093699959574966967627724076630353",
+}
+
+func TestRatJSONEncoding(t *testing.T) {
+ for _, num := range ratNums {
+ for _, denom := range ratDenoms {
+ var tx Rat
+ tx.SetString(num + "/" + denom)
+ b, err := json.Marshal(&tx)
+ if err != nil {
+ t.Errorf("marshaling of %s failed: %s", &tx, err)
+ continue
+ }
+ var rx Rat
+ if err := json.Unmarshal(b, &rx); err != nil {
+ t.Errorf("unmarshaling of %s failed: %s", &tx, err)
+ continue
+ }
+ if rx.Cmp(&tx) != 0 {
+ t.Errorf("JSON encoding of %s failed: got %s want %s", &tx, &rx, &tx)
+ }
+ }
+ }
+}
+
+func TestRatXMLEncoding(t *testing.T) {
+ for _, num := range ratNums {
+ for _, denom := range ratDenoms {
+ var tx Rat
+ tx.SetString(num + "/" + denom)
+ b, err := xml.Marshal(&tx)
+ if err != nil {
+ t.Errorf("marshaling of %s failed: %s", &tx, err)
+ continue
+ }
+ var rx Rat
+ if err := xml.Unmarshal(b, &rx); err != nil {
+ t.Errorf("unmarshaling of %s failed: %s", &tx, err)
+ continue
+ }
+ if rx.Cmp(&tx) != 0 {
+ t.Errorf("XML encoding of %s failed: got %s want %s", &tx, &rx, &tx)
+ }
+ }
+ }
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package gc
+
+import "fmt"
+
+const (
+ // These values are known by runtime.
+ ANOEQ = iota
+ AMEM0
+ AMEM8
+ AMEM16
+ AMEM32
+ AMEM64
+ AMEM128
+ ASTRING
+ AINTER
+ ANILINTER
+ AFLOAT32
+ AFLOAT64
+ ACPLX64
+ ACPLX128
+ AMEM = 100
+)
+
+func algtype(t *Type) int {
+ a := algtype1(t, nil)
+ if a == AMEM {
+ switch t.Width {
+ case 0:
+ return AMEM0
+ case 1:
+ return AMEM8
+ case 2:
+ return AMEM16
+ case 4:
+ return AMEM32
+ case 8:
+ return AMEM64
+ case 16:
+ return AMEM128
+ }
+ }
+
+ return a
+}
+
+func algtype1(t *Type, bad **Type) int {
+ if bad != nil {
+ *bad = nil
+ }
+ if t.Broke {
+ return AMEM
+ }
+ if t.Noalg {
+ return ANOEQ
+ }
+
+ switch t.Etype {
+ // will be defined later.
+ case TANY, TFORW:
+ *bad = t
+
+ return -1
+
+ case TINT8,
+ TUINT8,
+ TINT16,
+ TUINT16,
+ TINT32,
+ TUINT32,
+ TINT64,
+ TUINT64,
+ TINT,
+ TUINT,
+ TUINTPTR,
+ TBOOL,
+ TPTR32,
+ TPTR64,
+ TCHAN,
+ TUNSAFEPTR:
+ return AMEM
+
+ case TFUNC, TMAP:
+ if bad != nil {
+ *bad = t
+ }
+ return ANOEQ
+
+ case TFLOAT32:
+ return AFLOAT32
+
+ case TFLOAT64:
+ return AFLOAT64
+
+ case TCOMPLEX64:
+ return ACPLX64
+
+ case TCOMPLEX128:
+ return ACPLX128
+
+ case TSTRING:
+ return ASTRING
+
+ case TINTER:
+ if isnilinter(t) {
+ return ANILINTER
+ }
+ return AINTER
+
+ case TARRAY:
+ if Isslice(t) {
+ if bad != nil {
+ *bad = t
+ }
+ return ANOEQ
+ }
+
+ a := algtype1(t.Type, bad)
+ if a == ANOEQ || a == AMEM {
+ if a == ANOEQ && bad != nil {
+ *bad = t
+ }
+ return a
+ }
+
+ switch t.Bound {
+ case 0:
+ // We checked above that the element type is comparable.
+ return AMEM
+ case 1:
+ // Single-element array is same as its lone element.
+ return a
+ }
+
+ return -1 // needs special compare
+
+ case TSTRUCT:
+ if t.Type != nil && t.Type.Down == nil && !isblanksym(t.Type.Sym) {
+ // One-field struct is same as that one field alone.
+ return algtype1(t.Type.Type, bad)
+ }
+
+ ret := AMEM
+ var a int
+ for t1 := t.Type; t1 != nil; t1 = t1.Down {
+ // All fields must be comparable.
+ a = algtype1(t1.Type, bad)
+
+ if a == ANOEQ {
+ return ANOEQ
+ }
+
+ // Blank fields, padded fields, fields with non-memory
+ // equality need special compare.
+ if a != AMEM || isblanksym(t1.Sym) || ispaddedfield(t1, t.Width) {
+ ret = -1
+ continue
+ }
+ }
+
+ return ret
+ }
+
+ Fatalf("algtype1: unexpected type %v", t)
+ return 0
+}
+
+// Generate a helper function to compute the hash of a value of type t.
+func genhash(sym *Sym, t *Type) {
+ if Debug['r'] != 0 {
+ fmt.Printf("genhash %v %v\n", sym, t)
+ }
+
+ lineno = 1 // less confusing than end of input
+ dclcontext = PEXTERN
+ markdcl()
+
+ // func sym(p *T, h uintptr) uintptr
+ fn := Nod(ODCLFUNC, nil, nil)
+
+ fn.Func.Nname = newname(sym)
+ fn.Func.Nname.Class = PFUNC
+ tfn := Nod(OTFUNC, nil, nil)
+ fn.Func.Nname.Name.Param.Ntype = tfn
+
+ n := Nod(ODCLFIELD, newname(Lookup("p")), typenod(Ptrto(t)))
+ tfn.List = list(tfn.List, n)
+ np := n.Left
+ n = Nod(ODCLFIELD, newname(Lookup("h")), typenod(Types[TUINTPTR]))
+ tfn.List = list(tfn.List, n)
+ nh := n.Left
+ n = Nod(ODCLFIELD, nil, typenod(Types[TUINTPTR])) // return value
+ tfn.Rlist = list(tfn.Rlist, n)
+
+ funchdr(fn)
+ typecheck(&fn.Func.Nname.Name.Param.Ntype, Etype)
+
+ // genhash is only called for types that have equality but
+ // cannot be handled by the standard algorithms,
+ // so t must be either an array or a struct.
+ switch t.Etype {
+ default:
+ Fatalf("genhash %v", t)
+
+ case TARRAY:
+ if Isslice(t) {
+ Fatalf("genhash %v", t)
+ }
+
+ // An array of pure memory would be handled by the
+ // standard algorithm, so the element type must not be
+ // pure memory.
+ hashel := hashfor(t.Type)
+
+ n := Nod(ORANGE, nil, Nod(OIND, np, nil))
+ ni := newname(Lookup("i"))
+ ni.Type = Types[TINT]
+ n.List = list1(ni)
+ n.Colas = true
+ colasdefn(n.List, n)
+ ni = n.List.N
+
+ // h = hashel(&p[i], h)
+ call := Nod(OCALL, hashel, nil)
+
+ nx := Nod(OINDEX, np, ni)
+ nx.Bounded = true
+ na := Nod(OADDR, nx, nil)
+ na.Etype = 1 // no escape to heap
+ call.List = list(call.List, na)
+ call.List = list(call.List, nh)
+ n.Nbody.Append(Nod(OAS, nh, call))
+
+ fn.Nbody.Append(n)
+
+ // Walk the struct using memhash for runs of AMEM
+ // and calling specific hash functions for the others.
+ case TSTRUCT:
+ var call *Node
+ var nx *Node
+ var na *Node
+ var hashel *Node
+
+ t1 := t.Type
+ for {
+ first, size, next := memrun(t, t1)
+ t1 = next
+
+ // Run memhash for fields up to this one.
+ if first != nil {
+ hashel = hashmem(first.Type)
+
+ // h = hashel(&p.first, size, h)
+ call = Nod(OCALL, hashel, nil)
+
+ nx = Nod(OXDOT, np, newname(first.Sym)) // TODO: fields from other packages?
+ na = Nod(OADDR, nx, nil)
+ na.Etype = 1 // no escape to heap
+ call.List = list(call.List, na)
+ call.List = list(call.List, nh)
+ call.List = list(call.List, Nodintconst(size))
+ fn.Nbody.Append(Nod(OAS, nh, call))
+ }
+
+ if t1 == nil {
+ break
+ }
+ if isblanksym(t1.Sym) {
+ t1 = t1.Down
+ continue
+ }
+ if algtype1(t1.Type, nil) == AMEM {
+ // Our memory run might have been stopped by padding or a blank field.
+ // If the next field is memory-ish, it could be the start of a new run.
+ continue
+ }
+
+ hashel = hashfor(t1.Type)
+ call = Nod(OCALL, hashel, nil)
+ nx = Nod(OXDOT, np, newname(t1.Sym)) // TODO: fields from other packages?
+ na = Nod(OADDR, nx, nil)
+ na.Etype = 1 // no escape to heap
+ call.List = list(call.List, na)
+ call.List = list(call.List, nh)
+ fn.Nbody.Append(Nod(OAS, nh, call))
+
+ t1 = t1.Down
+ }
+ }
+
+ r := Nod(ORETURN, nil, nil)
+ r.List = list(r.List, nh)
+ fn.Nbody.Append(r)
+
+ if Debug['r'] != 0 {
+ dumpslice("genhash body", fn.Nbody.Slice())
+ }
+
+ funcbody(fn)
+ Curfn = fn
+ fn.Func.Dupok = true
+ typecheck(&fn, Etop)
+ typecheckslice(fn.Nbody.Slice(), Etop)
+ Curfn = nil
+
+ // Disable safemode while compiling this code: the code we
+ // generate internally can refer to unsafe.Pointer.
+ // In this case it can happen if we need to generate an ==
+ // for a struct containing a reflect.Value, which itself has
+ // an unexported field of type unsafe.Pointer.
+ old_safemode := safemode
+
+ safemode = 0
+ funccompile(fn)
+ safemode = old_safemode
+}
+
+func hashfor(t *Type) *Node {
+ var sym *Sym
+
+ a := algtype1(t, nil)
+ switch a {
+ case AMEM:
+ Fatalf("hashfor with AMEM type")
+
+ case AINTER:
+ sym = Pkglookup("interhash", Runtimepkg)
+
+ case ANILINTER:
+ sym = Pkglookup("nilinterhash", Runtimepkg)
+
+ case ASTRING:
+ sym = Pkglookup("strhash", Runtimepkg)
+
+ case AFLOAT32:
+ sym = Pkglookup("f32hash", Runtimepkg)
+
+ case AFLOAT64:
+ sym = Pkglookup("f64hash", Runtimepkg)
+
+ case ACPLX64:
+ sym = Pkglookup("c64hash", Runtimepkg)
+
+ case ACPLX128:
+ sym = Pkglookup("c128hash", Runtimepkg)
+
+ default:
+ sym = typesymprefix(".hash", t)
+ }
+
+ n := newname(sym)
+ n.Class = PFUNC
+ tfn := Nod(OTFUNC, nil, nil)
+ tfn.List = list(tfn.List, Nod(ODCLFIELD, nil, typenod(Ptrto(t))))
+ tfn.List = list(tfn.List, Nod(ODCLFIELD, nil, typenod(Types[TUINTPTR])))
+ tfn.Rlist = list(tfn.Rlist, Nod(ODCLFIELD, nil, typenod(Types[TUINTPTR])))
+ typecheck(&tfn, Etype)
+ n.Type = tfn.Type
+ return n
+}
+
+// geneq generates a helper function to
+// check equality of two values of type t.
+func geneq(sym *Sym, t *Type) {
+ if Debug['r'] != 0 {
+ fmt.Printf("geneq %v %v\n", sym, t)
+ }
+
+ lineno = 1 // less confusing than end of input
+ dclcontext = PEXTERN
+ markdcl()
+
+ // func sym(p, q *T) bool
+ fn := Nod(ODCLFUNC, nil, nil)
+
+ fn.Func.Nname = newname(sym)
+ fn.Func.Nname.Class = PFUNC
+ tfn := Nod(OTFUNC, nil, nil)
+ fn.Func.Nname.Name.Param.Ntype = tfn
+
+ n := Nod(ODCLFIELD, newname(Lookup("p")), typenod(Ptrto(t)))
+ tfn.List = list(tfn.List, n)
+ np := n.Left
+ n = Nod(ODCLFIELD, newname(Lookup("q")), typenod(Ptrto(t)))
+ tfn.List = list(tfn.List, n)
+ nq := n.Left
+ n = Nod(ODCLFIELD, nil, typenod(Types[TBOOL]))
+ tfn.Rlist = list(tfn.Rlist, n)
+
+ funchdr(fn)
+
+ // geneq is only called for types that have equality but
+ // cannot be handled by the standard algorithms,
+ // so t must be either an array or a struct.
+ switch t.Etype {
+ default:
+ Fatalf("geneq %v", t)
+
+ case TARRAY:
+ if Isslice(t) {
+ Fatalf("geneq %v", t)
+ }
+
+ // An array of pure memory would be handled by the
+ // standard memequal, so the element type must not be
+ // pure memory. Even if we unrolled the range loop,
+ // each iteration would be a function call, so don't bother
+ // unrolling.
+ nrange := Nod(ORANGE, nil, Nod(OIND, np, nil))
+
+ ni := newname(Lookup("i"))
+ ni.Type = Types[TINT]
+ nrange.List = list1(ni)
+ nrange.Colas = true
+ colasdefn(nrange.List, nrange)
+ ni = nrange.List.N
+
+ // if p[i] != q[i] { return false }
+ nx := Nod(OINDEX, np, ni)
+
+ nx.Bounded = true
+ ny := Nod(OINDEX, nq, ni)
+ ny.Bounded = true
+
+ nif := Nod(OIF, nil, nil)
+ nif.Left = Nod(ONE, nx, ny)
+ r := Nod(ORETURN, nil, nil)
+ r.List = list(r.List, Nodbool(false))
+ nif.Nbody.Append(r)
+ nrange.Nbody.Append(nif)
+ fn.Nbody.Append(nrange)
+
+ // return true
+ ret := Nod(ORETURN, nil, nil)
+ ret.List = list(ret.List, Nodbool(true))
+ fn.Nbody.Append(ret)
+
+ // Walk the struct, using memequal for runs of AMEM
+ // and specific equality tests for the others.
+ // Skip blank-named fields.
+ case TSTRUCT:
+ var conjuncts []*Node
+
+ t1 := t.Type
+ for {
+ first, size, next := memrun(t, t1)
+ t1 = next
+
+ // Run memequal for fields up to this one.
+ // TODO(rsc): All the calls to newname are wrong for
+ // cross-package unexported fields.
+ if first != nil {
+ if first.Down == t1 {
+ conjuncts = append(conjuncts, eqfield(np, nq, newname(first.Sym)))
+ } else if first.Down.Down == t1 {
+ conjuncts = append(conjuncts, eqfield(np, nq, newname(first.Sym)))
+ first = first.Down
+ if !isblanksym(first.Sym) {
+ conjuncts = append(conjuncts, eqfield(np, nq, newname(first.Sym)))
+ }
+ } else {
+ // More than two fields: use memequal.
+ conjuncts = append(conjuncts, eqmem(np, nq, newname(first.Sym), size))
+ }
+ }
+
+ if t1 == nil {
+ break
+ }
+ if isblanksym(t1.Sym) {
+ t1 = t1.Down
+ continue
+ }
+ if algtype1(t1.Type, nil) == AMEM {
+ // Our memory run might have been stopped by padding or a blank field.
+ // If the next field is memory-ish, it could be the start of a new run.
+ continue
+ }
+
+ // Check this field, which is not just memory.
+ conjuncts = append(conjuncts, eqfield(np, nq, newname(t1.Sym)))
+ t1 = t1.Down
+ }
+
+ var and *Node
+ switch len(conjuncts) {
+ case 0:
+ and = Nodbool(true)
+ case 1:
+ and = conjuncts[0]
+ default:
+ and = Nod(OANDAND, conjuncts[0], conjuncts[1])
+ for _, conjunct := range conjuncts[2:] {
+ and = Nod(OANDAND, and, conjunct)
+ }
+ }
+
+ ret := Nod(ORETURN, nil, nil)
+ ret.List = list(ret.List, and)
+ fn.Nbody.Append(ret)
+ }
+
+ if Debug['r'] != 0 {
+ dumpslice("geneq body", fn.Nbody.Slice())
+ }
+
+ funcbody(fn)
+ Curfn = fn
+ fn.Func.Dupok = true
+ typecheck(&fn, Etop)
+ typecheckslice(fn.Nbody.Slice(), Etop)
+ Curfn = nil
+
+ // Disable safemode while compiling this code: the code we
+ // generate internally can refer to unsafe.Pointer.
+ // In this case it can happen if we need to generate an ==
+ // for a struct containing a reflect.Value, which itself has
+ // an unexported field of type unsafe.Pointer.
+ old_safemode := safemode
+ safemode = 0
+
+ // Disable checknils while compiling this code.
+ // We are comparing a struct or an array,
+ // neither of which can be nil, and our comparisons
+ // are shallow.
+ Disable_checknil++
+
+ funccompile(fn)
+
+ safemode = old_safemode
+ Disable_checknil--
+}
+
+// eqfield returns the node
+// p.field == q.field
+func eqfield(p *Node, q *Node, field *Node) *Node {
+ nx := Nod(OXDOT, p, field)
+ ny := Nod(OXDOT, q, field)
+ ne := Nod(OEQ, nx, ny)
+ return ne
+}
+
+// eqmem returns the node
+// memequal(&p.field, &q.field [, size])
+func eqmem(p *Node, q *Node, field *Node, size int64) *Node {
+ var needsize int
+
+ nx := Nod(OADDR, Nod(OXDOT, p, field), nil)
+ nx.Etype = 1 // does not escape
+ ny := Nod(OADDR, Nod(OXDOT, q, field), nil)
+ ny.Etype = 1 // does not escape
+ typecheck(&nx, Erv)
+ typecheck(&ny, Erv)
+
+ call := Nod(OCALL, eqmemfunc(size, nx.Type.Type, &needsize), nil)
+ call.List = list(call.List, nx)
+ call.List = list(call.List, ny)
+ if needsize != 0 {
+ call.List = list(call.List, Nodintconst(size))
+ }
+
+ return call
+}
+
+func eqmemfunc(size int64, type_ *Type, needsize *int) *Node {
+ var fn *Node
+
+ switch size {
+ default:
+ fn = syslook("memequal", 1)
+ *needsize = 1
+
+ case 1, 2, 4, 8, 16:
+ buf := fmt.Sprintf("memequal%d", int(size)*8)
+ fn = syslook(buf, 1)
+ *needsize = 0
+ }
+
+ substArgTypes(fn, type_, type_)
+ return fn
+}
+
+// memrun finds runs of struct fields for which memory-only algs are appropriate.
+// t is the parent struct type, and field is the field at which to start.
+// first is the first field in the memory run.
+// size is the length in bytes of the memory included in the run.
+// next is the next field after the memory run.
+func memrun(t *Type, field *Type) (first *Type, size int64, next *Type) {
+ var offend int64
+ for {
+ if field == nil || algtype1(field.Type, nil) != AMEM || isblanksym(field.Sym) {
+ break
+ }
+ offend = field.Width + field.Type.Width
+ if first == nil {
+ first = field
+ }
+
+ // If it's a memory field but it's padded, stop here.
+ if ispaddedfield(field, t.Width) {
+ field = field.Down
+ break
+ }
+ field = field.Down
+ }
+ if first != nil {
+ size = offend - first.Width // first.Width is offset
+ }
+ return first, size, field
+}
+
+// ispaddedfield reports whether the given field
+// is followed by padding. For the case where t is
+// the last field, total gives the size of the enclosing struct.
+func ispaddedfield(t *Type, total int64) bool {
+ if t.Etype != TFIELD {
+ Fatalf("ispaddedfield called with non-field %v", t)
+ }
+ if t.Down == nil {
+ return t.Width+t.Type.Width != total
+ }
+ return t.Width+t.Type.Width != t.Down.Width
+}
}
// For nonzero-sized structs which end in a zero-sized thing, we add
- // an extra byte of padding to the type. This padding ensures that
+ // an extra byte of padding to the type. This padding ensures that
// taking the address of the zero-sized thing can't manufacture a
- // pointer to the next object in the heap. See issue 9401.
+ // pointer to the next object in the heap. See issue 9401.
if flag == 1 && o > starto && o == lastzero {
o++
}
All integer values use a variable-length encoding for compact representation.
-If debugFormat is set, each integer and string value is preceeded by a marker
+If debugFormat is set, each integer and string value is preceded by a marker
and position information in the encoding. This mechanism permits an importer
to recognize immediately when it is out of sync. The importer recognizes this
mode automatically (i.e., it can import export data produced with debugging
// is written out for exported functions with inlined function bodies.
func (p *exporter) collectInlined(n *Node) int {
- if n != nil && n.Func != nil && n.Func.Inl != nil {
+ if n != nil && n.Func != nil && len(n.Func.Inl.Slice()) != 0 {
// when lazily typechecking inlined bodies, some re-exported ones may not have been typechecked yet.
// currently that can leave unresolved ONONAMEs in import-dot-ed packages in the wrong package
if Debug['l'] < 2 {
func (p *exporter) body(i int, f *Func) {
p.int(i)
- p.block(f.Inl)
+ p.block(f.Inl.Slice())
}
-func (p *exporter) block(list *NodeList) {
- p.int(count(list))
- for q := list; q != nil; q = q.Next {
- p.stmt(q.N)
+func (p *exporter) block(list []*Node) {
+ p.int(len(list))
+ for _, n := range list {
+ p.stmt(n)
}
}
// tracef is like fmt.Printf but it rewrites the format string
// to take care of indentation.
func (p *exporter) tracef(format string, args ...interface{}) {
- if strings.IndexAny(format, "<>\n") >= 0 {
+ if strings.ContainsAny(format, "<>\n") {
var buf bytes.Buffer
for i := 0; i < len(format); i++ {
// no need to deal with runes
// package unsafe
Types[TUNSAFEPTR],
+
+ // any type, for builtin export data
+ Types[TANY],
}
}
return predecl
funchdr(n)
// go.y:hidden_import
- n.Func.Inl = nil
+ n.Func.Inl.Set(nil)
funcbody(n)
importlist = append(importlist, n) // TODO(gri) do this only if body is inlineable?
}
{
saved := structpkg
structpkg = tsym.Pkg
- addmethod(sym, n.Type, false, nointerface)
+ addmethod(sym, n.Type, false, false)
structpkg = saved
}
- nointerface = false
funchdr(n)
// (comment from go.y)
// inl.C's inlnode, when called on a dotmeth node, expects to find the inlineable body as
// (dotmeth's type).Nname.Inl, and dotmeth's type has been pulled
- // out by typecheck's lookdot as this $$.ttype. So by providing
+ // out by typecheck's lookdot as this $$.ttype. So by providing
// this back link here we avoid special casing there.
n.Type.Nname = n
// go.y:hidden_import
- n.Func.Inl = nil
+ n.Func.Inl.Set(nil)
funcbody(n)
importlist = append(importlist, n) // TODO(gri) do this only if body is inlineable?
}
package gc
const runtimeimport = "" +
- "package runtime\n" +
+ "package runtime safe\n" +
"func @\"\".newobject (@\"\".typ·2 *byte) (? *any)\n" +
"func @\"\".panicindex ()\n" +
"func @\"\".panicslice ()\n" +
"func @\"\".stringtoslicerune (? *[32]rune, ? string) (? []rune)\n" +
"func @\"\".stringiter (? string, ? int) (? int)\n" +
"func @\"\".stringiter2 (? string, ? int) (@\"\".retk·1 int, @\"\".retv·2 rune)\n" +
- "func @\"\".slicecopy (@\"\".to·2 any, @\"\".fr·3 any, @\"\".wid·4 uintptr) (? int)\n" +
+ "func @\"\".slicecopy (@\"\".to·2 any, @\"\".fr·3 any, @\"\".wid·4 uintptr \"unsafe-uintptr\") (? int)\n" +
"func @\"\".slicestringcopy (@\"\".to·2 any, @\"\".fr·3 any) (? int)\n" +
"func @\"\".typ2Itab (@\"\".typ·2 *byte, @\"\".typ2·3 *byte, @\"\".cache·4 **byte) (@\"\".ret·1 *byte)\n" +
"func @\"\".convI2E (@\"\".elem·2 any) (@\"\".ret·1 any)\n" +
"func @\"\".panicdottype (@\"\".have·1 *byte, @\"\".want·2 *byte, @\"\".iface·3 *byte)\n" +
"func @\"\".ifaceeq (@\"\".i1·2 any, @\"\".i2·3 any) (@\"\".ret·1 bool)\n" +
"func @\"\".efaceeq (@\"\".i1·2 any, @\"\".i2·3 any) (@\"\".ret·1 bool)\n" +
- "func @\"\".ifacethash (@\"\".i1·2 any) (@\"\".ret·1 uint32)\n" +
- "func @\"\".efacethash (@\"\".i1·2 any) (@\"\".ret·1 uint32)\n" +
"func @\"\".makemap (@\"\".mapType·2 *byte, @\"\".hint·3 int64, @\"\".mapbuf·4 *any, @\"\".bucketbuf·5 *any) (@\"\".hmap·1 map[any]any)\n" +
"func @\"\".mapaccess1 (@\"\".mapType·2 *byte, @\"\".hmap·3 map[any]any, @\"\".key·4 *any) (@\"\".val·1 *any)\n" +
"func @\"\".mapaccess1_fast32 (@\"\".mapType·2 *byte, @\"\".hmap·3 map[any]any, @\"\".key·4 any) (@\"\".val·1 *any)\n" +
"func @\"\".writebarrierstring (@\"\".dst·1 *any, @\"\".src·2 any)\n" +
"func @\"\".writebarrierslice (@\"\".dst·1 *any, @\"\".src·2 any)\n" +
"func @\"\".writebarrieriface (@\"\".dst·1 *any, @\"\".src·2 any)\n" +
- "func @\"\".writebarrierfat01 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat10 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat11 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat001 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat010 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat011 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat100 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat101 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat110 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat111 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat0001 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat0010 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat0011 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat0100 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat0101 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat0110 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat0111 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat1000 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat1001 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat1010 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat1011 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat1100 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat1101 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat1110 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
- "func @\"\".writebarrierfat1111 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat01 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat10 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat11 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat001 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat010 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat011 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat100 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat101 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat110 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat111 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat0001 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat0010 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat0011 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat0100 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat0101 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat0110 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat0111 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat1000 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat1001 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat1010 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat1011 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat1100 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat1101 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat1110 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
+ "func @\"\".writebarrierfat1111 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
"func @\"\".typedmemmove (@\"\".typ·1 *byte, @\"\".dst·2 *any, @\"\".src·3 *any)\n" +
"func @\"\".typedslicecopy (@\"\".typ·2 *byte, @\"\".dst·3 any, @\"\".src·4 any) (? int)\n" +
"func @\"\".selectnbsend (@\"\".chanType·2 *byte, @\"\".hchan·3 chan<- any, @\"\".elem·4 *any) (? bool)\n" +
"func @\"\".makeslice (@\"\".typ·2 *byte, @\"\".nel·3 int64, @\"\".cap·4 int64) (@\"\".ary·1 []any)\n" +
"func @\"\".growslice (@\"\".typ·2 *byte, @\"\".old·3 []any, @\"\".cap·4 int) (@\"\".ary·1 []any)\n" +
"func @\"\".growslice_n (@\"\".typ·2 *byte, @\"\".old·3 []any, @\"\".n·4 int) (@\"\".ary·1 []any)\n" +
- "func @\"\".memmove (@\"\".to·1 *any, @\"\".frm·2 *any, @\"\".length·3 uintptr)\n" +
- "func @\"\".memclr (@\"\".ptr·1 *byte, @\"\".length·2 uintptr)\n" +
- "func @\"\".memequal (@\"\".x·2 *any, @\"\".y·3 *any, @\"\".size·4 uintptr) (? bool)\n" +
+ "func @\"\".memmove (@\"\".to·1 *any, @\"\".frm·2 *any, @\"\".length·3 uintptr \"unsafe-uintptr\")\n" +
+ "func @\"\".memclr (@\"\".ptr·1 *byte, @\"\".length·2 uintptr \"unsafe-uintptr\")\n" +
+ "func @\"\".memequal (@\"\".x·2 *any, @\"\".y·3 *any, @\"\".size·4 uintptr \"unsafe-uintptr\") (? bool)\n" +
"func @\"\".memequal8 (@\"\".x·2 *any, @\"\".y·3 *any) (? bool)\n" +
"func @\"\".memequal16 (@\"\".x·2 *any, @\"\".y·3 *any) (? bool)\n" +
"func @\"\".memequal32 (@\"\".x·2 *any, @\"\".y·3 *any) (? bool)\n" +
"func @\"\".int64tofloat64 (? int64) (? float64)\n" +
"func @\"\".uint64tofloat64 (? uint64) (? float64)\n" +
"func @\"\".complex128div (@\"\".num·2 complex128, @\"\".den·3 complex128) (@\"\".quo·1 complex128)\n" +
- "func @\"\".racefuncenter (? uintptr)\n" +
+ "func @\"\".racefuncenter (? uintptr \"unsafe-uintptr\")\n" +
"func @\"\".racefuncexit ()\n" +
- "func @\"\".raceread (? uintptr)\n" +
- "func @\"\".racewrite (? uintptr)\n" +
- "func @\"\".racereadrange (@\"\".addr·1 uintptr, @\"\".size·2 uintptr)\n" +
- "func @\"\".racewriterange (@\"\".addr·1 uintptr, @\"\".size·2 uintptr)\n" +
- "func @\"\".msanread (@\"\".addr·1 uintptr, @\"\".size·2 uintptr)\n" +
- "func @\"\".msanwrite (@\"\".addr·1 uintptr, @\"\".size·2 uintptr)\n" +
+ "func @\"\".raceread (? uintptr \"unsafe-uintptr\")\n" +
+ "func @\"\".racewrite (? uintptr \"unsafe-uintptr\")\n" +
+ "func @\"\".racereadrange (@\"\".addr·1 uintptr \"unsafe-uintptr\", @\"\".size·2 uintptr \"unsafe-uintptr\")\n" +
+ "func @\"\".racewriterange (@\"\".addr·1 uintptr \"unsafe-uintptr\", @\"\".size·2 uintptr \"unsafe-uintptr\")\n" +
+ "func @\"\".msanread (@\"\".addr·1 uintptr \"unsafe-uintptr\", @\"\".size·2 uintptr \"unsafe-uintptr\")\n" +
+ "func @\"\".msanwrite (@\"\".addr·1 uintptr \"unsafe-uintptr\", @\"\".size·2 uintptr \"unsafe-uintptr\")\n" +
"\n" +
"$$\n"
// license that can be found in the LICENSE file.
// NOTE: If you change this file you must run "go generate"
-// to update builtin.go. This is not done automatically
+// to update builtin.go. This is not done automatically
// to avoid depending on having a working compiler binary.
// +build ignore
-package PACKAGE
+package runtime
// emitted by compiler, not referred to by go programs
func ifaceeq(i1 any, i2 any) (ret bool)
func efaceeq(i1 any, i2 any) (ret bool)
-func ifacethash(i1 any) (ret uint32)
-func efacethash(i1 any) (ret uint32)
// *byte is really *runtime.Type
func makemap(mapType *byte, hint int64, mapbuf *any, bucketbuf *any) (hmap map[any]any)
// license that can be found in the LICENSE file.
// NOTE: If you change this file you must run "go generate"
-// to update builtin.go. This is not done automatically
+// to update builtin.go. This is not done automatically
// to avoid depending on having a working compiler binary.
// +build ignore
-package PACKAGE
+package unsafe
type Pointer uintptr // not really; filled in by compiler
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package gc_test
+
+import (
+ "bytes"
+ "internal/testenv"
+ "io/ioutil"
+ "os/exec"
+ "testing"
+)
+
+func TestBuiltin(t *testing.T) {
+ testenv.MustHaveGoRun(t)
+
+ old, err := ioutil.ReadFile("builtin.go")
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ new, err := exec.Command("go", "run", "mkbuiltin.go", "-stdout").Output()
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ if !bytes.Equal(old, new) {
+ t.Fatal("builtin.go out of date; run mkbuiltin.go")
+ }
+}
func cgen_wbptr(n, res *Node) {
if Curfn != nil {
- if Curfn.Func.Nowritebarrier {
+ if Curfn.Func.Pragma&Nowritebarrier != 0 {
Yyerror("write barrier prohibited")
}
if Curfn.Func.WBLineno == 0 {
func cgen_wbfat(n, res *Node) {
if Curfn != nil {
- if Curfn.Func.Nowritebarrier {
+ if Curfn.Func.Pragma&Nowritebarrier != 0 {
Yyerror("write barrier prohibited")
}
if Curfn.Func.WBLineno == 0 {
// If copying .args, that's all the results, so record definition sites
// for them for the liveness analysis.
if ns.Op == ONAME && ns.Sym.Name == ".args" {
- for l := Curfn.Func.Dcl; l != nil; l = l.Next {
- if l.N.Class == PPARAMOUT {
- Gvardef(l.N)
+ for _, ln := range Curfn.Func.Dcl {
+ if ln.Class == PPARAMOUT {
+ Gvardef(ln)
}
}
}
if osrc != -1000 && odst != -1000 && (osrc == 1000 || odst == 1000) || wb && osrc != -1000 {
// osrc and odst both on stack, and at least one is in
- // an unknown position. Could generate code to test
+ // an unknown position. Could generate code to test
// for forward/backward copy, but instead just copy
// to a temporary location first.
//
if hasdefer {
Ginscall(Deferreturn, 0)
}
- Genlist(Curfn.Func.Exit)
+ Genslice(Curfn.Func.Exit.Slice())
p := Thearch.Gins(obj.ARET, nil, nil)
if n != nil && n.Op == ORETJMP {
p.To.Type = obj.TYPE_MEM
}
func_ := Curfn
- func_.Nbody = body
+ func_.Nbody.SetToNodeList(body)
func_.Func.Endlineno = lineno
funcbody(func_)
// ordinary ones in the symbol table; see oldname.
// unhook them.
// make the list of pointers for the closure call.
- var v *Node
- for l := func_.Func.Cvars; l != nil; l = l.Next {
- v = l.N
+ for _, v := range func_.Func.Cvars.Slice() {
v.Name.Param.Closure.Name.Param.Closure = v.Name.Param.Outer
v.Name.Param.Outerexpr = oldname(v.Sym)
}
}
func typecheckclosure(func_ *Node, top int) {
- var n *Node
-
- for l := func_.Func.Cvars; l != nil; l = l.Next {
- n = l.N.Name.Param.Closure
+ for _, ln := range func_.Func.Cvars.Slice() {
+ n := ln.Name.Param.Closure
if !n.Name.Captured {
n.Name.Captured = true
if n.Name.Decldepth == 0 {
}
}
- for l := func_.Func.Dcl; l != nil; l = l.Next {
- if l.N.Op == ONAME && (l.N.Class == PPARAM || l.N.Class == PPARAMOUT) {
- l.N.Name.Decldepth = 1
+ for _, ln := range func_.Func.Dcl {
+ if ln.Op == ONAME && (ln.Class == PPARAM || ln.Class == PPARAMOUT) {
+ ln.Name.Decldepth = 1
}
}
Curfn = func_
olddd := decldepth
decldepth = 1
- typechecklist(func_.Nbody, Etop)
+ typecheckslice(func_.Nbody.Slice(), Etop)
decldepth = olddd
Curfn = oldfn
}
xfunc.Func.Endlineno = func_.Func.Endlineno
makefuncsym(xfunc.Func.Nname.Sym)
- xfunc.Nbody = func_.Nbody
- xfunc.Func.Dcl = concat(func_.Func.Dcl, xfunc.Func.Dcl)
- if xfunc.Nbody == nil {
+ xfunc.Nbody.Set(func_.Nbody.Slice())
+ xfunc.Func.Dcl = append(func_.Func.Dcl, xfunc.Func.Dcl...)
+ func_.Func.Dcl = nil
+ if len(xfunc.Nbody.Slice()) == 0 {
Fatalf("empty body - won't generate any code")
}
typecheck(&xfunc, Etop)
xfunc.Func.Closure = func_
func_.Func.Closure = xfunc
- func_.Nbody = nil
+ func_.Nbody.Set(nil)
func_.List = nil
func_.Rlist = nil
// We use value capturing for values <= 128 bytes that are never reassigned
// after capturing (effectively constant).
func capturevars(xfunc *Node) {
- var v *Node
var outer *Node
lno := int(lineno)
lineno = xfunc.Lineno
func_ := xfunc.Func.Closure
- func_.Func.Enter = nil
- for l := func_.Func.Cvars; l != nil; l = l.Next {
- v = l.N
+ func_.Func.Enter.Set(nil)
+ for _, v := range func_.Func.Cvars.Slice() {
if v.Type == nil {
// if v->type is nil, it means v looked like it was
// going to be used in the closure but wasn't.
}
typecheck(&outer, Erv)
- func_.Func.Enter = list(func_.Func.Enter, outer)
+ func_.Func.Enter.Append(outer)
}
lineno = int32(lno)
original_dcl := xfunc.Func.Dcl
xfunc.Func.Dcl = nil
- var v *Node
var addr *Node
var fld *Type
- for l := func_.Func.Cvars; l != nil; l = l.Next {
- v = l.N
+ for _, v := range func_.Func.Cvars.Slice() {
if v.Op == OXXX {
continue
}
fld.Sym = fld.Nname.Sym
// Declare the new param and add it to the first part of the input arguments.
- xfunc.Func.Dcl = list(xfunc.Func.Dcl, fld.Nname)
+ xfunc.Func.Dcl = append(xfunc.Func.Dcl, fld.Nname)
*param = fld
param = &fld.Down
}
*param = original_args
- xfunc.Func.Dcl = concat(xfunc.Func.Dcl, original_dcl)
+ xfunc.Func.Dcl = append(xfunc.Func.Dcl, original_dcl...)
// Recalculate param offsets.
if f.Type.Width > 0 {
xfunc.Type = f.Type // update type of ODCLFUNC
} else {
// The closure is not called, so it is going to stay as closure.
- nvar := 0
-
- var body *NodeList
+ var body []*Node
offset := int64(Widthptr)
var addr *Node
- var v *Node
var cv *Node
- for l := func_.Func.Cvars; l != nil; l = l.Next {
- v = l.N
+ for _, v := range func_.Func.Cvars.Slice() {
if v.Op == OXXX {
continue
}
- nvar++
// cv refers to the field inside of closure OSTRUCTLIT.
cv = Nod(OCLOSUREVAR, nil, nil)
// If it is a small variable captured by value, downgrade it to PAUTO.
v.Class = PAUTO
v.Ullman = 1
- xfunc.Func.Dcl = list(xfunc.Func.Dcl, v)
- body = list(body, Nod(OAS, v, cv))
+ xfunc.Func.Dcl = append(xfunc.Func.Dcl, v)
+ body = append(body, Nod(OAS, v, cv))
} else {
// Declare variable holding addresses taken from closure
// and initialize in entry prologue.
addr.Class = PAUTO
addr.Used = true
addr.Name.Curfn = xfunc
- xfunc.Func.Dcl = list(xfunc.Func.Dcl, addr)
+ xfunc.Func.Dcl = append(xfunc.Func.Dcl, addr)
v.Name.Heapaddr = addr
if v.Name.Byval {
cv = Nod(OADDR, cv, nil)
}
- body = list(body, Nod(OAS, addr, cv))
+ body = append(body, Nod(OAS, addr, cv))
}
}
- typechecklist(body, Etop)
- walkstmtlist(body)
- xfunc.Func.Enter = body
- xfunc.Func.Needctxt = nvar > 0
+ if len(body) > 0 {
+ typecheckslice(body, Etop)
+ walkstmtslice(body)
+ xfunc.Func.Enter.Set(body)
+ xfunc.Func.Needctxt = true
+ }
}
lineno = int32(lno)
func walkclosure(func_ *Node, init **NodeList) *Node {
// If no closure vars, don't bother wrapping.
- if func_.Func.Cvars == nil {
+ if len(func_.Func.Cvars.Slice()) == 0 {
return func_.Func.Closure.Func.Nname
}
typ.List = list1(Nod(ODCLFIELD, newname(Lookup(".F")), typenod(Types[TUINTPTR])))
var typ1 *Node
- var v *Node
- for l := func_.Func.Cvars; l != nil; l = l.Next {
- v = l.N
+ for _, v := range func_.Func.Cvars.Slice() {
if v.Op == OXXX {
continue
}
clos := Nod(OCOMPLIT, nil, Nod(OIND, typ, nil))
clos.Esc = func_.Esc
clos.Right.Implicit = true
- clos.List = concat(list1(Nod(OCFUNC, func_.Func.Closure.Func.Nname, nil)), func_.Func.Enter)
+ clos.List = concat(list1(Nod(OCFUNC, func_.Func.Closure.Func.Nname, nil)), func_.Func.Enter.NodeList())
// Force type conversion from *struct to the func type.
clos = Nod(OCONVNOP, clos, nil)
n = newname(Lookupf("a%d", i))
i++
n.Class = PPARAM
- xfunc.Func.Dcl = list(xfunc.Func.Dcl, n)
+ xfunc.Func.Dcl = append(xfunc.Func.Dcl, n)
callargs = list(callargs, n)
fld = Nod(ODCLFIELD, n, typenod(t.Type))
if t.Isddd {
n = newname(Lookupf("r%d", i))
i++
n.Class = PPARAMOUT
- xfunc.Func.Dcl = list(xfunc.Func.Dcl, n)
+ xfunc.Func.Dcl = append(xfunc.Func.Dcl, n)
retargs = list(retargs, n)
l = list(l, Nod(ODCLFIELD, n, typenod(t.Type)))
}
ptr.Ullman = 1
ptr.Used = true
ptr.Name.Curfn = xfunc
- xfunc.Func.Dcl = list(xfunc.Func.Dcl, ptr)
- var body *NodeList
+ ptr.Xoffset = 0
+ xfunc.Func.Dcl = append(xfunc.Func.Dcl, ptr)
+ var body []*Node
if Isptr[rcvrtype.Etype] || Isinter(rcvrtype) {
ptr.Name.Param.Ntype = typenod(rcvrtype)
- body = list(body, Nod(OAS, ptr, cv))
+ body = append(body, Nod(OAS, ptr, cv))
} else {
ptr.Name.Param.Ntype = typenod(Ptrto(rcvrtype))
- body = list(body, Nod(OAS, ptr, Nod(OADDR, cv, nil)))
+ body = append(body, Nod(OAS, ptr, Nod(OADDR, cv, nil)))
}
call := Nod(OCALL, Nod(OXDOT, ptr, meth), nil)
call.List = callargs
call.Isddd = ddd
if t0.Outtuple == 0 {
- body = list(body, call)
+ body = append(body, call)
} else {
n := Nod(OAS2, nil, nil)
n.List = retargs
n.Rlist = list1(call)
- body = list(body, n)
+ body = append(body, n)
n = Nod(ORETURN, nil, nil)
- body = list(body, n)
+ body = append(body, n)
}
- xfunc.Nbody = body
+ xfunc.Nbody.Set(body)
typecheck(&xfunc, Etop)
sym.Def = xfunc
}
ct := consttype(n)
- var et int
+ var et EType
if ct < 0 {
goto bad
}
- et = int(t.Etype)
+ et = t.Etype
if et == TINTER {
if ct == CTNIL && n.Type == Types[TNIL] {
n.Type = t
}
case CTSTR, CTBOOL:
- if et != int(n.Type.Etype) {
+ if et != n.Type.Etype {
goto bad
}
if consttype(nl) < 0 {
return
}
- wl := int(nl.Type.Etype)
+ wl := nl.Type.Etype
if Isint[wl] || Isfloat[wl] || Iscomplex[wl] {
wl = TIDEAL
}
nr := n.Right
var rv Val
var lno int
- var wr int
+ var wr EType
var v Val
var norig *Node
var nn *Node
case OCOM_ | CTINT_,
OCOM_ | CTRUNE_:
- et := Txxx
+ var et EType = Txxx
if nl.Type != nil {
- et = int(nl.Type.Etype)
+ et = nl.Type.Etype
}
// calculate the mask in b
if consttype(nr) < 0 {
return
}
- wr = int(nr.Type.Etype)
+ wr = nr.Type.Etype
if Isint[wr] || Isfloat[wr] || Iscomplex[wr] {
wr = TIDEAL
}
// if they're both ideal going in they better
// get the same type going out.
// force means must assign concrete (non-ideal) type.
-func defaultlit2(lp **Node, rp **Node, force int) {
+func defaultlit2(lp **Node, rp **Node, force bool) {
l := *lp
r := *rp
if l.Type == nil || r.Type == nil {
return
}
- if force == 0 {
+ if !force {
return
}
+
if l.Type.Etype == TBOOL {
Convlit(lp, Types[TBOOL])
Convlit(rp, Types[TBOOL])
n.Lineno = int32(parserline())
s := n.Sym
- // kludgy: typecheckok means we're past parsing. Eg genwrapper may declare out of package names later.
+ // kludgy: typecheckok means we're past parsing. Eg genwrapper may declare out of package names later.
if importpkg == nil && !typecheckok && s.Pkg != localpkg {
Yyerror("cannot declare name %v", s)
}
Fatalf("automatic outside function")
}
if Curfn != nil {
- Curfn.Func.Dcl = list(Curfn.Func.Dcl, n)
+ Curfn.Func.Dcl = append(Curfn.Func.Dcl, n)
}
if n.Op == OTYPE {
declare_typegen++
n.Name.Param.Closure = c
c.Name.Param.Closure = n
c.Xoffset = 0
- Curfn.Func.Cvars = list(Curfn.Func.Cvars, c)
+ Curfn.Func.Cvars.Append(c)
}
// return ref to closure var, not original
CenterDot = 0xB7
)
// Names sometimes have disambiguation junk
- // appended after a center dot. Discard it when
+ // appended after a center dot. Discard it when
// making the name for the embedded struct field.
name := s.Name
Funcdepth = n.Func.Depth + 1
compile(n)
Curfn = nil
+ Pc = nil
+ continpc = nil
+ breakpc = nil
Funcdepth = 0
dclcontext = PEXTERN
+ flushdata()
+ obj.Flushplist(Ctxt) // convert from Prog list to machine code
}
func funcsym(s *Sym) *Sym {
for _, n := range list {
if n.Func.WBLineno == 0 {
c.curfn = n
- c.visitcodelist(n.Nbody)
+ c.visitcodeslice(n.Nbody.Slice())
}
}
if c.stable {
// Check nowritebarrierrec functions.
for _, n := range list {
- if !n.Func.Nowritebarrierrec {
+ if n.Func.Pragma&Nowritebarrierrec == 0 {
continue
}
call, hasWB := c.best[n]
}
}
+func (c *nowritebarrierrecChecker) visitcodeslice(l []*Node) {
+ for _, n := range l {
+ c.visitcode(n)
+ }
+}
+
func (c *nowritebarrierrecChecker) visitcode(n *Node) {
if n == nil {
return
c.visitcode(n.Left)
c.visitcode(n.Right)
c.visitcodelist(n.List)
- c.visitcodelist(n.Nbody)
+ c.visitcodeslice(n.Nbody.Slice())
c.visitcodelist(n.Rlist)
}
// or single non-recursive functions, bottom up.
//
// Finding these sets is finding strongly connected components
-// in the static call graph. The algorithm for doing that is taken
+// in the static call graph. The algorithm for doing that is taken
// from Sedgewick, Algorithms, Second Edition, p. 482, with two
// adaptations.
//
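The bottom-up visitor referenced above is a variant of Tarjan's strongly connected components algorithm. The compiler's own names (`visitgen`, `FCurfn`, the `min == id+1` adaptation) are specific to this patch, but the core recursion can be sketched independently; the `sccFinder` type and its field names below are invented for illustration:

```go
package main

import "fmt"

// sccFinder mirrors the bottomUpVisitor pattern: assign each node a
// visit number, track the minimum number reachable from it, and pop a
// strongly connected component when a node turns out to be its own root.
type sccFinder struct {
	graph    map[int][]int // call graph: caller -> callees
	nodeID   map[int]uint32
	visitgen uint32
	stack    []int
	comps    [][]int
}

func (v *sccFinder) visit(n int) uint32 {
	if id := v.nodeID[n]; id > 0 {
		return id // already visited (or still on the stack)
	}
	v.visitgen++
	id := v.visitgen
	v.nodeID[n] = id
	min := id
	v.stack = append(v.stack, n)
	for _, m := range v.graph[n] {
		if x := v.visit(m); x < min {
			min = x
		}
	}
	if min == id {
		// n is the root of a component: pop the whole component.
		var comp []int
		for {
			last := v.stack[len(v.stack)-1]
			v.stack = v.stack[:len(v.stack)-1]
			v.nodeID[last] = ^uint32(0) // finished: never lowers min again
			comp = append(comp, last)
			if last == n {
				break
			}
		}
		v.comps = append(v.comps, comp)
	}
	return min
}

func main() {
	// 0 -> 1 -> 2 -> 0 is a mutually recursive cycle; 3 calls into it.
	v := &sccFinder{
		graph:  map[int][]int{0: {1}, 1: {2}, 2: {0}, 3: {0}},
		nodeID: map[int]uint32{},
	}
	v.visit(3)
	fmt.Println(len(v.comps)) // the cycle {0,1,2} and the singleton {3}
}
```

Processing components in the order they are popped yields exactly the "bottom up" property the comment describes: every callee's component is finished before its callers'.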
min := v.visitgen
v.stack = append(v.stack, n)
- min = v.visitcodelist(n.Nbody, min)
+ min = v.visitcodeslice(n.Nbody.Slice(), min)
if (min == id || min == id+1) && n.Func.FCurfn == nil {
// This node is the root of a strongly connected component.
return min
}
+func (v *bottomUpVisitor) visitcodeslice(l []*Node, min uint32) uint32 {
+ for _, n := range l {
+ min = v.visitcode(n, min)
+ }
+ return min
+}
+
func (v *bottomUpVisitor) visitcode(n *Node, min uint32) uint32 {
if n == nil {
return min
min = v.visitcode(n.Left, min)
min = v.visitcode(n.Right, min)
min = v.visitcodelist(n.List, min)
- min = v.visitcodelist(n.Nbody, min)
+ min = v.visitcodeslice(n.Nbody.Slice(), min)
min = v.visitcodelist(n.Rlist, min)
if n.Op == OCALLFUNC || n.Op == OCALLMETH {
//
// First escfunc, esc and escassign recurse over the ast of each
// function to dig out flow(dst,src) edges between any
-// pointer-containing nodes and store them in dst->escflowsrc. For
+// pointer-containing nodes and store them in dst->escflowsrc. For
// variables assigned to a variable in an outer scope or used as a
// return value, they store a flow(theSink, src) edge to a fake node
// 'the Sink'. For variables referenced in closures, an edge
// parameters it can reach as leaking.
//
// If a value's address is taken but the address does not escape,
-// then the value can stay on the stack. If the value new(T) does
+// then the value can stay on the stack. If the value new(T) does
// not escape, then new(T) can be rewritten into a stack allocation.
// The same is true of slice literals.
//
}
// Escape constants are numbered in order of increasing "escapiness"
-// to help make inferences be monotonic. With the exception of
+// to help make inferences be monotonic. With the exception of
// EscNever which is sticky, eX < eY means that eY is more exposed
// than eX, and hence replaces it in a conservative analysis.
const (
}
// For each input parameter to a function, the escapeReturnEncoding describes
-// how the parameter may leak to the function's outputs. This is currently the
+// how the parameter may leak to the function's outputs. This is currently the
// "level" of the leak where level is 0 or larger (negative level means stored into
// something whose address is returned -- but that implies stored into the heap,
// hence EscHeap, which means that the details are not currently relevant. )
savefn := Curfn
Curfn = func_
- for ll := Curfn.Func.Dcl; ll != nil; ll = ll.Next {
- if ll.N.Op != ONAME {
+ for _, ln := range Curfn.Func.Dcl {
+ if ln.Op != ONAME {
continue
}
- llNE := e.nodeEscState(ll.N)
- switch ll.N.Class {
+ llNE := e.nodeEscState(ln)
+ switch ln.Class {
// out params are in a loopdepth between the sink and all local variables
case PPARAMOUT:
llNE.Escloopdepth = 0
case PPARAM:
llNE.Escloopdepth = 1
- if ll.N.Type != nil && !haspointers(ll.N.Type) {
+ if ln.Type != nil && !haspointers(ln.Type) {
break
}
- if Curfn.Nbody == nil && !Curfn.Noescape {
- ll.N.Esc = EscHeap
+ if len(Curfn.Nbody.Slice()) == 0 && !Curfn.Noescape {
+ ln.Esc = EscHeap
} else {
- ll.N.Esc = EscNone // prime for escflood later
+ ln.Esc = EscNone // prime for escflood later
}
- e.noesc = list(e.noesc, ll.N)
+ e.noesc = list(e.noesc, ln)
}
}
// in a mutually recursive group we lose track of the return values
if e.recursive {
- for ll := Curfn.Func.Dcl; ll != nil; ll = ll.Next {
- if ll.N.Op == ONAME && ll.N.Class == PPARAMOUT {
- escflows(e, &e.theSink, ll.N)
+ for _, ln := range Curfn.Func.Dcl {
+ if ln.Op == ONAME && ln.Class == PPARAMOUT {
+ escflows(e, &e.theSink, ln)
}
}
}
- escloopdepthlist(e, Curfn.Nbody)
- esclist(e, Curfn.Nbody, Curfn)
+ escloopdepthslice(e, Curfn.Nbody.Slice())
+ escslice(e, Curfn.Nbody.Slice(), Curfn)
Curfn = savefn
e.loopdepth = saveld
}
// Mark labels that have no backjumps to them as not increasing e->loopdepth.
// Walk hasn't generated (goto|label)->left->sym->label yet, so we'll cheat
-// and set it to one of the following two. Then in esc we'll clear it again.
+// and set it to one of the following two. Then in esc we'll clear it again.
var looping Label
var nonlooping Label
}
}
+func escloopdepthslice(e *EscState, l []*Node) {
+ for _, n := range l {
+ escloopdepth(e, n)
+ }
+}
+
func escloopdepth(e *EscState, n *Node) {
if n == nil {
return
escloopdepth(e, n.Left)
escloopdepth(e, n.Right)
escloopdepthlist(e, n.List)
- escloopdepthlist(e, n.Nbody)
+ escloopdepthslice(e, n.Nbody.Slice())
escloopdepthlist(e, n.Rlist)
}
}
}
+func escslice(e *EscState, l []*Node, up *Node) {
+ for _, n := range l {
+ esc(e, n, up)
+ }
+}
+
func esc(e *EscState, n *Node, up *Node) {
if n == nil {
return
}
+ if n.Type != nil && n.Type.Etype == TFIELD {
+ // This is the left side of x:y in a struct literal.
+ // x is syntax, not an expression.
+ // See #14405.
+ return
+ }
lno := int(setlineno(n))
// Big stuff escapes unconditionally
// "Big" conditions that were scattered around in walk have been gathered here
- if n.Esc != EscHeap && n.Type != nil && (n.Type.Width > MaxStackVarSize ||
- n.Op == ONEW && n.Type.Type.Width >= 1<<16 ||
- n.Op == OMAKESLICE && !isSmallMakeSlice(n)) {
+ if n.Esc != EscHeap && n.Type != nil &&
+ (n.Type.Width > MaxStackVarSize ||
+ n.Op == ONEW && n.Type.Type.Width >= 1<<16 ||
+ n.Op == OMAKESLICE && !isSmallMakeSlice(n)) {
if Debug['m'] > 1 {
Warnl(int(n.Lineno), "%v is too large for stack", n)
}
esc(e, n.Left, n)
esc(e, n.Right, n)
- esclist(e, n.Nbody, n)
+ escslice(e, n.Nbody.Slice(), n)
esclist(e, n.List, n)
esclist(e, n.Rlist, n)
ll = e.nodeEscState(n.List.N).Escretval
}
- for lr := Curfn.Func.Dcl; lr != nil && ll != nil; lr = lr.Next {
- if lr.N.Op != ONAME || lr.N.Class != PPARAMOUT {
+ for _, lrn := range Curfn.Func.Dcl {
+ if ll == nil {
+ break
+ }
+ if lrn.Op != ONAME || lrn.Class != PPARAMOUT {
continue
}
- escassign(e, lr.N, ll.N)
+ escassign(e, lrn, ll.N)
ll = ll.Next
}
// Link addresses of captured variables to closure.
case OCLOSURE:
var a *Node
- var v *Node
- for ll := n.Func.Cvars; ll != nil; ll = ll.Next {
- v = ll.N
+ for _, v := range n.Func.Cvars.Slice() {
if v.Op == OXXX { // unnamed out argument; see dcl.go:/^funcargs
continue
}
dst = &e.theSink
}
- case ODOT: // treat "dst.x = src" as "dst = src"
+ case ODOT: // treat "dst.x = src" as "dst = src"
escassign(e, dst.Left, src)
return
ODOTMETH,
// treat recv.meth as a value with recv in it, only happens in ODEFER and OPROC
// iface.method already leaks iface in esccall, no need to put in extra ODOTINTER edge here
- ODOTTYPE,
ODOTTYPE2,
OSLICE,
OSLICE3,
// Conversions, field access, slice all preserve the input value.
escassign(e, dst, src.Left)
+ case ODOTTYPE:
+ if src.Type != nil && !haspointers(src.Type) {
+ break
+ }
+ escassign(e, dst, src.Left)
+
case OAPPEND:
// Append returns first argument.
// Subsequent arguments are already leaked because they are operands to append.
// Might be pointer arithmetic, in which case
// the operands flow into the result.
- // TODO(rsc): Decide what the story is here. This is unsettling.
+ // TODO(rsc): Decide what the story is here. This is unsettling.
case OADD,
OSUB,
OOR,
// flow are 000, 001, 010, 011 and EEEE is computed Esc bits.
// Note width of xxx depends on value of constant
// bitsPerOutputInTag -- expect 2 or 3, so in practice the
-// tag cache array is 64 or 128 long. Some entries will
+// tag cache array is 64 or 128 long. Some entries will
// never be populated.
var tags [1 << (bitsPerOutputInTag + EscReturnBits)]string
if Istype(t, Tptr) {
// This should model our own sloppy use of OIND to encode
// decreasing levels of indirection; i.e., "indirecting" an array
- // might yield the type of an element. To be enhanced...
+ // might yield the type of an element. To be enhanced...
t = t.Type
}
ind.Type = t
nE := e.nodeEscState(n)
if fn != nil && fn.Op == ONAME && fn.Class == PFUNC &&
- fn.Name.Defn != nil && fn.Name.Defn.Nbody != nil && fn.Name.Param.Ntype != nil && fn.Name.Defn.Esc < EscFuncTagged {
+ fn.Name.Defn != nil && len(fn.Name.Defn.Nbody.Slice()) != 0 && fn.Name.Param.Ntype != nil && fn.Name.Defn.Esc < EscFuncTagged {
if Debug['m'] > 2 {
fmt.Printf("%v::esccall:: %v in recursive group\n", Ctxt.Line(int(lineno)), Nconv(n, obj.FmtShort))
}
- // function in same mutually recursive group. Incorporate into flow graph.
+ // function in same mutually recursive group. Incorporate into flow graph.
// print("esc local fn: %N\n", fn->ntype);
if fn.Name.Defn.Esc == EscFuncUnknown || nE.Escretval != nil {
Fatalf("graph inconsistency")
return
}
- // Imported or completely analyzed function. Use the escape tags.
+ // Imported or completely analyzed function. Use the escape tags.
if nE.Escretval != nil {
Fatalf("esc already decorated call %v\n", Nconv(n, obj.FmtSign))
}
// finding an OADDR just means we're following the upstream of a dereference,
// so this address doesn't leak (yet).
// If level == 0, it means the /value/ of this node can reach the root of this flood.
-// so if this node is an OADDR, it's argument should be marked as escaping iff
-// it's currfn/e->loopdepth are different from the flood's root.
-// Once an object has been moved to the heap, all of it's upstream should be considered
+// so if this node is an OADDR, its argument should be marked as escaping iff
+// its currfn/e->loopdepth are different from the flood's root.
+// Once an object has been moved to the heap, all of its upstream should be considered
// escaping to the global scope.
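The level bookkeeping this comment describes (address-of lowers the level, dereference raises it, and an address at level 0 is what actually pins a value) can be sketched without the compiler's node types. The `expr` type and its `op` strings below are invented for the example:

```go
package main

import "fmt"

// expr is a toy expression tree: a leaf variable, an address-of,
// or a dereference of another expression.
type expr struct {
	op   string // "var", "addr", or "deref"
	sub  *expr
	name string
}

// level reports the variable an expression bottoms out in and the net
// indirection level of its value: "addr" decrements, "deref" increments,
// mirroring the flood walk's rule that only an address-of reached at
// level 0 forces the underlying variable to escape.
func level(e *expr) (name string, lvl int) {
	switch e.op {
	case "var":
		return e.name, 0
	case "addr":
		name, lvl = level(e.sub)
		return name, lvl - 1
	case "deref":
		name, lvl = level(e.sub)
		return name, lvl + 1
	}
	panic("unknown op")
}

func main() {
	x := &expr{op: "var", name: "x"}
	// *&x: the dereference cancels the address-of, so the value of the
	// whole expression is just x again, at level 0.
	e := &expr{op: "deref", sub: &expr{op: "addr", sub: x}}
	name, lvl := level(e)
	fmt.Println(name, lvl)
}
```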
func escflood(e *EscState, dst *Node) {
switch dst.Op {
// External functions are assumed unsafe,
// unless //go:noescape is given before the declaration.
- if func_.Nbody == nil {
+ if len(func_.Nbody.Slice()) == 0 {
if func_.Noescape {
for t := getinargx(func_.Type).Type; t != nil; t = t.Down {
if haspointers(t.Type) {
savefn := Curfn
Curfn = func_
- for ll := Curfn.Func.Dcl; ll != nil; ll = ll.Next {
- if ll.N.Op != ONAME {
+ for _, ln := range Curfn.Func.Dcl {
+ if ln.Op != ONAME {
continue
}
- switch ll.N.Esc & EscMask {
+ switch ln.Esc & EscMask {
case EscNone, // not touched by escflood
EscReturn:
- if haspointers(ll.N.Type) { // don't bother tagging for scalars
- ll.N.Name.Param.Field.Note = mktag(int(ll.N.Esc))
+ if haspointers(ln.Type) { // don't bother tagging for scalars
+ ln.Name.Param.Field.Note = mktag(int(ln.Esc))
}
case EscHeap, // touched by escflood, moved to heap
}
}
+func reexportdepslice(ll []*Node) {
+ for _, n := range ll {
+ reexportdep(n)
+ }
+}
+
func reexportdep(n *Node) {
if n == nil {
return
reexportdeplist(n.List)
reexportdeplist(n.Rlist)
reexportdeplist(n.Ninit)
- reexportdeplist(n.Nbody)
+ reexportdepslice(n.Nbody.Slice())
}
func dumpexportconst(s *Sym) {
dumpexporttype(t)
if t.Etype == TFUNC && n.Class == PFUNC {
- if n.Func != nil && n.Func.Inl != nil {
+ if n.Func != nil && len(n.Func.Inl.Slice()) != 0 {
// when lazily typechecking inlined bodies, some re-exported ones may not have been typechecked yet.
// currently that can leave unresolved ONONAMEs in import-dot-ed packages in the wrong package
if Debug['l'] < 2 {
}
// NOTE: The space after %#S here is necessary for ld's export data parser.
- exportf("\tfunc %v %v { %v }\n", Sconv(s, obj.FmtSharp), Tconv(t, obj.FmtShort|obj.FmtSharp), Hconv(n.Func.Inl, obj.FmtSharp|obj.FmtBody))
+ exportf("\tfunc %v %v { %v }\n", Sconv(s, obj.FmtSharp), Tconv(t, obj.FmtShort|obj.FmtSharp), Hconvslice(n.Func.Inl.Slice(), obj.FmtSharp|obj.FmtBody))
- reexportdeplist(n.Func.Inl)
+ reexportdepslice(n.Func.Inl.Slice())
} else {
exportf("\tfunc %v %v\n", Sconv(s, obj.FmtSharp), Tconv(t, obj.FmtShort|obj.FmtSharp))
}
if f.Nointerface {
exportf("\t//go:nointerface\n")
}
- if f.Type.Nname != nil && f.Type.Nname.Func.Inl != nil { // nname was set by caninl
+ if f.Type.Nname != nil && len(f.Type.Nname.Func.Inl.Slice()) != 0 { // nname was set by caninl
// when lazily typechecking inlined bodies, some re-exported ones may not have been typechecked yet.
// currently that can leave unresolved ONONAMEs in import-dot-ed packages in the wrong package
if Debug['l'] < 2 {
typecheckinl(f.Type.Nname)
}
- exportf("\tfunc (%v) %v %v { %v }\n", Tconv(getthisx(f.Type).Type, obj.FmtSharp), Sconv(f.Sym, obj.FmtShort|obj.FmtByte|obj.FmtSharp), Tconv(f.Type, obj.FmtShort|obj.FmtSharp), Hconv(f.Type.Nname.Func.Inl, obj.FmtSharp))
- reexportdeplist(f.Type.Nname.Func.Inl)
+ exportf("\tfunc (%v) %v %v { %v }\n", Tconv(getthisx(f.Type).Type, obj.FmtSharp), Sconv(f.Sym, obj.FmtShort|obj.FmtByte|obj.FmtSharp), Tconv(f.Type, obj.FmtShort|obj.FmtSharp), Hconvslice(f.Type.Nname.Func.Inl.Slice(), obj.FmtSharp))
+ reexportdepslice(f.Type.Nname.Func.Inl.Slice())
} else {
exportf("\tfunc (%v) %v %v\n", Tconv(getthisx(f.Type).Type, obj.FmtSharp), Sconv(f.Sym, obj.FmtShort|obj.FmtByte|obj.FmtSharp), Tconv(f.Type, obj.FmtShort|obj.FmtSharp))
}
// mark the symbol so it is not reexported
if s.Def == nil {
- if exportname(s.Name) || initname(s.Name) {
+ if Debug['A'] != 0 || exportname(s.Name) || initname(s.Name) {
s.Flags |= SymExport
} else {
s.Flags |= SymPackage // package scope
// E.g. for %S: %+S %#S %-S print an identifier properly qualified for debug/export/internal mode.
//
// The mode flags +, - and # are sticky, meaning they persist through
-// recursions of %N, %T and %S, but not the h and l flags. The u flag is
+// recursions of %N, %T and %S, but not the h and l flags. The u flag is
// sticky only on %T recursions and only used in %-/Sym mode.
//
TFORW: "FORW",
TFIELD: "FIELD",
TSTRING: "STRING",
+ TUNSAFEPTR: "TUNSAFEPTR",
TANY: "ANY",
}
if name != "" {
str = name + " " + typ
}
- if flag&obj.FmtShort == 0 && !fmtbody && t.Note != nil {
+
+ // The fmtbody flag is intended to suppress escape analysis annotations
+ // when printing a function type used in a function body.
+ // (The escape analysis tags do not apply to func vars.)
+ // But it must not suppress struct field tags.
+ // See golang.org/issue/13777 and golang.org/issue/14331.
+ if flag&obj.FmtShort == 0 && (!fmtbody || !t.Funarg) && t.Note != nil {
str += " " + strconv.Quote(*t.Note)
}
return str
// some statements allow for an init, but at most one,
// but we may have an arbitrary number added, eg by typecheck
- // and inlining. If it doesn't fit the syntax, emit an enclosing
+ // and inlining. If it doesn't fit the syntax, emit an enclosing
// block starting with the init statements.
// if we can just say "for" n->ninit; ... then do so
if fmtmode == FErr {
return "func literal"
}
- if n.Nbody != nil {
+ if len(n.Nbody.Slice()) != 0 {
return fmt.Sprintf("%v { %v }", n.Type, n.Nbody)
}
return fmt.Sprintf("%v { %v }", n.Type, n.Name.Param.Closure.Nbody)
} else {
fmt.Fprintf(&buf, "%v%v", Oconv(int(n.Op), 0), Jconv(n, 0))
}
- if recur && n.Type == nil && n.Name.Param.Ntype != nil {
+ if recur && n.Type == nil && n.Name != nil && n.Name.Param != nil && n.Name.Param.Ntype != nil {
indent(&buf)
fmt.Fprintf(&buf, "%v-ntype%v", Oconv(int(n.Op), 0), n.Name.Param.Ntype)
}
fmt.Fprintf(&buf, "%v-rlist%v", Oconv(int(n.Op), 0), n.Rlist)
}
- if n.Nbody != nil {
+ if len(n.Nbody.Slice()) != 0 {
indent(&buf)
fmt.Fprintf(&buf, "%v-body%v", Oconv(int(n.Op), 0), n.Nbody)
}
return Hconv(l, 0)
}
+func (n Nodes) String() string {
+ return Hconvslice(n.Slice(), 0)
+}
+
// Fmt '%H': NodeList.
// Flags: all those of %N plus ',': separate with comma's instead of semicolons.
func Hconv(l *NodeList, flag int) string {
return buf.String()
}
+func Hconvslice(l []*Node, flag int) string {
+ if len(l) == 0 && fmtmode == FDbg {
+ return "<nil>"
+ }
+
+ sf := flag
+ sm, sb := setfmode(&flag)
+ sep := "; "
+ if fmtmode == FDbg {
+ sep = "\n"
+ } else if flag&obj.FmtComma != 0 {
+ sep = ", "
+ }
+
+ var buf bytes.Buffer
+ for i, n := range l {
+ buf.WriteString(Nconv(n, 0))
+ if i+1 < len(l) {
+ buf.WriteString(sep)
+ }
+ }
+
+ flag = sf
+ fmtbody = sb
+ fmtmode = sm
+ return buf.String()
+}
+
func dumplist(s string, l *NodeList) {
fmt.Printf("%s%v\n", s, Hconv(l, obj.FmtSign))
}
+func dumpslice(s string, l []*Node) {
+ fmt.Printf("%s%v\n", s, Hconvslice(l, obj.FmtSign))
+}
+
func Dump(s string, n *Node) {
fmt.Printf("%s [%p]%v\n", s, n, Nconv(n, obj.FmtSign))
}
oldfn := Curfn
Curfn = n.Name.Curfn
+ if Curfn.Func.Closure != nil && Curfn.Op == OCLOSURE {
+ Curfn = Curfn.Func.Closure
+ }
n.Name.Heapaddr = temp(Ptrto(n.Type))
buf := fmt.Sprintf("&%v", n.Sym)
n.Name.Heapaddr.Sym = Lookup(buf)
return lab
}
+// There is a copy of checkgoto in the new SSA backend.
+// Please keep them in sync.
func checkgoto(from *Node, to *Node) {
if from.Sym == to.Sym {
return
}
}
+func Genslice(l []*Node) {
+ for _, n := range l {
+ gen(n)
+ }
+}
+
// generate code to start new proc running call n.
func cgen_proc(n *Node, proc int) {
switch n.Left.Op {
if Curfn == nil {
Fatalf("no curfn for tempname")
}
+ if Curfn.Func.Closure != nil && Curfn.Op == OCLOSURE {
+ Dump("Tempname", Curfn)
+ Fatalf("adding tempname to wrong closure function")
+ }
if t == nil {
Yyerror("tempname called with nil type")
n.Ullman = 1
n.Esc = EscNever
n.Name.Curfn = Curfn
- Curfn.Func.Dcl = list(Curfn.Func.Dcl, n)
+ Curfn.Func.Dcl = append(Curfn.Func.Dcl, n)
dowidth(t)
n.Xoffset = 0
gen(n.Right) // contin: incr
Patch(p1, Pc) // test:
Bgen(n.Left, false, -1, breakpc) // if(!test) goto break
- Genlist(n.Nbody) // body
+ Genslice(n.Nbody.Slice()) // body
gjmp(continpc)
Patch(breakpc, Pc) // done:
continpc = scontin
p2 := gjmp(nil) // p2: goto else
Patch(p1, Pc) // test:
Bgen(n.Left, false, int(-n.Likely), p2) // if(!test) goto p2
- Genlist(n.Nbody) // then
+ Genslice(n.Nbody.Slice()) // then
p3 := gjmp(nil) // goto done
Patch(p2, Pc) // else:
Genlist(n.Rlist) // else
lab.Breakpc = breakpc
}
- Patch(p1, Pc) // test:
- Genlist(n.Nbody) // switch(test) body
- Patch(breakpc, Pc) // done:
+ Patch(p1, Pc) // test:
+ Genslice(n.Nbody.Slice()) // switch(test) body
+ Patch(breakpc, Pc) // done:
breakpc = sbreak
if lab != nil {
lab.Breakpc = nil
lab.Breakpc = breakpc
}
- Patch(p1, Pc) // test:
- Genlist(n.Nbody) // select() body
- Patch(breakpc, Pc) // done:
+ Patch(p1, Pc) // test:
+ Genslice(n.Nbody.Slice()) // select() body
+ Patch(breakpc, Pc) // done:
breakpc = sbreak
if lab != nil {
lab.Breakpc = nil
cgen_dcl(n.Left)
case OAS:
- if gen_as_init(n) {
+ if gen_as_init(n, false) {
break
}
Cgen_as(n.Left, n.Right)
Cgen_as_wb(n.Left, n.Right, true)
case OAS2DOTTYPE:
- cgen_dottype(n.Rlist.N, n.List.N, n.List.Next.N, false)
+ cgen_dottype(n.Rlist.N, n.List.N, n.List.Next.N, needwritebarrier(n.List.N, n.Rlist.N))
case OCALLMETH:
cgen_callmeth(n, 0)
"log"
"os"
"os/exec"
- "path"
+ "path/filepath"
+ "strings"
"testing"
)
// Make sure "hello world" does not link in all the
-// fmt.scanf routines. See issue 6853.
+// fmt.scanf routines. See issue 6853.
func TestScanfRemoval(t *testing.T) {
testenv.MustHaveGoBuild(t)
defer os.RemoveAll(dir)
// Create source.
- src := path.Join(dir, "test.go")
+ src := filepath.Join(dir, "test.go")
f, err := os.Create(src)
if err != nil {
log.Fatalf("could not create source file: %v", err)
f.Close()
// Name of destination.
- dst := path.Join(dir, "test")
+ dst := filepath.Join(dir, "test")
// Compile source.
cmd := exec.Command("go", "build", "-o", dst, src)
log.Fatalf("scanf code not removed from helloworld")
}
}
+
+// Make sure -S prints assembly code. See issue 14515.
+func TestDashS(t *testing.T) {
+ testenv.MustHaveGoBuild(t)
+
+ // Make a directory to work in.
+ dir, err := ioutil.TempDir("", "issue14515-")
+ if err != nil {
+ log.Fatalf("could not create directory: %v", err)
+ }
+ defer os.RemoveAll(dir)
+
+ // Create source.
+ src := filepath.Join(dir, "test.go")
+ f, err := os.Create(src)
+ if err != nil {
+ log.Fatalf("could not create source file: %v", err)
+ }
+ f.Write([]byte(`
+package main
+import "fmt"
+func main() {
+ fmt.Println("hello world")
+}
+`))
+ f.Close()
+
+ // Compile source.
+ cmd := exec.Command("go", "build", "-gcflags", "-S", "-o", filepath.Join(dir, "test"), src)
+ out, err := cmd.CombinedOutput()
+ if err != nil {
+ log.Fatalf("could not build target: %v", err)
+ }
+
+ patterns := []string{
+ // It is hard to look for actual instructions in an
+ // arch-independent way. So we'll just look for
+ // pseudo-ops that are arch-independent.
+ "\tTEXT\t",
+ "\tFUNCDATA\t",
+ "\tPCDATA\t",
+ }
+ outstr := string(out)
+ for _, p := range patterns {
+ if !strings.Contains(outstr, p) {
+ println(outstr)
+ panic("can't find pattern " + p)
+ }
+ }
+}
import (
"bytes"
- "cmd/compile/internal/big"
"cmd/internal/obj"
)
-// The parser's maximum stack size.
-// We have to use a #define macro here since yacc
-// or bison will check for its definition and use
-// a potentially smaller value if it is undefined.
const (
- NHUNK = 50000
- BUFSIZ = 8192
- NSYMB = 500
- NHASH = 1024
- MAXALIGN = 7
UINF = 100
PRIME1 = 3
BADWIDTH = -1000000000
MaxStackVarSize = 10 * 1024 * 1024
)
-const (
- // These values are known by runtime.
- // The MEMx and NOEQx values must run in parallel. See algtype.
- AMEM = iota
- AMEM0
- AMEM8
- AMEM16
- AMEM32
- AMEM64
- AMEM128
- ANOEQ
- ANOEQ0
- ANOEQ8
- ANOEQ16
- ANOEQ32
- ANOEQ64
- ANOEQ128
- ASTRING
- AINTER
- ANILINTER
- ASLICE
- AFLOAT32
- AFLOAT64
- ACPLX64
- ACPLX128
- AUNK = 100
-)
-
-const (
- // Maximum size in bits for Mpints before signalling
- // overflow and also mantissa precision for Mpflts.
- Mpprec = 512
- // Turn on for constant arithmetic debugging output.
- Mpdebug = false
-)
-
-// Mpint represents an integer constant.
-type Mpint struct {
- Val big.Int
- Ovf bool // set if Val overflowed compiler limit (sticky)
- Rune bool // set if syntax indicates default type rune
-}
-
-// Mpflt represents a floating-point constant.
-type Mpflt struct {
- Val big.Float
-}
-
-// Mpcplx represents a complex constant.
-type Mpcplx struct {
- Real Mpflt
- Imag Mpflt
-}
-
type Val struct {
// U contains one of:
// bool bool when n.ValCtype() == CTBOOL
}
type Sym struct {
- Lexical uint16
- Flags uint8
- Link *Sym
+ Flags SymFlags
Uniqgen uint32
+ Link *Sym
Importdef *Pkg // where imported definition was found
Linkname string // link name
Note *string // literal string annotation
// TARRAY
- Bound int64 // negative is dynamic array
+ Bound int64 // negative is slice
// TMAP
Bucket *Type // internal type representing a hash bucket
E []InitEntry
}
+type SymFlags uint8
+
const (
- SymExport = 1 << 0 // to be exported
- SymPackage = 1 << 1
- SymExported = 1 << 2 // already written out by export
- SymUniq = 1 << 3
- SymSiggen = 1 << 4
- SymAsm = 1 << 5
- SymAlgGen = 1 << 6
+ SymExport SymFlags = 1 << iota // to be exported
+ SymPackage
+ SymExported // already written out by export
+ SymUniq
+ SymSiggen
+ SymAsm
+ SymAlgGen
)
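The switch from untyped constants to a dedicated `SymFlags` type follows a standard Go idiom: a named unsigned type plus `1 << iota` gives type-checked bit flags, so a plain `int` can no longer be passed where a flag set is expected. A minimal standalone illustration (the `Flags` type and names are invented for the example):

```go
package main

import "fmt"

// Flags is a typed bit set. Because it is a named type, the compiler
// rejects accidental mixing with untyped flag values in signatures.
type Flags uint8

const (
	FlagExport Flags = 1 << iota // to be exported
	FlagPackage
	FlagUniq
)

// Has reports whether all bits in x are set in f.
func (f Flags) Has(x Flags) bool { return f&x == x }

func main() {
	var f Flags
	f |= FlagExport | FlagUniq
	fmt.Println(f.Has(FlagExport), f.Has(FlagPackage)) // true false
}
```

Only the first constant in the group needs the explicit type and `1 << iota`; the rest inherit both, which keeps the declaration as compact as the untyped version it replaces.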
var dclstack *Sym
const (
// types of channel
- // must match ../../pkg/nreflect/type.go:/Chandir
- Cxxx = 0
+ // must match ../../../../reflect/type.go:/ChanDir
Crecv = 1 << 0
Csend = 1 << 1
Cboth = Crecv | Csend
offset int32
}
-type Io struct {
- infile string
- bin *obj.Biobuf
- cp string // used for content when bin==nil
- last int
- peekc int
- peekc1 int // second peekc for ...
- nlsemi bool
- eofnl bool
- importsafe bool
-}
-
type Dlist struct {
field *Type
}
-type Idir struct {
- link *Idir
- dir string
-}
-
// argument passing to/from
// smagic and umagic
type Magic struct {
var dotlist [10]Dlist // size is max depth of embeddeds
-var curio Io
-
-var pushedio Io
-
+// lexlineno is the line number _after_ the most recently read rune.
+// In particular, it's advanced (or rewound) as newlines are read (or unread).
var lexlineno int32
+// lineno is the line number at the start of the most recently lexed token.
var lineno int32
-var prevlineno int32
-
var pragcgobuf string
var infile string
var Debug_checknil int
var Debug_typeassert int
-var importmyname *Sym // my name for package
-
var localpkg *Pkg // package being compiled
var importpkg *Pkg // package being imported
var myimportpath string
-var idirs *Idir
-
var localimport string
var asmhdr string
// when the race detector is enabled.
var instrumenting bool
-// Pending annotations for next func declaration.
-var (
- noescape bool
- noinline bool
- norace bool
- nosplit bool
- nowritebarrier bool
- nowritebarrierrec bool
- systemstack bool
-)
-
var debuglive int
var Ctxt *obj.Link
-var nointerface bool
-
var writearchive int
var bstdout obj.Biobuf
var panicslice *Node
+var panicdivide *Node
+
var throwreturn *Node
+
+var growslice *Node
+
+var writebarrierptr *Node
+var typedmemmove *Node
+
+var panicdottype *Node
Clearp(Pc)
}
+func flushdata() {
+ if dfirst == nil {
+ return
+ }
+ newplist()
+ *Pc = *dfirst
+ Pc = dpc
+ Clearp(Pc)
+ dfirst = nil
+ dpc = nil
+}
+
// Fixup instructions after allocauto (formerly compactframe) has moved all autos around.
func fixautoused(p *obj.Prog) {
for lp := &p; ; {
return pl
}
+// nodarg does something that depends on the value of
+// fp (this was previously completely undocumented).
+//
+// fp=1 corresponds to input args
+// fp=0 corresponds to output args
+// fp=-1 is a special case of output args for a
+// specific call from walk that previously (and
+// incorrectly) passed a 1; the behavior is exactly
+// the same as it is for 1, except that PARAMOUT is
+// generated instead of PARAM.
func nodarg(t *Type, fp int) *Node {
var n *Node
Fatalf("nodarg: not field %v", t)
}
- if fp == 1 {
- var n *Node
- for l := Curfn.Func.Dcl; l != nil; l = l.Next {
- n = l.N
+ if fp == 1 || fp == -1 {
+ for _, n := range Curfn.Func.Dcl {
if (n.Class == PPARAM || n.Class == PPARAMOUT) && !isblanksym(t.Sym) && n.Sym == t.Sym {
return n
}
case 1: // input arg
n.Class = PPARAM
+ case -1: // output arg from paramstoheap
+ n.Class = PPARAMOUT
+
case 2: // offset output arg
Fatalf("shouldn't be used")
}
// hand-craft the following initialization code
// var initdone· uint8 (1)
// func init() (2)
-// if initdone· != 0 { (3)
-// if initdone· == 2 (4)
-// return
-// throw(); (5)
+// if initdone· > 1 { (3)
+// return (3a)
+// }
+// if initdone· == 1 { (4)
+// throw(); (4a)
// }
// initdone· = 1; (6)
// // over all matching imported symbols
// initdone· = 2; (10)
// return (11)
// }
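Rendered as ordinary Go, the guard that the numbered pseudocode above describes looks roughly like the sketch below. `initdone·` is not a legal identifier outside the compiler, so the sketch uses `initdone`, and `initGate` is an invented wrapper standing in for the generated `init` function:

```go
package main

import "fmt"

var initdone uint8

// initGate mimics the compiler-generated init wrapper: 0 means not
// started, 1 means in progress (a recursive call would re-enter here),
// and 2 means finished.
func initGate(body func()) {
	if initdone > 1 {
		return // (3a): already initialized
	}
	if initdone == 1 {
		panic("recursive call during initialization") // (4a)
	}
	initdone = 1 // (6)
	body()       // (7)-(9): dependency inits, then this package's statements
	initdone = 2 // (10)
}

func main() {
	ran := 0
	initGate(func() { ran++ })
	initGate(func() { ran++ }) // second call is a no-op
	fmt.Println(ran, initdone)
}
```

The patch's change from `initdone· != 0` to `initdone· > 1` lets the common already-initialized path return immediately (and be marked likely), leaving the in-progress check on the cold path.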
-func anyinit(n *NodeList) bool {
+func anyinit(n []*Node) bool {
// are there any interesting init statements
- for l := n; l != nil; l = l.Next {
- switch l.N.Op {
+ for _, ln := range n {
+ switch ln.Op {
case ODCLFUNC, ODCLCONST, ODCLTYPE, OEMPTY:
break
case OAS, OASWB:
- if isblank(l.N.Left) && candiscard(l.N.Right) {
+ if isblank(ln.Left) && candiscard(ln.Right) {
break
}
fallthrough
return
}
- n = initfix(n)
- if !anyinit(n) {
+ nf := initfix(n)
+ if !anyinit(nf) {
return
}
- var r *NodeList
+ var r []*Node
// (1)
gatevar := newname(Lookup("initdone·"))
// (3)
a := Nod(OIF, nil, nil)
-
- a.Left = Nod(ONE, gatevar, Nodintconst(0))
- r = list(r, a)
+ a.Left = Nod(OGT, gatevar, Nodintconst(1))
+ a.Likely = 1
+ r = append(r, a)
+ // (3a)
+ a.Nbody.Set([]*Node{Nod(ORETURN, nil, nil)})
// (4)
b := Nod(OIF, nil, nil)
-
- b.Left = Nod(OEQ, gatevar, Nodintconst(2))
- b.Nbody = list1(Nod(ORETURN, nil, nil))
- a.Nbody = list1(b)
-
- // (5)
- b = syslook("throwinit", 0)
-
- b = Nod(OCALL, b, nil)
- a.Nbody = list(a.Nbody, b)
+ b.Left = Nod(OEQ, gatevar, Nodintconst(1))
+ // this actually isn't likely, but code layout is better
+ // like this: no JMP needed after the call.
+ b.Likely = 1
+ r = append(r, b)
+ // (4a)
+ b.Nbody.Set([]*Node{Nod(OCALL, syslook("throwinit", 0), nil)})
// (6)
a = Nod(OAS, gatevar, Nodintconst(1))
- r = list(r, a)
+ r = append(r, a)
// (7)
for _, s := range initSyms {
if s.Def != nil && s != initsym {
// could check that it is fn of no args/returns
a = Nod(OCALL, s.Def, nil)
- r = list(r, a)
+ r = append(r, a)
}
}
// (8)
- r = concat(r, n)
+ r = append(r, nf...)
// (9)
// could check that it is fn of no args/returns
break
}
a = Nod(OCALL, s.Def, nil)
- r = list(r, a)
+ r = append(r, a)
}
// (10)
a = Nod(OAS, gatevar, Nodintconst(2))
- r = list(r, a)
+ r = append(r, a)
// (11)
a = Nod(ORETURN, nil, nil)
- r = list(r, a)
+ r = append(r, a)
exportsym(fn.Func.Nname)
- fn.Nbody = r
+ fn.Nbody.Set(r)
funcbody(fn)
Curfn = fn
typecheck(&fn, Etop)
- typechecklist(r, Etop)
+ typecheckslice(r, Etop)
Curfn = nil
funccompile(fn)
}
// saves a copy of the body. Then inlcalls walks each function body to
// expand calls to inlinable functions.
//
-// The debug['l'] flag controls the agressiveness. Note that main() swaps level 0 and 1,
+// The debug['l'] flag controls the aggressiveness. Note that main() swaps level 0 and 1,
// making 1 the default and -l disable. -ll and more is useful to flush out bugs.
// These additional levels (beyond -l) may be buggy and are not supported.
// 0: disabled
var inlretvars *NodeList // temp out variables
-// Get the function's package. For ordinary functions it's on the ->sym, but for imported methods
+// Get the function's package. For ordinary functions it's on the ->sym, but for imported methods
// the ->sym can be re-used in the local package, so peel it off the receiver's type.
func fnpkg(fn *Node) *Pkg {
if fn.Type.Thistuple != 0 {
return fn.Sym.Pkg
}
-// Lazy typechecking of imported bodies. For local functions, caninl will set ->typecheck
+// Lazy typechecking of imported bodies. For local functions, caninl will set ->typecheck
// because they're a copy of an already checked body.
func typecheckinl(fn *Node) {
lno := int(setlineno(fn))
}
if Debug['m'] > 2 {
- fmt.Printf("typecheck import [%v] %v { %v }\n", fn.Sym, Nconv(fn, obj.FmtLong), Hconv(fn.Func.Inl, obj.FmtSharp))
+ fmt.Printf("typecheck import [%v] %v { %v }\n", fn.Sym, Nconv(fn, obj.FmtLong), Hconvslice(fn.Func.Inl.Slice(), obj.FmtSharp))
}
save_safemode := safemode
savefn := Curfn
Curfn = fn
- typechecklist(fn.Func.Inl, Etop)
+ typecheckslice(fn.Func.Inl.Slice(), Etop)
Curfn = savefn
safemode = save_safemode
}
// If marked "go:noinline", don't inline
- if fn.Func.Noinline {
+ if fn.Func.Pragma&Noinline != 0 {
return
}
// If fn has no body (is defined outside of Go), cannot inline it.
- if fn.Nbody == nil {
+ if len(fn.Nbody.Slice()) == 0 {
return
}
const maxBudget = 80
budget := maxBudget // allowed hairyness
- if ishairylist(fn.Nbody, &budget) || budget < 0 {
+ if ishairyslice(fn.Nbody.Slice(), &budget) || budget < 0 {
return
}
savefn := Curfn
Curfn = fn
- fn.Func.Nname.Func.Inl = fn.Nbody
- fn.Nbody = inlcopylist(fn.Func.Nname.Func.Inl)
- fn.Func.Nname.Func.Inldcl = inlcopylist(fn.Func.Nname.Name.Defn.Func.Dcl)
+ fn.Func.Nname.Func.Inl.Set(fn.Nbody.Slice())
+ fn.Nbody.Set(inlcopyslice(fn.Func.Nname.Func.Inl.Slice()))
+ inldcl := inlcopyslice(fn.Func.Nname.Name.Defn.Func.Dcl)
+ if len(inldcl) > 0 {
+ fn.Func.Nname.Func.Inldcl = &inldcl
+ }
fn.Func.Nname.Func.InlCost = int32(maxBudget - budget)
// hack, TODO, check for better way to link method nodes back to the thing with the ->inl
fn.Type.Nname = fn.Func.Nname
if Debug['m'] > 1 {
- fmt.Printf("%v: can inline %v as: %v { %v }\n", fn.Line(), Nconv(fn.Func.Nname, obj.FmtSharp), Tconv(fn.Type, obj.FmtSharp), Hconv(fn.Func.Nname.Func.Inl, obj.FmtSharp))
+ fmt.Printf("%v: can inline %v as: %v { %v }\n", fn.Line(), Nconv(fn.Func.Nname, obj.FmtSharp), Tconv(fn.Type, obj.FmtSharp), Hconvslice(fn.Func.Nname.Func.Inl.Slice(), obj.FmtSharp))
} else if Debug['m'] != 0 {
fmt.Printf("%v: can inline %v\n", fn.Line(), fn.Func.Nname)
}
return false
}
+func ishairyslice(ll []*Node, budget *int) bool {
+ for _, n := range ll {
+ if ishairy(n, budget) {
+ return true
+ }
+ }
+ return false
+}
+
func ishairy(n *Node, budget *int) bool {
if n == nil {
return false
switch n.Op {
// Call is okay if inlinable and we have the budget for the body.
case OCALLFUNC:
- if n.Left.Func != nil && n.Left.Func.Inl != nil {
+ if n.Left.Func != nil && len(n.Left.Func.Inl.Slice()) != 0 {
*budget -= int(n.Left.Func.InlCost)
break
}
if n.Left.Op == ONAME && n.Left.Left != nil && n.Left.Left.Op == OTYPE && n.Left.Right != nil && n.Left.Right.Op == ONAME { // methods called as functions
- if n.Left.Sym.Def != nil && n.Left.Sym.Def.Func.Inl != nil {
+ if n.Left.Sym.Def != nil && len(n.Left.Sym.Def.Func.Inl.Slice()) != 0 {
*budget -= int(n.Left.Sym.Def.Func.InlCost)
break
}
if n.Left.Type.Nname == nil {
Fatalf("no function definition for [%p] %v\n", n.Left.Type, Tconv(n.Left.Type, obj.FmtSign))
}
- if n.Left.Type.Nname.Func.Inl != nil {
+ if len(n.Left.Type.Nname.Func.Inl.Slice()) != 0 {
*budget -= int(n.Left.Type.Nname.Func.InlCost)
break
}
(*budget)--
- return *budget < 0 || ishairy(n.Left, budget) || ishairy(n.Right, budget) || ishairylist(n.List, budget) || ishairylist(n.Rlist, budget) || ishairylist(n.Ninit, budget) || ishairylist(n.Nbody, budget)
+ return *budget < 0 || ishairy(n.Left, budget) || ishairy(n.Right, budget) || ishairylist(n.List, budget) || ishairylist(n.Rlist, budget) || ishairylist(n.Ninit, budget) || ishairyslice(n.Nbody.Slice(), budget)
}
// Inlcopy and inlcopylist recursively copy the body of a function.
m := Nod(OXXX, nil, nil)
*m = *n
if m.Func != nil {
- m.Func.Inl = nil
+ m.Func.Inl.Set(nil)
}
m.Left = inlcopy(n.Left)
m.Right = inlcopy(n.Right)
m.List = inlcopylist(n.List)
m.Rlist = inlcopylist(n.Rlist)
m.Ninit = inlcopylist(n.Ninit)
- m.Nbody = inlcopylist(n.Nbody)
+ m.Nbody.Set(inlcopyslice(n.Nbody.Slice()))
return m
}
+// Inlcopyslice is like inlcopylist, but for a slice.
+func inlcopyslice(ll []*Node) []*Node {
+ r := make([]*Node, 0, len(ll))
+ for _, ln := range ll {
+ c := inlcopy(ln)
+ if c != nil {
+ r = append(r, c)
+ }
+ }
+ return r
+}
+
// Inlcalls/nodelist/node walks fn's statements and expressions and substitutes any
-// calls made to inlineable functions. This is the external entry point.
+// calls made to inlineable functions. This is the external entry point.
func inlcalls(fn *Node) {
savefn := Curfn
Curfn = fn
n.Op = OBLOCK
// n->ninit stays
- n.List = n.Nbody
+ n.List = n.Nbody.NodeList()
- n.Nbody = nil
+ n.Nbody.Set(nil)
n.Rlist = nil
}
func inlconv2expr(np **Node) {
n := *np
r := n.Rlist.N
- addinit(&r, concat(n.Ninit, n.Nbody))
+ addinit(&r, concat(n.Ninit, n.Nbody.NodeList()))
*np = r
}
}
l := n.Rlist
- addinit(&l.N, concat(n.Ninit, n.Nbody))
+ addinit(&l.N, concat(n.Ninit, n.Nbody.NodeList()))
return l
}
}
}
+func inlnodeslice(l []*Node) {
+ for i := range l {
+ inlnode(&l[i])
+ }
+}
+
// inlnode recurses over the tree to find inlineable calls, which will
-// be turned into OINLCALLs by mkinlcall. When the recursion comes
+// be turned into OINLCALLs by mkinlcall. When the recursion comes
// back up it will examine left, right, list, rlist, ninit, ntest, nincr,
// nbody and nelse and use one of the 4 inlconv/glue functions above
// to turn the OINLCALL into an expression, a statement, or patch it
}
}
- inlnodelist(n.Nbody)
- for l := n.Nbody; l != nil; l = l.Next {
- if l.N.Op == OINLCALL {
- inlconv2stmt(l.N)
+ inlnodeslice(n.Nbody.Slice())
+ for _, n := range n.Nbody.Slice() {
+ if n.Op == OINLCALL {
+ inlconv2stmt(n)
}
}
if Debug['m'] > 3 {
fmt.Printf("%v:call to func %v\n", n.Line(), Nconv(n.Left, obj.FmtSign))
}
- if n.Left.Func != nil && n.Left.Func.Inl != nil { // normal case
+ if n.Left.Func != nil && len(n.Left.Func.Inl.Slice()) != 0 { // normal case
mkinlcall(np, n.Left, n.Isddd)
} else if n.Left.Op == ONAME && n.Left.Left != nil && n.Left.Left.Op == OTYPE && n.Left.Right != nil && n.Left.Right.Op == ONAME { // methods called as functions
if n.Left.Sym.Def != nil {
// parameters.
func mkinlcall1(np **Node, fn *Node, isddd bool) {
// For variadic fn.
- if fn.Func.Inl == nil {
+ if len(fn.Func.Inl.Slice()) == 0 {
return
}
// Bingo, we have a function node, and it has an inlineable body
if Debug['m'] > 1 {
- fmt.Printf("%v: inlining call to %v %v { %v }\n", n.Line(), fn.Sym, Tconv(fn.Type, obj.FmtSharp), Hconv(fn.Func.Inl, obj.FmtSharp))
+ fmt.Printf("%v: inlining call to %v %v { %v }\n", n.Line(), fn.Sym, Tconv(fn.Type, obj.FmtSharp), Hconvslice(fn.Func.Inl.Slice(), obj.FmtSharp))
} else if Debug['m'] != 0 {
fmt.Printf("%v: inlining call to %v\n", n.Line(), fn)
}
//dumplist("ninit pre", ninit);
- var dcl *NodeList
- if fn.Name.Defn != nil { // local function
- dcl = fn.Func.Inldcl // imported function
+ var dcl []*Node
+ if fn.Name.Defn != nil {
+ // local function
+ if fn.Func.Inldcl != nil {
+ dcl = *fn.Func.Inldcl
+ }
} else {
+ // imported function
dcl = fn.Func.Dcl
}
i := 0
// Make temp names to use instead of the originals
- for ll := dcl; ll != nil; ll = ll.Next {
- if ll.N.Class == PPARAMOUT { // return values handled below.
+ for _, ln := range dcl {
+ if ln.Class == PPARAMOUT { // return values handled below.
continue
}
- if ll.N.Op == ONAME {
- ll.N.Name.Inlvar = inlvar(ll.N)
+ if ln.Op == ONAME {
+ ln.Name.Inlvar = inlvar(ln)
// Typecheck because inlvar is not necessarily a function parameter.
- typecheck(&ll.N.Name.Inlvar, Erv)
+ typecheck(&ln.Name.Inlvar, Erv)
- if ll.N.Class&^PHEAP != PAUTO {
- ninit = list(ninit, Nod(ODCL, ll.N.Name.Inlvar, nil)) // otherwise gen won't emit the allocations for heapallocs
+ if ln.Class&^PHEAP != PAUTO {
+ ninit = list(ninit, Nod(ODCL, ln.Name.Inlvar, nil)) // otherwise gen won't emit the allocations for heapallocs
}
}
}
inlretlabel = newlabel_inl()
inlgen++
- body := inlsubstlist(fn.Func.Inl)
+ body := inlsubstslice(fn.Func.Inl.Slice())
- body = list(body, Nod(OGOTO, inlretlabel, nil)) // avoid 'not used' when function doesn't have return
- body = list(body, Nod(OLABEL, inlretlabel, nil))
+ body = append(body, Nod(OGOTO, inlretlabel, nil)) // avoid 'not used' when function doesn't have return
+ body = append(body, Nod(OLABEL, inlretlabel, nil))
- typechecklist(body, Etop)
+ typecheckslice(body, Etop)
//dumplist("ninit post", ninit);
call := Nod(OINLCALL, nil, nil)
call.Ninit = ninit
- call.Nbody = body
+ call.Nbody.Set(body)
call.Rlist = inlretvars
call.Type = n.Type
call.Typecheck = 1
// instead we emit the things that the body needs
// and each use must redo the inlining.
// luckily these are small.
- body = fn.Func.Inl
- fn.Func.Inl = nil // prevent infinite recursion (shouldn't happen anyway)
- inlnodelist(call.Nbody)
- for ll := call.Nbody; ll != nil; ll = ll.Next {
- if ll.N.Op == OINLCALL {
- inlconv2stmt(ll.N)
+ body = fn.Func.Inl.Slice()
+ fn.Func.Inl.Set(nil) // prevent infinite recursion (shouldn't happen anyway)
+ inlnodeslice(call.Nbody.Slice())
+ for _, n := range call.Nbody.Slice() {
+ if n.Op == OINLCALL {
+ inlconv2stmt(n)
}
}
- fn.Func.Inl = body
+ fn.Func.Inl.Set(body)
if Debug['m'] > 2 {
fmt.Printf("%v: After inlining %v\n\n", n.Line(), Nconv(*np, obj.FmtSign))
// This may no longer be necessary now that we run escape analysis
// after wrapper generation, but for 1.5 this is conservatively left
- // unchanged. See bugs 11053 and 9537.
+ // unchanged. See bugs 11053 and 9537.
if var_.Esc == EscHeap {
addrescapes(n)
}
- Curfn.Func.Dcl = list(Curfn.Func.Dcl, n)
+ Curfn.Func.Dcl = append(Curfn.Func.Dcl, n)
return n
}
n.Class = PAUTO
n.Used = true
n.Name.Curfn = Curfn // the calling function, not the called one
- Curfn.Func.Dcl = list(Curfn.Func.Dcl, n)
+ Curfn.Func.Dcl = append(Curfn.Func.Dcl, n)
return n
}
n.Class = PAUTO
n.Used = true
n.Name.Curfn = Curfn // the calling function, not the called one
- Curfn.Func.Dcl = list(Curfn.Func.Dcl, n)
+ Curfn.Func.Dcl = append(Curfn.Func.Dcl, n)
return n
}
return n
}
-// inlsubst and inlsubstlist recursively copy the body of the saved
-// pristine ->inl body of the function while substituting references
+// inlsubst, inlsubstlist, and inlsubstslice recursively copy the body of the
+// saved pristine ->inl body of the function while substituting references
// to input/output parameters with ones to the tmpnames, and
// substituting returns with assignments to the output.
func inlsubstlist(ll *NodeList) *NodeList {
return l
}
+func inlsubstslice(ll []*Node) []*Node {
+ l := make([]*Node, 0, len(ll))
+ for _, n := range ll {
+ l = append(l, inlsubst(n))
+ }
+ return l
+}
+
func inlsubst(n *Node) *Node {
if n == nil {
return nil
m.List = inlsubstlist(n.List)
m.Rlist = inlsubstlist(n.Rlist)
m.Ninit = concat(m.Ninit, inlsubstlist(n.Ninit))
- m.Nbody = inlsubstlist(n.Nbody)
+ m.Nbody.Set(inlsubstslice(n.Nbody.Slice()))
return m
}
}
}
+func setlnoslice(ll []*Node, lno int) {
+ for _, n := range ll {
+ setlno(n, lno)
+ }
+}
+
func setlno(n *Node, lno int) {
if n == nil {
return
setlnolist(n.List, lno)
setlnolist(n.Rlist, lno)
setlnolist(n.Ninit, lno)
- setlnolist(n.Nbody, lno)
+ setlnoslice(n.Nbody.Slice(), lno)
}
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-//go:generate go run mkbuiltin.go runtime unsafe
+//go:generate go run mkbuiltin.go
package gc
import (
- "bytes"
+ "cmd/compile/internal/ssa"
"cmd/internal/obj"
"flag"
"fmt"
Debug_wb int
)
+const BOM = 0xFEFF
+
// Debug arguments.
// These can be specified with the -d flag, as in "-d nil"
// to set the debug_checknil variable. In general the list passed
}
Ctxt.Flag_shared = int32(flag_shared)
Ctxt.Flag_dynlink = flag_dynlink
+ Ctxt.Flag_optimize = Debug['N'] == 0
Ctxt.Debugasm = int32(Debug['S'])
Ctxt.Debugvlog = int32(Debug['v'])
msanpkg.Name = "msan"
}
if flag_race != 0 && flag_msan != 0 {
- log.Fatal("can not use both -race and -msan")
+ log.Fatal("cannot use both -race and -msan")
} else if flag_race != 0 || flag_msan != 0 {
instrumenting = true
}
}
}
}
+ // special case for ssa for now
+ if strings.HasPrefix(name, "ssa/") {
+ // expect form ssa/phase/flag
+ // e.g. -d=ssa/generic_cse/time
+ // _ in phase name also matches space
+ phase := name[4:]
+ flag := "debug" // default flag is debug
+ if i := strings.Index(phase, "/"); i >= 0 {
+ flag = phase[i+1:]
+ phase = phase[:i]
+ }
+ err := ssa.PhaseOption(phase, flag, val)
+ if err != "" {
+				log.Fatal(err)
+ }
+ continue Split
+ }
log.Fatalf("unknown debug key -d %s\n", name)
}
}
dclcontext = PEXTERN
nerrors = 0
lexlineno = 1
- const BOM = 0xFEFF
+
+ loadsys()
for _, infile = range flag.Args() {
if trace && Debug['x'] != 0 {
linehistpush(infile)
- curio.infile = infile
- var err error
- curio.bin, err = obj.Bopenr(infile)
+ bin, err := obj.Bopenr(infile)
if err != nil {
fmt.Printf("open %s: %v\n", infile, err)
errorexit()
}
- curio.peekc = 0
- curio.peekc1 = 0
- curio.nlsemi = false
- curio.eofnl = false
- curio.last = 0
-
// Skip initial BOM if present.
- if obj.Bgetrune(curio.bin) != BOM {
- obj.Bungetrune(curio.bin)
+ if obj.Bgetrune(bin) != BOM {
+ obj.Bungetrune(bin)
}
block = 1
imported_unsafe = false
- parse_file()
+ parse_file(bin)
if nsyntaxerrors != 0 {
errorexit()
}
+		// Instead of converting EOF into '\n' in getc, counting it as an extra
+		// line for the line history to work, and then correcting for it elsewhere,
+		// just add a line here.
+ lexlineno++
+
linehistpop()
- if curio.bin != nil {
- obj.Bterm(curio.bin)
- }
+ obj.Bterm(bin)
}
testdclstack()
Curfn = l.N
decldepth = 1
saveerrors()
- typechecklist(l.N.Nbody, Etop)
+ typecheckslice(l.N.Nbody.Slice(), Etop)
checkreturn(l.N)
if nerrors != 0 {
- l.N.Nbody = nil // type errors; do not compile
+ l.N.Nbody.Set(nil) // type errors; do not compile
}
}
}
// Typecheck imported function bodies if debug['l'] > 1,
// otherwise lazily when used or re-exported.
for _, n := range importlist {
- if n.Func.Inl != nil {
+ if len(n.Func.Inl.Slice()) != 0 {
saveerrors()
typecheckinl(n)
}
return false
}
- // symbol table may be first; skip it
- sz := arsize(b, "__.GOSYMDEF")
-
- if sz >= 0 {
- obj.Bseek(b, int64(sz), 1)
- } else {
- obj.Bseek(b, 8, 0)
- }
-
- // package export block is next
- sz = arsize(b, "__.PKGDEF")
-
- if sz <= 0 {
- return false
- }
- return true
+ // package export block should be first
+ sz := arsize(b, "__.PKGDEF")
+ return sz > 0
}
+var idirs []string
+
func addidir(dir string) {
- if dir == "" {
- return
+ if dir != "" {
+ idirs = append(idirs, dir)
}
+}
- var pp **Idir
- for pp = &idirs; *pp != nil; pp = &(*pp).link {
- }
- *pp = new(Idir)
- (*pp).link = nil
- (*pp).dir = dir
+func isDriveLetter(b byte) bool {
+ return 'a' <= b && b <= 'z' || 'A' <= b && b <= 'Z'
}
// is this path a local name? begins with ./ or ../ or /
func islocalname(name string) bool {
return strings.HasPrefix(name, "/") ||
- Ctxt.Windows != 0 && len(name) >= 3 && isAlpha(int(name[0])) && name[1] == ':' && name[2] == '/' ||
+ Ctxt.Windows != 0 && len(name) >= 3 && isDriveLetter(name[0]) && name[1] == ':' && name[2] == '/' ||
strings.HasPrefix(name, "./") || name == "." ||
strings.HasPrefix(name, "../") || name == ".."
}
return "", false
}
- for p := idirs; p != nil; p = p.link {
- file = fmt.Sprintf("%s/%s.a", p.dir, name)
+ for _, dir := range idirs {
+ file = fmt.Sprintf("%s/%s.a", dir, name)
if _, err := os.Stat(file); err == nil {
return file, true
}
- file = fmt.Sprintf("%s/%s.o", p.dir, name)
+ file = fmt.Sprintf("%s/%s.o", dir, name)
if _, err := os.Stat(file); err == nil {
return file, true
}
return "", false
}
-func fakeimport() {
- importpkg = mkpkg("fake")
- cannedimports("fake.o", "$$\n")
+// loadsys loads the definitions for the low-level runtime and unsafe functions,
+// so that the compiler can generate calls to them,
+// but does not make the names "runtime" or "unsafe" visible as packages.
+func loadsys() {
+ if Debug['A'] != 0 {
+ return
+ }
+
+ block = 1
+ iota_ = -1000000
+ incannedimport = 1
+
+ importpkg = Runtimepkg
+ parse_import(obj.Binitr(strings.NewReader(runtimeimport)), nil)
+
+ importpkg = unsafepkg
+ parse_import(obj.Binitr(strings.NewReader(unsafeimport)), nil)
+
+ importpkg = nil
+ incannedimport = 0
}
-// TODO(gri) line argument doesn't appear to be used
-func importfile(f *Val, line int) {
- if _, ok := f.U.(string); !ok {
+func importfile(f *Val, indent []byte) {
+ if importpkg != nil {
+ Fatalf("importpkg not nil")
+ }
+
+ path_, ok := f.U.(string)
+ if !ok {
Yyerror("import statement not a string")
- fakeimport()
return
}
- if len(f.U.(string)) == 0 {
+ if len(path_) == 0 {
Yyerror("import path is empty")
- fakeimport()
return
}
- if isbadimport(f.U.(string)) {
- fakeimport()
+ if isbadimport(path_) {
return
}
// but we reserve the import path "main" to identify
// the main package, just as we reserve the import
// path "math" to identify the standard math package.
- if f.U.(string) == "main" {
+ if path_ == "main" {
Yyerror("cannot import \"main\"")
errorexit()
}
- if myimportpath != "" && f.U.(string) == myimportpath {
- Yyerror("import %q while compiling that package (import cycle)", f.U.(string))
+ if myimportpath != "" && path_ == myimportpath {
+ Yyerror("import %q while compiling that package (import cycle)", path_)
errorexit()
}
- path_ := f.U.(string)
-
if mapped, ok := importMap[path_]; ok {
path_ = mapped
}
errorexit()
}
- importpkg = mkpkg(f.U.(string))
- cannedimports("unsafe.o", unsafeimport)
+ importpkg = unsafepkg
imported_unsafe = true
return
}
if islocalname(path_) {
if path_[0] == '/' {
Yyerror("import path cannot be absolute path")
- fakeimport()
return
}
if localimport != "" {
prefix = localimport
}
- cleanbuf := prefix
- cleanbuf += "/"
- cleanbuf += path_
- cleanbuf = path.Clean(cleanbuf)
- path_ = cleanbuf
+ path_ = path.Join(prefix, path_)
if isbadimport(path_) {
- fakeimport()
return
}
}
file, found := findpkg(path_)
if !found {
- Yyerror("can't find import: %q", f.U.(string))
+ Yyerror("can't find import: %q", path_)
errorexit()
}
importpkg = mkpkg(path_)
- // If we already saw that package, feed a dummy statement
- // to the lexer to avoid parsing export data twice.
if importpkg.Imported {
- tag := ""
- if importpkg.Safe {
- tag = "safe"
- }
-
- p := fmt.Sprintf("package %s %s\n$$\n", importpkg.Name, tag)
- cannedimports(file, p)
return
}
importpkg.Imported = true
- var err error
- var imp *obj.Biobuf
- imp, err = obj.Bopenr(file)
+ imp, err := obj.Bopenr(file)
if err != nil {
- Yyerror("can't open import: %q: %v", f.U.(string), err)
+ Yyerror("can't open import: %q: %v", path_, err)
errorexit()
}
+ defer obj.Bterm(imp)
if strings.HasSuffix(file, ".a") {
if !skiptopkgdef(imp) {
switch c {
case '\n':
// old export format
- pushedio = curio
-
- curio.bin = imp
- curio.peekc = 0
- curio.peekc1 = 0
- curio.infile = file
- curio.nlsemi = false
- typecheckok = true
-
- push_parser()
+ parse_import(imp, indent)
case 'B':
// new export format
obj.Bgetc(imp) // skip \n after $$B
Import(imp)
- // continue as if the package was imported before (see above)
- tag := ""
- if importpkg.Safe {
- tag = "safe"
- }
- p := fmt.Sprintf("package %s %s\n$$\n", importpkg.Name, tag)
- cannedimports(file, p)
- // Reset incannedimport flag (we are not truly in a
- // canned import) - this will cause importpkg.Direct to
- // be set via parser.import_package (was issue #13977).
- //
- // TODO(gri) Remove this global variable and convoluted
- // code in the process of streamlining the import code.
- incannedimport = 0
-
default:
- Yyerror("no import in %q", f.U.(string))
+ Yyerror("no import in %q", path_)
+ errorexit()
}
-}
-
-func unimportfile() {
- pop_parser()
- if curio.bin != nil {
- obj.Bterm(curio.bin)
- curio.bin = nil
- } else {
- lexlineno-- // re correct sys.6 line number
+ if safemode != 0 && !importpkg.Safe {
+ Yyerror("cannot import unsafe package %q", importpkg.Path)
}
-
- curio = pushedio
-
- pushedio.bin = nil
- incannedimport = 0
- typecheckok = false
-}
-
-func cannedimports(file string, cp string) {
- lexlineno++ // if sys.6 is included on line 1,
-
- pushedio = curio
-
- curio.bin = nil
- curio.peekc = 0
- curio.peekc1 = 0
- curio.infile = file
- curio.cp = cp
- curio.nlsemi = false
- curio.importsafe = false
-
- typecheckok = true
- incannedimport = 1
-
- push_parser()
}
-func isSpace(c int) bool {
+func isSpace(c rune) bool {
return c == ' ' || c == '\t' || c == '\n' || c == '\r'
}
-func isAlpha(c int) bool {
- return 'A' <= c && c <= 'Z' || 'a' <= c && c <= 'z'
+func isLetter(c rune) bool {
+ return 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || c == '_'
}
-func isDigit(c int) bool {
+func isDigit(c rune) bool {
return '0' <= c && c <= '9'
}
-func isAlnum(c int) bool {
- return isAlpha(c) || isDigit(c)
-}
func plan9quote(s string) string {
if s == "" {
return s
}
-func isfrog(c int) bool {
- // complain about possibly invisible control characters
- if c < ' ' {
- return !isSpace(c) // exclude good white space
- }
+type Pragma uint16
- if 0x7f <= c && c <= 0xa0 { // DEL, unicode block including unbreakable space.
- return true
- }
- return false
-}
+const (
+ Nointerface Pragma = 1 << iota
+ Noescape // func parameters don't escape
+ Norace // func must not have race detector annotations
+ Nosplit // func should not execute on separate stack
+ Noinline // func should not be inlined
+ Systemstack // func must run on system stack
+ Nowritebarrier // emit compiler error instead of write barrier
+ Nowritebarrierrec // error on write barrier in this or recursive callees
+ CgoUnsafeArgs // treat a pointer to one arg as a pointer to them all
+)
-type yySymType struct {
- sym *Sym
- val Val
- op Op
+type lexer struct {
+ // source
+ bin *obj.Biobuf
+ peekr1 rune
+ peekr2 rune // second peekc for ...
+
+ nlsemi bool // if set, '\n' and EOF translate to ';'
+
+ // pragma flags
+ // accumulated by lexer; reset by parser
+ pragma Pragma
+
+ // current token
+ tok int32
+ sym_ *Sym // valid if tok == LNAME
+ val Val // valid if tok == LLITERAL
+ op Op // valid if tok == LOPER, LASOP, or LINCOP, or prec > 0
+ prec OpPrec // operator precedence; 0 if not a binary operator
}
+type OpPrec int
+
const (
- LLITERAL = 57346 + iota
+ // Precedences of binary operators (must be > 0).
+ PCOMM OpPrec = 1 + iota
+ POROR
+ PANDAND
+ PCMP
+ PADD
+ PMUL
+)
+
+const (
+ // The value of single-char tokens is just their character's Unicode value.
+ // They are all below utf8.RuneSelf. Shift other tokens up to avoid conflicts.
+
+ // names and literals
+ LNAME = utf8.RuneSelf + iota
+ LLITERAL
+
+ // operator-based operations
+ LOPER
LASOP
+ LINCOP
+
+ // miscellaneous
LCOLAS
+ LCOMM
+ LDDD
+
+ // keywords
LBREAK
LCASE
LCHAN
LCONST
LCONTINUE
- LDDD
LDEFAULT
LDEFER
LELSE
LIMPORT
LINTERFACE
LMAP
- LNAME
LPACKAGE
LRANGE
LRETURN
LSWITCH
LTYPE
LVAR
- LANDAND
- LANDNOT
- LCOMM
- LDEC
- LEQ
- LGE
- LGT
+
LIGNORE
- LINC
- LLE
- LLSH
- LLT
- LNE
- LOROR
- LRSH
)
-func _yylex(yylval *yySymType) int32 {
- var c1 int
- var op Op
- var escflag int
- var v int64
- var cp *bytes.Buffer
- var s *Sym
- var str string
-
- prevlineno = lineno
+func (l *lexer) next() {
+ nlsemi := l.nlsemi
+ l.nlsemi = false
+ l.prec = 0
l0:
- c := getc()
- if isSpace(c) {
- if c == '\n' && curio.nlsemi {
- ungetc(c)
+ // skip white space
+ c := l.getr()
+ for isSpace(c) {
+ if c == '\n' && nlsemi {
if Debug['x'] != 0 {
fmt.Printf("lex: implicit semi\n")
}
- return ';'
+ // Insert implicit semicolon on previous line,
+ // before the newline character.
+ lineno = lexlineno - 1
+ l.tok = ';'
+ return
}
-
- goto l0
- }
-
- lineno = lexlineno // start of token
-
- if c >= utf8.RuneSelf {
- // all multibyte runes are alpha
- cp = &lexbuf
- cp.Reset()
-
- goto talph
- }
-
- if isAlpha(c) {
- cp = &lexbuf
- cp.Reset()
- goto talph
+ c = l.getr()
}
- if isDigit(c) {
- cp = &lexbuf
- cp.Reset()
- if c != '0' {
- for {
- cp.WriteByte(byte(c))
- c = getc()
- if isDigit(c) {
- continue
- }
- if c == '.' {
- goto casedot
- }
- if c == 'e' || c == 'E' || c == 'p' || c == 'P' {
- goto caseep
- }
- if c == 'i' {
- goto casei
- }
- goto ncu
- }
- }
-
- cp.WriteByte(byte(c))
- c = getc()
- if c == 'x' || c == 'X' {
- for {
- cp.WriteByte(byte(c))
- c = getc()
- if isDigit(c) {
- continue
- }
- if c >= 'a' && c <= 'f' {
- continue
- }
- if c >= 'A' && c <= 'F' {
- continue
- }
- if lexbuf.Len() == 2 {
- Yyerror("malformed hex constant")
- }
- if c == 'p' {
- goto caseep
- }
- goto ncu
- }
- }
-
- if c == 'p' { // 0p begins floating point zero
- goto caseep
- }
-
- c1 = 0
- for {
- if !isDigit(c) {
- break
- }
- if c < '0' || c > '7' {
- c1 = 1 // not octal
- }
- cp.WriteByte(byte(c))
- c = getc()
- }
+ // start of token
+ lineno = lexlineno
- if c == '.' {
- goto casedot
- }
- if c == 'e' || c == 'E' {
- goto caseep
- }
- if c == 'i' {
- goto casei
- }
- if c1 != 0 {
- Yyerror("malformed octal constant")
+ // identifiers and keywords
+ // (for better error messages consume all chars >= utf8.RuneSelf for identifiers)
+ if isLetter(c) || c >= utf8.RuneSelf {
+ l.ident(c)
+ if l.tok == LIGNORE {
+ goto l0
}
- goto ncu
+ return
}
+ // c < utf8.RuneSelf
+
+ var c1 rune
+ var op Op
+ var prec OpPrec
switch c {
case EOF:
- lineno = prevlineno
- ungetc(EOF)
- return -1
+ l.ungetr(EOF) // return EOF again in future next call
+ // Treat EOF as "end of line" for the purposes
+ // of inserting a semicolon.
+ if nlsemi {
+ if Debug['x'] != 0 {
+ fmt.Printf("lex: implicit semi\n")
+ }
+ l.tok = ';'
+ return
+ }
+ l.tok = -1
+ return
- case '_':
- cp = &lexbuf
- cp.Reset()
- goto talph
+ case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
+ l.number(c)
+ return
case '.':
- c1 = getc()
+ c1 = l.getr()
if isDigit(c1) {
- cp = &lexbuf
- cp.Reset()
- cp.WriteByte(byte(c))
- c = c1
- goto casedot
+ l.ungetr(c1)
+ l.number('.')
+ return
}
if c1 == '.' {
- c1 = getc()
+ c1 = l.getr()
if c1 == '.' {
c = LDDD
goto lx
}
- ungetc(c1)
+ l.ungetr(c1)
c1 = '.'
}
- // "..."
case '"':
- lexbuf.Reset()
- lexbuf.WriteString(`"<string>"`)
-
- cp = &strbuf
- cp.Reset()
-
- for {
- if escchar('"', &escflag, &v) {
- break
- }
- if v < utf8.RuneSelf || escflag != 0 {
- cp.WriteByte(byte(v))
- } else {
- cp.WriteRune(rune(v))
- }
- }
-
- goto strlit
+ l.stdString()
+ return
- // `...`
case '`':
- lexbuf.Reset()
- lexbuf.WriteString("`<string>`")
-
- cp = &strbuf
- cp.Reset()
-
- for {
- c = int(getr())
- if c == '\r' {
- continue
- }
- if c == EOF {
- Yyerror("eof in string")
- break
- }
-
- if c == '`' {
- break
- }
- cp.WriteRune(rune(c))
- }
-
- goto strlit
+ l.rawString()
+ return
- // '.'
case '\'':
- if escchar('\'', &escflag, &v) {
- Yyerror("empty character literal or unescaped ' in character literal")
- v = '\''
- }
-
- if !escchar('\'', &escflag, &v) {
- Yyerror("missing '")
- ungetc(int(v))
- }
-
- x := new(Mpint)
- yylval.val.U = x
- Mpmovecfix(x, v)
- x.Rune = true
- if Debug['x'] != 0 {
- fmt.Printf("lex: codepoint literal\n")
- }
- litbuf = "string literal"
- return LLITERAL
+ l.rune()
+ return
case '/':
- c1 = getc()
+ c1 = l.getr()
if c1 == '*' {
- nl := false
+ c = l.getr()
for {
- c = int(getr())
- if c == '\n' {
- nl = true
- }
- for c == '*' {
- c = int(getr())
+ if c == '*' {
+ c = l.getr()
if c == '/' {
- if nl {
- ungetc('\n')
- }
- goto l0
- }
-
- if c == '\n' {
- nl = true
+ break
}
+ continue
}
-
if c == EOF {
Yyerror("eof in comment")
errorexit()
}
+ c = l.getr()
+ }
+
+ // A comment containing newlines acts like a newline.
+ if lexlineno > lineno && nlsemi {
+ if Debug['x'] != 0 {
+ fmt.Printf("lex: implicit semi\n")
+ }
+ l.tok = ';'
+ return
}
+ goto l0
}
if c1 == '/' {
- c = getlinepragma()
+ c = l.getlinepragma()
for {
if c == '\n' || c == EOF {
- ungetc(c)
+ l.ungetr(c)
goto l0
}
- c = int(getr())
+ c = l.getr()
}
}
- if c1 == '=' {
- op = ODIV
- goto asop
- }
+ op = ODIV
+ prec = PMUL
+ goto binop1
case ':':
- c1 = getc()
+ c1 = l.getr()
if c1 == '=' {
- c = int(LCOLAS)
+ c = LCOLAS
goto lx
}
case '*':
- c1 = getc()
- if c1 == '=' {
- op = OMUL
- goto asop
- }
+ op = OMUL
+ prec = PMUL
+ goto binop
case '%':
- c1 = getc()
- if c1 == '=' {
- op = OMOD
- goto asop
- }
+ op = OMOD
+ prec = PMUL
+ goto binop
case '+':
- c1 = getc()
- if c1 == '+' {
- c = int(LINC)
- goto lx
- }
-
- if c1 == '=' {
- op = OADD
- goto asop
- }
+ op = OADD
+ goto incop
case '-':
- c1 = getc()
- if c1 == '-' {
- c = int(LDEC)
- goto lx
- }
-
- if c1 == '=' {
- op = OSUB
- goto asop
- }
+ op = OSUB
+ goto incop
case '>':
- c1 = getc()
+ c = LOPER
+ c1 = l.getr()
if c1 == '>' {
- c = int(LRSH)
- c1 = getc()
- if c1 == '=' {
- op = ORSH
- goto asop
- }
-
- break
+ op = ORSH
+ prec = PMUL
+ goto binop
}
+ l.prec = PCMP
if c1 == '=' {
- c = int(LGE)
+ l.op = OGE
goto lx
}
-
- c = int(LGT)
+ l.op = OGT
case '<':
- c1 = getc()
+ c = LOPER
+ c1 = l.getr()
if c1 == '<' {
- c = int(LLSH)
- c1 = getc()
- if c1 == '=' {
- op = OLSH
- goto asop
- }
-
- break
+ op = OLSH
+ prec = PMUL
+ goto binop
}
- if c1 == '=' {
- c = int(LLE)
+ if c1 == '-' {
+ c = LCOMM
+ // Not a binary operator, but parsed as one
+ // so we can give a good error message when used
+ // in an expression context.
+ l.prec = PCOMM
+ l.op = OSEND
goto lx
}
- if c1 == '-' {
- c = int(LCOMM)
+ l.prec = PCMP
+ if c1 == '=' {
+ l.op = OLE
goto lx
}
-
- c = int(LLT)
+ l.op = OLT
case '=':
- c1 = getc()
+ c1 = l.getr()
if c1 == '=' {
- c = int(LEQ)
+ c = LOPER
+ l.prec = PCMP
+ l.op = OEQ
goto lx
}
case '!':
- c1 = getc()
+ c1 = l.getr()
if c1 == '=' {
- c = int(LNE)
+ c = LOPER
+ l.prec = PCMP
+ l.op = ONE
goto lx
}
case '&':
- c1 = getc()
+ c1 = l.getr()
if c1 == '&' {
- c = int(LANDAND)
+ c = LOPER
+ l.prec = PANDAND
+ l.op = OANDAND
goto lx
}
if c1 == '^' {
- c = int(LANDNOT)
- c1 = getc()
- if c1 == '=' {
- op = OANDNOT
- goto asop
- }
-
- break
+ c = LOPER
+ op = OANDNOT
+ prec = PMUL
+ goto binop
}
- if c1 == '=' {
- op = OAND
- goto asop
- }
+ op = OAND
+ prec = PMUL
+ goto binop1
case '|':
- c1 = getc()
+ c1 = l.getr()
if c1 == '|' {
- c = int(LOROR)
+ c = LOPER
+ l.prec = POROR
+ l.op = OOROR
goto lx
}
- if c1 == '=' {
- op = OOR
- goto asop
- }
+ op = OOR
+ prec = PADD
+ goto binop1
case '^':
- c1 = getc()
- if c1 == '=' {
- op = OXOR
- goto asop
+ op = OXOR
+ prec = PADD
+ goto binop
+
+ case '(', '[', '{', ',', ';':
+ goto lx
+
+ case ')', ']', '}':
+ l.nlsemi = true
+ goto lx
+
+ case '#', '$', '?', '@', '\\':
+ if importpkg != nil {
+ goto lx
}
+ fallthrough
default:
- goto lx
+ // anything else is illegal
+ Yyerror("syntax error: illegal character %#U", c)
+ goto l0
}
- ungetc(c1)
+ l.ungetr(c1)
lx:
if Debug['x'] != 0 {
- if c > 0xff {
- fmt.Printf("%v lex: TOKEN %s\n", Ctxt.Line(int(lexlineno)), lexname(c))
+ if c >= utf8.RuneSelf {
+ fmt.Printf("%v lex: TOKEN %s\n", Ctxt.Line(int(lineno)), lexname(c))
} else {
- fmt.Printf("%v lex: TOKEN '%c'\n", Ctxt.Line(int(lexlineno)), c)
+ fmt.Printf("%v lex: TOKEN '%c'\n", Ctxt.Line(int(lineno)), c)
}
}
- if isfrog(c) {
- Yyerror("illegal character 0x%x", uint(c))
- goto l0
- }
- if importpkg == nil && (c == '#' || c == '$' || c == '?' || c == '@' || c == '\\') {
- Yyerror("%s: unexpected %c", "syntax error", c)
- goto l0
+ l.tok = c
+ return
+
+incop:
+ c1 = l.getr()
+ if c1 == c {
+ l.nlsemi = true
+ l.op = op
+ c = LINCOP
+ goto lx
}
+ prec = PADD
+ goto binop1
- return int32(c)
+binop:
+ c1 = l.getr()
+binop1:
+ if c1 != '=' {
+ l.ungetr(c1)
+ l.op = op
+ l.prec = prec
+ goto lx
+ }
-asop:
- yylval.op = op
+ l.op = op
if Debug['x'] != 0 {
fmt.Printf("lex: TOKEN ASOP %s=\n", goopnames[op])
}
- return LASOP
+ l.tok = LASOP
+}
+
+func (l *lexer) ident(c rune) {
+ cp := &lexbuf
+ cp.Reset()
- // cp is set to lexbuf and some
- // prefix has been stored
-talph:
+ // accelerate common case (7bit ASCII)
+ for isLetter(c) || isDigit(c) {
+ cp.WriteByte(byte(c))
+ c = l.getr()
+ }
+
+ // general case
for {
if c >= utf8.RuneSelf {
- ungetc(c)
- r := rune(getr())
-
- // 0xb7 · is used for internal names
- if !unicode.IsLetter(r) && !unicode.IsDigit(r) && (importpkg == nil || r != 0xb7) {
- Yyerror("invalid identifier character U+%04x", r)
- }
- if cp.Len() == 0 && unicode.IsDigit(r) {
- Yyerror("identifier cannot begin with digit U+%04x", r)
+ if unicode.IsLetter(c) || c == '_' || unicode.IsDigit(c) || importpkg != nil && c == 0xb7 {
+ if cp.Len() == 0 && unicode.IsDigit(c) {
+ Yyerror("identifier cannot begin with digit %#U", c)
+ }
+ } else {
+ Yyerror("invalid identifier character %#U", c)
}
- cp.WriteRune(r)
- } else if !isAlnum(c) && c != '_' {
- break
- } else {
+ cp.WriteRune(c)
+ } else if isLetter(c) || isDigit(c) {
cp.WriteByte(byte(c))
+ } else {
+ break
}
- c = getc()
+ c = l.getr()
}
cp = nil
- ungetc(c)
+ l.ungetr(c)
- s = LookupBytes(lexbuf.Bytes())
- if s.Lexical == LIGNORE {
- goto l0
+ name := lexbuf.Bytes()
+
+ if len(name) >= 2 {
+ if tok, ok := keywords[string(name)]; ok {
+ if Debug['x'] != 0 {
+ fmt.Printf("lex: %s\n", lexname(tok))
+ }
+ switch tok {
+ case LBREAK, LCONTINUE, LFALL, LRETURN:
+ l.nlsemi = true
+ }
+ l.tok = tok
+ return
+ }
}
+ s := LookupBytes(name)
if Debug['x'] != 0 {
- fmt.Printf("lex: %s %s\n", s, lexname(int(s.Lexical)))
+ fmt.Printf("lex: ident %s\n", s)
}
- yylval.sym = s
- return int32(s.Lexical)
+ l.sym_ = s
+ l.nlsemi = true
+ l.tok = LNAME
+}
-ncu:
- cp = nil
- ungetc(c)
+var keywords = map[string]int32{
+ "break": LBREAK,
+ "case": LCASE,
+ "chan": LCHAN,
+ "const": LCONST,
+ "continue": LCONTINUE,
+ "default": LDEFAULT,
+ "defer": LDEFER,
+ "else": LELSE,
+ "fallthrough": LFALL,
+ "for": LFOR,
+ "func": LFUNC,
+ "go": LGO,
+ "goto": LGOTO,
+ "if": LIF,
+ "import": LIMPORT,
+ "interface": LINTERFACE,
+ "map": LMAP,
+ "package": LPACKAGE,
+ "range": LRANGE,
+ "return": LRETURN,
+ "select": LSELECT,
+ "struct": LSTRUCT,
+ "switch": LSWITCH,
+ "type": LTYPE,
+ "var": LVAR,
+
+ // 💩
+ "notwithstanding": LIGNORE,
+ "thetruthofthematter": LIGNORE,
+ "despiteallobjections": LIGNORE,
+ "whereas": LIGNORE,
+ "insofaras": LIGNORE,
+}
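The keyword table above is consulted only for names of at least two bytes, since no Go keyword is shorter than `if` or `go`. A minimal standalone sketch of that lookup strategy (the token names here are illustrative strings, not the compiler's real constants):

```go
package main

import "fmt"

// A toy version of the lexer's keyword lookup: identifiers shorter
// than two bytes can never be keywords ("if" and "go" are the
// shortest), so the map is only consulted for len(name) >= 2.
var keywords = map[string]string{
	"break": "LBREAK", "if": "LIF", "go": "LGO", "return": "LRETURN",
}

func classify(name string) string {
	if len(name) >= 2 {
		if tok, ok := keywords[name]; ok {
			return tok
		}
	}
	return "LNAME" // ordinary identifier
}

func main() {
	fmt.Println(classify("if"))     // keyword
	fmt.Println(classify("x"))      // plain identifier
	fmt.Println(classify("ifelse")) // not a keyword
}
```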
- str = lexbuf.String()
- yylval.val.U = new(Mpint)
- mpatofix(yylval.val.U.(*Mpint), str)
- if yylval.val.U.(*Mpint).Ovf {
- Yyerror("overflow in constant")
- Mpmovecfix(yylval.val.U.(*Mpint), 0)
- }
+func (l *lexer) number(c rune) {
+ var str string
+ cp := &lexbuf
+ cp.Reset()
- if Debug['x'] != 0 {
- fmt.Printf("lex: integer literal\n")
- }
- litbuf = "literal " + str
- return LLITERAL
+ // parse mantissa before decimal point or exponent
+ isInt := false
+ malformedOctal := false
+ if c != '.' {
+ if c != '0' {
+ // decimal or float
+ for isDigit(c) {
+ cp.WriteByte(byte(c))
+ c = l.getr()
+ }
-casedot:
- for {
- cp.WriteByte(byte(c))
- c = getc()
- if !isDigit(c) {
- break
+ } else {
+ // c == '0'
+ cp.WriteByte('0')
+ c = l.getr()
+ if c == 'x' || c == 'X' {
+ isInt = true // must be int
+ cp.WriteByte(byte(c))
+ c = l.getr()
+ for isDigit(c) || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
+ cp.WriteByte(byte(c))
+ c = l.getr()
+ }
+ if lexbuf.Len() == 2 {
+ Yyerror("malformed hex constant")
+ }
+ } else {
+ // decimal 0, octal, or float
+ for isDigit(c) {
+ if c > '7' {
+ malformedOctal = true
+ }
+ cp.WriteByte(byte(c))
+ c = l.getr()
+ }
+ }
}
}
- if c == 'i' {
- goto casei
- }
- if c != 'e' && c != 'E' {
- goto caseout
- }
+ // unless we have a hex number, parse fractional part or exponent, if any
+ if !isInt {
+ isInt = true // assume int unless proven otherwise
-caseep:
- if importpkg == nil && (c == 'p' || c == 'P') {
- // <mantissa>p<base-2-exponent> is allowed in .a/.o imports,
- // but not in .go sources. See #9036.
- Yyerror("malformed floating point constant")
- }
- cp.WriteByte(byte(c))
- c = getc()
- if c == '+' || c == '-' {
- cp.WriteByte(byte(c))
- c = getc()
- }
+ // fraction
+ if c == '.' {
+ isInt = false
+ cp.WriteByte('.')
+ c = l.getr()
+ for isDigit(c) {
+ cp.WriteByte(byte(c))
+ c = l.getr()
+ }
+ // Falling through to exponent parsing here permits invalid
+ // floating-point numbers with fractional mantissa and base-2
+ // (p or P) exponent. We don't care because base-2 exponents
+ // can only show up in machine-generated textual export data
+ // which will use correct formatting.
+ }
+
+ // exponent
+ // base-2 exponent (p or P) is only allowed in export data (see #9036)
+ // TODO(gri) Once we switch to binary import data, importpkg will
+ // always be nil in this function. Simplify the code accordingly.
+ if c == 'e' || c == 'E' || importpkg != nil && (c == 'p' || c == 'P') {
+ isInt = false
+ cp.WriteByte(byte(c))
+ c = l.getr()
+ if c == '+' || c == '-' {
+ cp.WriteByte(byte(c))
+ c = l.getr()
+ }
+ if !isDigit(c) {
+ Yyerror("malformed floating point constant exponent")
+ }
+ for isDigit(c) {
+ cp.WriteByte(byte(c))
+ c = l.getr()
+ }
+ }
- if !isDigit(c) {
- Yyerror("malformed floating point constant exponent")
- }
- for isDigit(c) {
- cp.WriteByte(byte(c))
- c = getc()
- }
+ // imaginary constant
+ if c == 'i' {
+ str = lexbuf.String()
+ x := new(Mpcplx)
+ Mpmovecflt(&x.Real, 0.0)
+ mpatoflt(&x.Imag, str)
+ if x.Imag.Val.IsInf() {
+ Yyerror("overflow in imaginary constant")
+ Mpmovecflt(&x.Imag, 0.0)
+ }
+ l.val.U = x
- if c == 'i' {
- goto casei
+ if Debug['x'] != 0 {
+ fmt.Printf("lex: imaginary literal\n")
+ }
+ goto done
+ }
}
- goto caseout
- // imaginary constant
-casei:
- cp = nil
+ l.ungetr(c)
- str = lexbuf.String()
- yylval.val.U = new(Mpcplx)
- Mpmovecflt(&yylval.val.U.(*Mpcplx).Real, 0.0)
- mpatoflt(&yylval.val.U.(*Mpcplx).Imag, str)
- if yylval.val.U.(*Mpcplx).Imag.Val.IsInf() {
- Yyerror("overflow in imaginary constant")
- Mpmovecflt(&yylval.val.U.(*Mpcplx).Imag, 0.0)
- }
+ if isInt {
+ if malformedOctal {
+ Yyerror("malformed octal constant")
+ }
- if Debug['x'] != 0 {
- fmt.Printf("lex: imaginary literal\n")
+ str = lexbuf.String()
+ x := new(Mpint)
+ mpatofix(x, str)
+ if x.Ovf {
+ Yyerror("overflow in constant")
+ Mpmovecfix(x, 0)
+ }
+ l.val.U = x
+
+ if Debug['x'] != 0 {
+ fmt.Printf("lex: integer literal\n")
+ }
+
+ } else { // float
+
+ str = lexbuf.String()
+ x := newMpflt()
+ mpatoflt(x, str)
+ if x.Val.IsInf() {
+ Yyerror("overflow in float constant")
+ Mpmovecflt(x, 0.0)
+ }
+ l.val.U = x
+
+ if Debug['x'] != 0 {
+ fmt.Printf("lex: floating literal\n")
+ }
}
+
+done:
litbuf = "literal " + str
- return LLITERAL
+ l.nlsemi = true
+ l.tok = LLITERAL
+}
-caseout:
- cp = nil
- ungetc(c)
+func (l *lexer) stdString() {
+ lexbuf.Reset()
+ lexbuf.WriteString(`"<string>"`)
+
+ cp := &strbuf
+ cp.Reset()
- str = lexbuf.String()
- yylval.val.U = newMpflt()
- mpatoflt(yylval.val.U.(*Mpflt), str)
- if yylval.val.U.(*Mpflt).Val.IsInf() {
- Yyerror("overflow in float constant")
- Mpmovecflt(yylval.val.U.(*Mpflt), 0.0)
+ for {
+ r, b, ok := l.onechar('"')
+ if !ok {
+ break
+ }
+ if r == 0 {
+ cp.WriteByte(b)
+ } else {
+ cp.WriteRune(r)
+ }
}
+ l.val.U = internString(cp.Bytes())
if Debug['x'] != 0 {
- fmt.Printf("lex: floating literal\n")
+ fmt.Printf("lex: string literal\n")
}
- litbuf = "literal " + str
- return LLITERAL
+ litbuf = "string literal"
+ l.nlsemi = true
+ l.tok = LLITERAL
+}
-strlit:
- yylval.val.U = internString(cp.Bytes())
+func (l *lexer) rawString() {
+ lexbuf.Reset()
+ lexbuf.WriteString("`<string>`")
+
+ cp := &strbuf
+ cp.Reset()
+
+ for {
+ c := l.getr()
+ if c == '\r' {
+ continue
+ }
+ if c == EOF {
+ Yyerror("eof in string")
+ break
+ }
+ if c == '`' {
+ break
+ }
+ cp.WriteRune(c)
+ }
+
+ l.val.U = internString(cp.Bytes())
if Debug['x'] != 0 {
fmt.Printf("lex: string literal\n")
}
litbuf = "string literal"
- return LLITERAL
+ l.nlsemi = true
+ l.tok = LLITERAL
+}
+
+func (l *lexer) rune() {
+ r, b, ok := l.onechar('\'')
+ if !ok {
+ Yyerror("empty character literal or unescaped ' in character literal")
+ r = '\''
+ }
+ if r == 0 {
+ r = rune(b)
+ }
+
+ if c := l.getr(); c != '\'' {
+ Yyerror("missing '")
+ l.ungetr(c)
+ }
+
+ x := new(Mpint)
+ l.val.U = x
+ Mpmovecfix(x, int64(r))
+ x.Rune = true
+ if Debug['x'] != 0 {
+ fmt.Printf("lex: codepoint literal\n")
+ }
+ litbuf = "rune literal"
+ l.nlsemi = true
+ l.tok = LLITERAL
}
var internedStrings = map[string]string{}
func internString(b []byte) string {
s, ok := internedStrings[string(b)] // string(b) here doesn't allocate
- if ok {
- return s
+ if !ok {
+ s = string(b)
+ internedStrings[s] = s
}
- s = string(b)
- internedStrings[s] = s
return s
}
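internString above relies on a compiler optimization: a map lookup whose key is `string(b)` for a byte slice `b` does not allocate, so a new string is materialized only the first time a value is seen. A self-contained sketch of the pattern:

```go
package main

import "fmt"

var interned = map[string]string{}

// intern returns a canonical string for b. The lookup with string(b)
// as the key is recognized by the Go compiler and does not allocate;
// the conversion is only paid on first sight of a value.
func intern(b []byte) string {
	s, ok := interned[string(b)] // no allocation for the lookup
	if !ok {
		s = string(b)
		interned[s] = s
	}
	return s
}

func main() {
	a := intern([]byte("hello"))
	b := intern([]byte("hello"))
	fmt.Println(a == b)        // true: equal contents
	fmt.Println(len(interned)) // 1: only one canonical copy
}
```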
func more(pp *string) bool {
p := *pp
- for p != "" && isSpace(int(p[0])) {
+ for p != "" && isSpace(rune(p[0])) {
p = p[1:]
}
*pp = p
// //line parse.y:15
// as a discontinuity in sequential line numbers.
// the next line of input comes from parse.y:15
-func getlinepragma() int {
- var cmd, verb, name string
-
- c := int(getr())
- if c == 'g' {
+func (l *lexer) getlinepragma() rune {
+ c := l.getr()
+ if c == 'g' { // check for //go: directive
cp := &lexbuf
cp.Reset()
cp.WriteByte('g') // already read
for {
- c = int(getr())
+ c = l.getr()
if c == EOF || c >= utf8.RuneSelf {
return c
}
pragcgo(text)
}
- cmd = text
- verb = cmd
- if i := strings.Index(verb, " "); i >= 0 {
+ verb := text
+ if i := strings.Index(text, " "); i >= 0 {
verb = verb[:i]
}
- if verb == "go:linkname" {
+ switch verb {
+ case "go:linkname":
if !imported_unsafe {
Yyerror("//go:linkname only allowed in Go files that import \"unsafe\"")
}
- f := strings.Fields(cmd)
+ f := strings.Fields(text)
if len(f) != 3 {
Yyerror("usage: //go:linkname localname linkname")
- return c
+ break
}
-
Lookup(f[1]).Linkname = f[2]
- return c
- }
-
- if verb == "go:nointerface" && obj.Fieldtrack_enabled != 0 {
- nointerface = true
- return c
- }
-
- if verb == "go:noescape" {
- noescape = true
- return c
- }
-
- if verb == "go:norace" {
- norace = true
- return c
- }
-
- if verb == "go:nosplit" {
- nosplit = true
- return c
- }
-
- if verb == "go:noinline" {
- noinline = true
- return c
- }
-
- if verb == "go:systemstack" {
+ case "go:nointerface":
+ if obj.Fieldtrack_enabled != 0 {
+ l.pragma |= Nointerface
+ }
+ case "go:noescape":
+ l.pragma |= Noescape
+ case "go:norace":
+ l.pragma |= Norace
+ case "go:nosplit":
+ l.pragma |= Nosplit
+ case "go:noinline":
+ l.pragma |= Noinline
+ case "go:systemstack":
if compiling_runtime == 0 {
Yyerror("//go:systemstack only allowed in runtime")
}
- systemstack = true
- return c
- }
-
- if verb == "go:nowritebarrier" {
+ l.pragma |= Systemstack
+ case "go:nowritebarrier":
if compiling_runtime == 0 {
Yyerror("//go:nowritebarrier only allowed in runtime")
}
- nowritebarrier = true
- return c
- }
-
- if verb == "go:nowritebarrierrec" {
+ l.pragma |= Nowritebarrier
+ case "go:nowritebarrierrec":
if compiling_runtime == 0 {
Yyerror("//go:nowritebarrierrec only allowed in runtime")
}
- nowritebarrierrec = true
- nowritebarrier = true // Implies nowritebarrier
- return c
+ l.pragma |= Nowritebarrierrec | Nowritebarrier // implies Nowritebarrier
+ case "go:cgo_unsafe_args":
+ l.pragma |= CgoUnsafeArgs
}
return c
}
+
+ // check for //line directive
if c != 'l' {
return c
}
for i := 1; i < 5; i++ {
- c = int(getr())
- if c != int("line "[i]) {
+ c = l.getr()
+ if c != rune("line "[i]) {
return c
}
}
cp.Reset()
linep := 0
for {
- c = int(getr())
+ c = l.getr()
if c == EOF {
return c
}
}
cp.WriteByte(byte(c))
}
-
cp = nil
if linep == 0 {
return c
}
text := strings.TrimSuffix(lexbuf.String(), "\r")
- n := 0
- for _, c := range text[linep:] {
- if c < '0' || c > '9' {
- goto out
- }
- n = n*10 + int(c) - '0'
- if n > 1e8 {
- Yyerror("line number out of range")
- errorexit()
- }
+ n, err := strconv.Atoi(text[linep:])
+ if err != nil {
+ return c // TODO: make this an error instead? It is almost certainly a bug.
+ }
+ if n > 1e8 {
+ Yyerror("line number out of range")
+ errorexit()
}
-
if n <= 0 {
return c
}
- name = text[:linep-1]
- linehistupdate(name, n)
- return c
-
-out:
+ linehistupdate(text[:linep-1], n)
return c
}
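The //line handling above splits the directive text at the last colon and converts the trailing digits with strconv.Atoi, discarding the directive on any malformed or out-of-range number. A simplified sketch of those checks (the function name and exact split logic here are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLineDirective extracts file and line from the text after
// "//line ", e.g. "parse.y:15". As above, a missing colon, a bad
// number, or a non-positive or huge line number means the directive
// is ignored.
func parseLineDirective(text string) (file string, line int, ok bool) {
	i := strings.LastIndex(text, ":")
	if i < 0 {
		return "", 0, false
	}
	n, err := strconv.Atoi(text[i+1:])
	if err != nil || n <= 0 || n > 1e8 {
		return "", 0, false
	}
	return text[:i], n, true
}

func main() {
	f, n, ok := parseLineDirective("parse.y:15")
	fmt.Println(f, n, ok) // parse.y 15 true
	_, _, ok = parseLineDirective("nonsense")
	fmt.Println(ok) // false
}
```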
return ""
}
i := 0
- for i < len(p) && !isSpace(int(p[i])) && p[i] != '"' {
+ for i < len(p) && !isSpace(rune(p[i])) && p[i] != '"' {
i++
}
sym := p[:i]
}
}
-func yylex(yylval *yySymType) int32 {
- lx := _yylex(yylval)
-
- if curio.nlsemi && lx == EOF {
- // Treat EOF as "end of line" for the purposes
- // of inserting a semicolon.
- lx = ';'
- }
-
- switch lx {
- case LNAME,
- LLITERAL,
- LBREAK,
- LCONTINUE,
- LFALL,
- LRETURN,
- LINC,
- LDEC,
- ')',
- '}',
- ']':
- curio.nlsemi = true
-
- default:
- curio.nlsemi = false
- }
-
- return lx
-}
-
-func getc() int {
- c := curio.peekc
- if c != 0 {
- curio.peekc = curio.peekc1
- curio.peekc1 = 0
- goto check
+func (l *lexer) getr() rune {
+ // unread rune != 0 available
+ if r := l.peekr1; r != 0 {
+ l.peekr1 = l.peekr2
+ l.peekr2 = 0
+ if r == '\n' && importpkg == nil {
+ lexlineno++
+ }
+ return r
}
- if curio.bin == nil {
- if len(curio.cp) == 0 {
- c = 0
- } else {
- c = int(curio.cp[0])
- curio.cp = curio.cp[1:]
+redo:
+ // common case: 7bit ASCII
+ c := obj.Bgetc(l.bin)
+ if c < utf8.RuneSelf {
+ if c == 0 {
+ yyerrorl(int(lexlineno), "illegal NUL byte")
+ return 0
}
- } else {
- loop:
- c = obj.Bgetc(curio.bin)
- // recognize BOM (U+FEFF): UTF-8 encoding is 0xef 0xbb 0xbf
- if c == 0xef {
- buf, err := curio.bin.Peek(2)
- if err != nil {
- yyerrorl(int(lexlineno), "illegal UTF-8 sequence ef % x followed by read error (%v)", string(buf), err)
- errorexit()
- }
- if buf[0] == 0xbb && buf[1] == 0xbf {
- yyerrorl(int(lexlineno), "Unicode (UTF-8) BOM in middle of file")
-
- // consume BOM bytes
- obj.Bgetc(curio.bin)
- obj.Bgetc(curio.bin)
- goto loop
- }
+ if c == '\n' && importpkg == nil {
+ lexlineno++
}
+ return rune(c)
}
+ // c >= utf8.RuneSelf
-check:
- switch c {
- case 0:
- if curio.bin != nil {
- Yyerror("illegal NUL byte")
- break
- }
- fallthrough
+ // uncommon case: non-ASCII
+ var buf [utf8.UTFMax]byte
+ buf[0] = byte(c)
+ buf[1] = byte(obj.Bgetc(l.bin))
+ i := 2
+ for ; i < len(buf) && !utf8.FullRune(buf[:i]); i++ {
+ buf[i] = byte(obj.Bgetc(l.bin))
+ }
- // insert \n at EOF
- case EOF:
- if curio.eofnl || curio.last == '\n' {
- return EOF
- }
- curio.eofnl = true
- c = '\n'
- fallthrough
+ r, w := utf8.DecodeRune(buf[:i])
+ if r == utf8.RuneError && w == 1 {
+ // The string conversion here makes a copy for passing
+ // to fmt.Printf, so that buf itself does not escape and
+ // can be allocated on the stack.
+ yyerrorl(int(lexlineno), "illegal UTF-8 sequence % x", string(buf[:i]))
+ }
- case '\n':
- if pushedio.bin == nil {
- lexlineno++
- }
+ if r == BOM {
+ yyerrorl(int(lexlineno), "Unicode (UTF-8) BOM in middle of file")
+ goto redo
}
- curio.last = c
- return c
+ return r
}
-func ungetc(c int) {
- curio.peekc1 = curio.peekc
- curio.peekc = c
- if c == '\n' && pushedio.bin == nil {
+func (l *lexer) ungetr(r rune) {
+ l.peekr2 = l.peekr1
+ l.peekr1 = r
+ if r == '\n' && importpkg == nil {
lexlineno--
}
}
-func getr() int32 {
- var buf [utf8.UTFMax]byte
-
- for i := 0; ; i++ {
- c := getc()
- if i == 0 && c < utf8.RuneSelf {
- return int32(c)
- }
- buf[i] = byte(c)
- if i+1 == len(buf) || utf8.FullRune(buf[:i+1]) {
- r, w := utf8.DecodeRune(buf[:i+1])
- if r == utf8.RuneError && w == 1 {
- lineno = lexlineno
- // The string conversion here makes a copy for passing
- // to fmt.Printf, so that buf itself does not escape and can
- // be allocated on the stack.
- Yyerror("illegal UTF-8 sequence % x", string(buf[:i+1]))
- }
- return int32(r)
- }
- }
-}
-
-func escchar(e int, escflg *int, val *int64) bool {
- *escflg = 0
-
- c := int(getr())
+// onechar lexes a single character within a rune or interpreted string literal,
+// handling escape sequences as necessary.
+func (l *lexer) onechar(quote rune) (r rune, b byte, ok bool) {
+ c := l.getr()
switch c {
case EOF:
Yyerror("eof in string")
- return true
+ l.ungetr(EOF)
+ return
case '\n':
Yyerror("newline in string")
- return true
+ l.ungetr('\n')
+ return
case '\\':
break
+ case quote:
+ return
+
default:
- if c == e {
- return true
- }
- *val = int64(c)
- return false
+ return c, 0, true
}
- u := 0
- c = int(getr())
- var i int
+ c = l.getr()
switch c {
case 'x':
- *escflg = 1 // it's a byte
- i = 2
- goto hex
+ return 0, byte(l.hexchar(2)), true
case 'u':
- i = 4
- u = 1
- goto hex
+ return l.unichar(4), 0, true
case 'U':
- i = 8
- u = 1
- goto hex
-
- case '0',
- '1',
- '2',
- '3',
- '4',
- '5',
- '6',
- '7':
- *escflg = 1 // it's a byte
- l := int64(c) - '0'
+ return l.unichar(8), 0, true
+
+ case '0', '1', '2', '3', '4', '5', '6', '7':
+ x := c - '0'
for i := 2; i > 0; i-- {
- c = getc()
+ c = l.getr()
if c >= '0' && c <= '7' {
- l = l*8 + int64(c) - '0'
+ x = x*8 + c - '0'
continue
}
Yyerror("non-octal character in escape sequence: %c", c)
- ungetc(c)
+ l.ungetr(c)
}
- if l > 255 {
- Yyerror("octal escape value > 255: %d", l)
+ if x > 255 {
+ Yyerror("octal escape value > 255: %d", x)
}
- *val = l
- return false
+ return 0, byte(x), true
case 'a':
c = '\a'
c = '\\'
default:
- if c != e {
+ if c != quote {
Yyerror("unknown escape sequence: %c", c)
}
}
- *val = int64(c)
- return false
+ return c, 0, true
+}
-hex:
- l := int64(0)
- for ; i > 0; i-- {
- c = getc()
- if c >= '0' && c <= '9' {
- l = l*16 + int64(c) - '0'
- continue
- }
+func (l *lexer) unichar(n int) rune {
+ x := l.hexchar(n)
+ if x > utf8.MaxRune || 0xd800 <= x && x < 0xe000 {
+ Yyerror("invalid Unicode code point in escape sequence: %#x", x)
+ x = utf8.RuneError
+ }
+ return rune(x)
+}
- if c >= 'a' && c <= 'f' {
- l = l*16 + int64(c) - 'a' + 10
- continue
- }
+func (l *lexer) hexchar(n int) uint32 {
+ var x uint32
- if c >= 'A' && c <= 'F' {
- l = l*16 + int64(c) - 'A' + 10
- continue
+ for ; n > 0; n-- {
+ var d uint32
+ switch c := l.getr(); {
+ case isDigit(c):
+ d = uint32(c - '0')
+ case 'a' <= c && c <= 'f':
+ d = uint32(c - 'a' + 10)
+ case 'A' <= c && c <= 'F':
+ d = uint32(c - 'A' + 10)
+ default:
+ Yyerror("non-hex character in escape sequence: %c", c)
+ l.ungetr(c)
+ return x
}
-
- Yyerror("non-hex character in escape sequence: %c", c)
- ungetc(c)
- break
+ x = x*16 + d
}
- if u != 0 && (l > utf8.MaxRune || (0xd800 <= l && l < 0xe000)) {
- Yyerror("invalid Unicode code point in escape sequence: %#x", l)
- l = utf8.RuneError
- }
+ return x
+}
- *val = l
- return false
+var basicTypes = [...]struct {
+ name string
+ etype EType
+}{
+ {"int8", TINT8},
+ {"int16", TINT16},
+ {"int32", TINT32},
+ {"int64", TINT64},
+ {"uint8", TUINT8},
+ {"uint16", TUINT16},
+ {"uint32", TUINT32},
+ {"uint64", TUINT64},
+ {"float32", TFLOAT32},
+ {"float64", TFLOAT64},
+ {"complex64", TCOMPLEX64},
+ {"complex128", TCOMPLEX128},
+ {"bool", TBOOL},
+ {"string", TSTRING},
+ {"any", TANY},
}
-var syms = []struct {
- name string
- lexical int
- etype EType
- op Op
+var builtinFuncs = [...]struct {
+ name string
+ op Op
}{
- // basic types
- {"int8", LNAME, TINT8, OXXX},
- {"int16", LNAME, TINT16, OXXX},
- {"int32", LNAME, TINT32, OXXX},
- {"int64", LNAME, TINT64, OXXX},
- {"uint8", LNAME, TUINT8, OXXX},
- {"uint16", LNAME, TUINT16, OXXX},
- {"uint32", LNAME, TUINT32, OXXX},
- {"uint64", LNAME, TUINT64, OXXX},
- {"float32", LNAME, TFLOAT32, OXXX},
- {"float64", LNAME, TFLOAT64, OXXX},
- {"complex64", LNAME, TCOMPLEX64, OXXX},
- {"complex128", LNAME, TCOMPLEX128, OXXX},
- {"bool", LNAME, TBOOL, OXXX},
- {"string", LNAME, TSTRING, OXXX},
- {"any", LNAME, TANY, OXXX},
- {"break", LBREAK, Txxx, OXXX},
- {"case", LCASE, Txxx, OXXX},
- {"chan", LCHAN, Txxx, OXXX},
- {"const", LCONST, Txxx, OXXX},
- {"continue", LCONTINUE, Txxx, OXXX},
- {"default", LDEFAULT, Txxx, OXXX},
- {"else", LELSE, Txxx, OXXX},
- {"defer", LDEFER, Txxx, OXXX},
- {"fallthrough", LFALL, Txxx, OXXX},
- {"for", LFOR, Txxx, OXXX},
- {"func", LFUNC, Txxx, OXXX},
- {"go", LGO, Txxx, OXXX},
- {"goto", LGOTO, Txxx, OXXX},
- {"if", LIF, Txxx, OXXX},
- {"import", LIMPORT, Txxx, OXXX},
- {"interface", LINTERFACE, Txxx, OXXX},
- {"map", LMAP, Txxx, OXXX},
- {"package", LPACKAGE, Txxx, OXXX},
- {"range", LRANGE, Txxx, OXXX},
- {"return", LRETURN, Txxx, OXXX},
- {"select", LSELECT, Txxx, OXXX},
- {"struct", LSTRUCT, Txxx, OXXX},
- {"switch", LSWITCH, Txxx, OXXX},
- {"type", LTYPE, Txxx, OXXX},
- {"var", LVAR, Txxx, OXXX},
- {"append", LNAME, Txxx, OAPPEND},
- {"cap", LNAME, Txxx, OCAP},
- {"close", LNAME, Txxx, OCLOSE},
- {"complex", LNAME, Txxx, OCOMPLEX},
- {"copy", LNAME, Txxx, OCOPY},
- {"delete", LNAME, Txxx, ODELETE},
- {"imag", LNAME, Txxx, OIMAG},
- {"len", LNAME, Txxx, OLEN},
- {"make", LNAME, Txxx, OMAKE},
- {"new", LNAME, Txxx, ONEW},
- {"panic", LNAME, Txxx, OPANIC},
- {"print", LNAME, Txxx, OPRINT},
- {"println", LNAME, Txxx, OPRINTN},
- {"real", LNAME, Txxx, OREAL},
- {"recover", LNAME, Txxx, ORECOVER},
- {"notwithstanding", LIGNORE, Txxx, OXXX},
- {"thetruthofthematter", LIGNORE, Txxx, OXXX},
- {"despiteallobjections", LIGNORE, Txxx, OXXX},
- {"whereas", LIGNORE, Txxx, OXXX},
- {"insofaras", LIGNORE, Txxx, OXXX},
+ {"append", OAPPEND},
+ {"cap", OCAP},
+ {"close", OCLOSE},
+ {"complex", OCOMPLEX},
+ {"copy", OCOPY},
+ {"delete", ODELETE},
+ {"imag", OIMAG},
+ {"len", OLEN},
+ {"make", OMAKE},
+ {"new", ONEW},
+ {"panic", OPANIC},
+ {"print", OPRINT},
+ {"println", OPRINTN},
+ {"real", OREAL},
+ {"recover", ORECOVER},
}
// lexinit initializes known symbols and the basic types.
func lexinit() {
- for _, s := range syms {
- lex := s.lexical
- s1 := Lookup(s.name)
- s1.Lexical = uint16(lex)
-
- if etype := s.etype; etype != Txxx {
- if int(etype) >= len(Types) {
- Fatalf("lexinit: %s bad etype", s.name)
+ for _, s := range basicTypes {
+ etype := s.etype
+ if int(etype) >= len(Types) {
+ Fatalf("lexinit: %s bad etype", s.name)
+ }
+ s2 := Pkglookup(s.name, builtinpkg)
+ t := Types[etype]
+ if t == nil {
+ t = typ(etype)
+ t.Sym = s2
+ if etype != TANY && etype != TSTRING {
+ dowidth(t)
}
- s2 := Pkglookup(s.name, builtinpkg)
- t := Types[etype]
- if t == nil {
- t = typ(etype)
- t.Sym = s2
-
- if etype != TANY && etype != TSTRING {
- dowidth(t)
- }
- Types[etype] = t
- }
-
- s2.Lexical = LNAME
- s2.Def = typenod(t)
- s2.Def.Name = new(Name)
- continue
+ Types[etype] = t
}
+ s2.Def = typenod(t)
+ s2.Def.Name = new(Name)
+ }
+ for _, s := range builtinFuncs {
// TODO(marvin): Fix Node.EType type union.
- if etype := s.op; etype != OXXX {
- s2 := Pkglookup(s.name, builtinpkg)
- s2.Lexical = LNAME
- s2.Def = Nod(ONAME, nil, nil)
- s2.Def.Sym = s2
- s2.Def.Etype = EType(etype)
- }
+ s2 := Pkglookup(s.name, builtinpkg)
+ s2.Def = Nod(ONAME, nil, nil)
+ s2.Def.Sym = s2
+ s2.Def.Etype = EType(s.op)
}
// logically, the type of a string literal.
s.Def = nodlit(v)
s.Def.Sym = s
s.Def.Name = new(Name)
+
+ s = Pkglookup("iota", builtinpkg)
+ s.Def = Nod(OIOTA, nil, nil)
+ s.Def.Sym = s
+ s.Def.Name = new(Name)
}
func lexinit1() {
t.Type.Type = f
// error type
- s := Lookup("error")
-
- s.Lexical = LNAME
- s1 := Pkglookup("error", builtinpkg)
+ s := Pkglookup("error", builtinpkg)
errortype = t
- errortype.Sym = s1
- s1.Lexical = LNAME
- s1.Def = typenod(errortype)
+ errortype.Sym = s
+ s.Def = typenod(errortype)
// byte alias
- s = Lookup("byte")
-
- s.Lexical = LNAME
- s1 = Pkglookup("byte", builtinpkg)
+ s = Pkglookup("byte", builtinpkg)
bytetype = typ(TUINT8)
- bytetype.Sym = s1
- s1.Lexical = LNAME
- s1.Def = typenod(bytetype)
- s1.Def.Name = new(Name)
+ bytetype.Sym = s
+ s.Def = typenod(bytetype)
+ s.Def.Name = new(Name)
// rune alias
- s = Lookup("rune")
-
- s.Lexical = LNAME
- s1 = Pkglookup("rune", builtinpkg)
+ s = Pkglookup("rune", builtinpkg)
runetype = typ(TINT32)
- runetype.Sym = s1
- s1.Lexical = LNAME
- s1.Def = typenod(runetype)
- s1.Def.Name = new(Name)
-}
-
-func lexfini() {
- for i := range syms {
- lex := syms[i].lexical
- if lex != LNAME {
- continue
- }
- s := Lookup(syms[i].name)
- s.Lexical = uint16(lex)
-
- etype := syms[i].etype
- if etype != Txxx && (etype != TANY || Debug['A'] != 0) && s.Def == nil {
- s.Def = typenod(Types[etype])
- s.Def.Name = new(Name)
- s.Origpkg = builtinpkg
- }
-
- // TODO(marvin): Fix Node.EType type union.
- etype = EType(syms[i].op)
- if etype != EType(OXXX) && s.Def == nil {
- s.Def = Nod(ONAME, nil, nil)
- s.Def.Sym = s
- s.Def.Etype = etype
- s.Origpkg = builtinpkg
- }
- }
+ runetype.Sym = s
+ s.Def = typenod(runetype)
+ s.Def.Name = new(Name)
// backend-specific builtin types (e.g. int).
for i := range Thearch.Typedefs {
- s := Lookup(Thearch.Typedefs[i].Name)
- if s.Def == nil {
- s.Def = typenod(Types[Thearch.Typedefs[i].Etype])
- s.Def.Name = new(Name)
- s.Origpkg = builtinpkg
- }
- }
-
- // there's only so much table-driven we can handle.
- // these are special cases.
- if s := Lookup("byte"); s.Def == nil {
- s.Def = typenod(bytetype)
+ s := Pkglookup(Thearch.Typedefs[i].Name, builtinpkg)
+ s.Def = typenod(Types[Thearch.Typedefs[i].Etype])
s.Def.Name = new(Name)
s.Origpkg = builtinpkg
}
+}
- if s := Lookup("error"); s.Def == nil {
- s.Def = typenod(errortype)
- s.Def.Name = new(Name)
- s.Origpkg = builtinpkg
- }
-
- if s := Lookup("rune"); s.Def == nil {
- s.Def = typenod(runetype)
- s.Def.Name = new(Name)
- s.Origpkg = builtinpkg
- }
-
- if s := Lookup("nil"); s.Def == nil {
- var v Val
- v.U = new(NilVal)
- s.Def = nodlit(v)
- s.Def.Sym = s
- s.Def.Name = new(Name)
- s.Origpkg = builtinpkg
- }
-
- if s := Lookup("iota"); s.Def == nil {
- s.Def = Nod(OIOTA, nil, nil)
- s.Def.Sym = s
- s.Origpkg = builtinpkg
- }
-
- if s := Lookup("true"); s.Def == nil {
- s.Def = Nodbool(true)
- s.Def.Sym = s
- s.Def.Name = new(Name)
- s.Origpkg = builtinpkg
- }
+func lexfini() {
+ for _, s := range builtinpkg.Syms {
+ if s.Def == nil {
+ continue
+ }
+ s1 := Lookup(s.Name)
+ if s1.Def != nil {
+ continue
+ }
- if s := Lookup("false"); s.Def == nil {
- s.Def = Nodbool(false)
- s.Def.Sym = s
- s.Def.Name = new(Name)
- s.Origpkg = builtinpkg
+ s1.Def = s.Def
+ s1.Block = s.Block
}
nodfp = Nod(ONAME, nil, nil)
nodfp.Sym = Lookup(".fp")
}
-var lexn = map[int]string{
- LANDAND: "ANDAND",
- LANDNOT: "ANDNOT",
- LASOP: "ASOP",
+var lexn = map[rune]string{
+ LNAME: "NAME",
+ LLITERAL: "LITERAL",
+
+ LOPER: "OPER",
+ LASOP: "ASOP",
+ LINCOP: "INCOP",
+
+ LCOLAS: "COLAS",
+ LCOMM: "COMM",
+ LDDD: "DDD",
+
LBREAK: "BREAK",
LCASE: "CASE",
LCHAN: "CHAN",
- LCOLAS: "COLAS",
- LCOMM: "<-",
LCONST: "CONST",
LCONTINUE: "CONTINUE",
- LDDD: "...",
- LDEC: "DEC",
LDEFAULT: "DEFAULT",
LDEFER: "DEFER",
LELSE: "ELSE",
- LEQ: "EQ",
LFALL: "FALL",
LFOR: "FOR",
LFUNC: "FUNC",
- LGE: "GE",
LGO: "GO",
LGOTO: "GOTO",
- LGT: "GT",
LIF: "IF",
LIMPORT: "IMPORT",
- LINC: "INC",
LINTERFACE: "INTERFACE",
- LLE: "LE",
- LLITERAL: "LITERAL",
- LLSH: "LSH",
- LLT: "LT",
LMAP: "MAP",
- LNAME: "NAME",
- LNE: "NE",
- LOROR: "OROR",
LPACKAGE: "PACKAGE",
LRANGE: "RANGE",
LRETURN: "RETURN",
- LRSH: "RSH",
LSELECT: "SELECT",
LSTRUCT: "STRUCT",
LSWITCH: "SWITCH",
LVAR: "VAR",
}
-func lexname(lex int) string {
+func lexname(lex rune) string {
if s, ok := lexn[lex]; ok {
return s
}
// +build ignore
-// Generate builtin.go from builtin/runtime.go and builtin/unsafe.go
-// (passed as arguments on the command line by a go:generate comment).
+// Generate builtin.go from builtin/runtime.go and builtin/unsafe.go.
// Run this after changing builtin/runtime.go and builtin/unsafe.go
// or after changing the export metadata format in the compiler.
// Either way, you need to have a working compiler binary first.
package main
import (
- "bufio"
+ "bytes"
+ "flag"
"fmt"
"io"
+ "io/ioutil"
"log"
"os"
"os/exec"
- "strings"
)
+var stdout = flag.Bool("stdout", false, "write to stdout instead of builtin.go")
+
func main() {
- f, err := os.Create("builtin.go")
- if err != nil {
- log.Fatal(err)
- }
- defer f.Close()
- w := bufio.NewWriter(f)
+ flag.Parse()
- fmt.Fprintln(w, "// AUTO-GENERATED by mkbuiltin.go; DO NOT EDIT")
- fmt.Fprintln(w, "")
- fmt.Fprintln(w, "package gc")
+ var b bytes.Buffer
+ fmt.Fprintln(&b, "// AUTO-GENERATED by mkbuiltin.go; DO NOT EDIT")
+ fmt.Fprintln(&b, "")
+ fmt.Fprintln(&b, "package gc")
- for _, name := range os.Args[1:] {
- mkbuiltin(w, name)
- }
+ mkbuiltin(&b, "runtime")
+ mkbuiltin(&b, "unsafe")
- if err := w.Flush(); err != nil {
+ var err error
+ if *stdout {
+ _, err = os.Stdout.Write(b.Bytes())
+ } else {
+ err = ioutil.WriteFile("builtin.go", b.Bytes(), 0666)
+ }
+ if err != nil {
log.Fatal(err)
}
}
-// Compile .go file, import data from .6 file, and write Go string version.
+// Compile .go file, import data from .o file, and write Go string version.
func mkbuiltin(w io.Writer, name string) {
- if err := exec.Command("go", "tool", "compile", "-A", "builtin/"+name+".go").Run(); err != nil {
+ args := []string{"tool", "compile", "-A"}
+ if name == "runtime" {
+ args = append(args, "-u")
+ }
+ args = append(args, "builtin/"+name+".go")
+
+ if err := exec.Command("go", args...).Run(); err != nil {
log.Fatal(err)
}
obj := name + ".o"
defer os.Remove(obj)
- r, err := os.Open(obj)
+ b, err := ioutil.ReadFile(obj)
if err != nil {
log.Fatal(err)
}
- defer r.Close()
- scanner := bufio.NewScanner(r)
// Look for $$ that introduces imports.
- for scanner.Scan() {
- if strings.Contains(scanner.Text(), "$$") {
- goto Begin
- }
+ i := bytes.Index(b, []byte("\n$$\n"))
+ if i < 0 {
+ log.Fatal("did not find beginning of imports")
}
- log.Fatal("did not find beginning of imports")
-
-Begin:
- initfunc := fmt.Sprintf("init_%s_function", name)
-
- fmt.Fprintf(w, "\nconst %simport = \"\" +\n", name)
-
- // sys.go claims to be in package PACKAGE to avoid
- // conflicts during "go tool compile sys.go". Rename PACKAGE to $2.
- replacer := strings.NewReplacer("PACKAGE", name)
+ i += 4
- // Process imports, stopping at $$ that closes them.
- for scanner.Scan() {
- p := scanner.Text()
- if strings.Contains(p, "$$") {
- goto End
- }
+ // Look for $$ that closes imports.
+ j := bytes.Index(b[i:], []byte("\n$$\n"))
+ if j < 0 {
+ log.Fatal("did not find end of imports")
+ }
+ j += i + 4
+ // Process and reformat imports.
+ fmt.Fprintf(w, "\nconst %simport = \"\"", name)
+ for _, p := range bytes.SplitAfter(b[i:j], []byte("\n")) {
// Chop leading white space.
- p = strings.TrimLeft(p, " \t")
-
- // Cut out decl of init_$1_function - it doesn't exist.
- if strings.Contains(p, initfunc) {
+ p = bytes.TrimLeft(p, " \t")
+ if len(p) == 0 {
continue
}
- fmt.Fprintf(w, "\t%q +\n", replacer.Replace(p)+"\n")
+ fmt.Fprintf(w, " +\n\t%q", p)
}
- log.Fatal("did not find end of imports")
-
-End:
- fmt.Fprintf(w, "\t\"$$\\n\"\n")
+ fmt.Fprintf(w, "\n")
}
"math"
)
-/// implements float arihmetic
+// implements float arithmetic
+
+const (
+ // Maximum size in bits for Mpints before signalling
+ // overflow and also mantissa precision for Mpflts.
+ Mpprec = 512
+ // Turn on for constant arithmetic debugging output.
+ Mpdebug = false
+)
+
+// Mpflt represents a floating-point constant.
+type Mpflt struct {
+ Val big.Float
+}
+
+// Mpcplx represents a complex constant.
+type Mpcplx struct {
+ Real Mpflt
+ Imag Mpflt
+}
func newMpflt() *Mpflt {
var a Mpflt
"fmt"
)
-/// implements fix arithmetic
+// implements integer arithmetic
+
+// Mpint represents an integer constant.
+type Mpint struct {
+ Val big.Int
+ Ovf bool // set if Val overflowed compiler limit (sticky)
+ Rune bool // set if syntax indicates default type rune
+}
func mpsetovf(a *Mpint) {
a.Val.SetUint64(1) // avoid spurious div-zero errors
s := Mpgetfix(b)
if s < 0 || s >= Mpprec {
- Yyerror("stupid shift: %d", s)
+ msg := "shift count too large"
+ if s < 0 {
+ msg = "invalid negative shift count"
+ }
+ Yyerror("%s: %d", msg, s)
Mpmovecfix(a, 0)
return
}
s := Mpgetfix(b)
if s < 0 {
- Yyerror("stupid shift: %d", s)
+ Yyerror("invalid negative shift count: %d", s)
if a.Val.Sign() < 0 {
Mpmovecfix(a, -1)
} else {
off = dsname(symdata, off, s[n:n+m])
}
- off = duint8(symdata, off, 0) // terminating NUL for runtime
- off = (off + Widthptr - 1) &^ (Widthptr - 1) // round to pointer alignment
+ off = duint8(symdata, off, 0) // terminating NUL for runtime
ggloblsym(symdata, int32(off), obj.DUPOK|obj.RODATA|obj.LOCAL)
return symhdr, symdata
OLROT: "LROT",
ORROTC: "RROTC",
ORETJMP: "RETJMP",
+ OPS: "OPS",
+ OPC: "OPC",
+ OSQRT: "OSQRT",
+ OGETG: "OGETG",
OEND: "END",
}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
)
// Rewrite tree to use separate statements to enforce
-// order of evaluation. Makes walk easier, because it
+// order of evaluation. Makes walk easier, because it
// can (after this runs) reorder at will within an expression.
//
// Rewrite x op= y into x = x op y.
// Order holds state during the ordering process.
type Order struct {
- out *NodeList // list of generated statements
- temp *NodeList // head of stack of temporary variables
- free *NodeList // free list of NodeList* structs (for use in temp)
+ out []*Node // list of generated statements
+ temp []*Node // stack of temporary variables
}
// Order rewrites fn->nbody to apply the ordering constraints
func order(fn *Node) {
if Debug['W'] > 1 {
s := fmt.Sprintf("\nbefore order %v", fn.Func.Nname.Sym)
- dumplist(s, fn.Nbody)
+ dumpslice(s, fn.Nbody.Slice())
}
- orderblock(&fn.Nbody)
+ orderblockNodes(&fn.Nbody)
}
// Ordertemp allocates a new temporary with the given type,
if clear {
a := Nod(OAS, var_, nil)
typecheck(&a, Etop)
- order.out = list(order.out, a)
+ order.out = append(order.out, a)
}
- l := order.free
- if l == nil {
- l = new(NodeList)
- }
- order.free = l.Next
- l.Next = order.temp
- l.N = var_
- order.temp = l
+ order.temp = append(order.temp, var_)
return var_
}
var_ := ordertemp(t, order, clear != 0)
a := Nod(OAS, var_, n)
typecheck(&a, Etop)
- order.out = list(order.out, a)
+ order.out = append(order.out, a)
return var_
}
*np = ordercopyexpr(n, n.Type, order, 0)
}
+type ordermarker int
+
// Marktemp returns the top of the temporary variable stack.
-func marktemp(order *Order) *NodeList {
- return order.temp
+func marktemp(order *Order) ordermarker {
+ return ordermarker(len(order.temp))
}
// Poptemp pops temporaries off the stack until reaching the mark,
// which must have been returned by marktemp.
-func poptemp(mark *NodeList, order *Order) {
- var l *NodeList
-
- for {
- l = order.temp
- if l == mark {
- break
- }
- order.temp = l.Next
- l.Next = order.free
- order.free = l
- }
+func poptemp(mark ordermarker, order *Order) {
+ order.temp = order.temp[:mark]
}
// Cleantempnopop emits to *out VARKILL instructions for each temporary
// above the mark on the temporary stack, but it does not pop them
// from the stack.
-func cleantempnopop(mark *NodeList, order *Order, out **NodeList) {
+func cleantempnopop(mark ordermarker, order *Order, out *[]*Node) {
var kill *Node
- for l := order.temp; l != mark; l = l.Next {
- if l.N.Name.Keepalive {
- l.N.Name.Keepalive = false
- kill = Nod(OVARLIVE, l.N, nil)
+ for i := len(order.temp) - 1; i >= int(mark); i-- {
+ n := order.temp[i]
+ if n.Name.Keepalive {
+ n.Name.Keepalive = false
+ n.Addrtaken = true // ensure SSA keeps the n variable
+ kill = Nod(OVARLIVE, n, nil)
typecheck(&kill, Etop)
- *out = list(*out, kill)
+ *out = append(*out, kill)
}
- kill = Nod(OVARKILL, l.N, nil)
+ kill = Nod(OVARKILL, n, nil)
typecheck(&kill, Etop)
- *out = list(*out, kill)
+ *out = append(*out, kill)
}
}
// Cleantemp emits VARKILL instructions for each temporary above the
// mark on the temporary stack and removes them from the stack.
-func cleantemp(top *NodeList, order *Order) {
+func cleantemp(top ordermarker, order *Order) {
cleantempnopop(top, order, &order.out)
poptemp(top, order)
}
}
}
+// Orderstmtslice orders each of the statements in the slice.
+func orderstmtslice(l []*Node, order *Order) {
+ for _, n := range l {
+ orderstmt(n, order)
+ }
+}
+
// Orderblock orders the block of statements *l onto a new list,
// and then replaces *l with that list.
func orderblock(l **NodeList) {
mark := marktemp(&order)
orderstmtlist(*l, &order)
cleantemp(mark, &order)
- *l = order.out
+ var ll *NodeList
+ for _, n := range order.out {
+ ll = list(ll, n)
+ }
+ *l = ll
+}
+
+// OrderblockNodes orders the block of statements in n into a new slice,
+// and then replaces the old slice in n with the new slice.
+func orderblockNodes(n *Nodes) {
+ var order Order
+ mark := marktemp(&order)
+ orderstmtslice(n.Slice(), &order)
+ cleantemp(mark, &order)
+ n.Set(order.out)
}
// Orderexprinplace orders the side effects in *np and
n := *np
var order Order
orderexpr(&n, &order, nil)
- addinit(&n, order.out)
+ addinitslice(&n, order.out)
// insert new temporaries from order
// at head of outer list.
- lp := &order.temp
-
- for *lp != nil {
- lp = &(*lp).Next
- }
- *lp = outer.temp
- outer.temp = order.temp
+ outer.temp = append(outer.temp, order.temp...)
*np = n
}
mark := marktemp(&order)
orderstmt(n, &order)
cleantemp(mark, &order)
- *np = liststmt(order.out)
+ *np = liststmtslice(order.out)
}
// Orderinit moves n's init list to order->out.
// Ordermapassign appends n to order->out, introducing temporaries
// to make sure that all map assignments have the form m[k] = x,
-// where x is adressable.
+// where x is addressable.
// (Orderexpr has already been called on n, so we know k is addressable.)
//
// If n is m[k] = x where x is not addressable, the rewrite is:
Fatalf("ordermapassign %v", Oconv(int(n.Op), 0))
case OAS:
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
// We call writebarrierfat only for values > 4 pointers long. See walk.go.
if (n.Left.Op == OINDEXMAP || (needwritebarrier(n.Left, n.Right) && n.Left.Type.Width > int64(4*Widthptr))) && !isaddrokay(n.Right) {
n.Left = ordertemp(m.Type, order, false)
a := Nod(OAS, m, n.Left)
typecheck(&a, Etop)
- order.out = list(order.out, a)
+ order.out = append(order.out, a)
}
case OAS2, OAS2DOTTYPE, OAS2MAPR, OAS2FUNC:
- var post *NodeList
+ var post []*Node
var m *Node
var a *Node
for l := n.List; l != nil; l = l.Next {
l.N = ordertemp(m.Type, order, false)
a = Nod(OAS, m, l.N)
typecheck(&a, Etop)
- post = list(post, a)
+ post = append(post, a)
} else if instrumenting && n.Op == OAS2FUNC && !isblank(l.N) {
m = l.N
l.N = ordertemp(m.Type, order, false)
a = Nod(OAS, m, l.N)
typecheck(&a, Etop)
- post = list(post, a)
+ post = append(post, a)
}
}
- order.out = list(order.out, n)
- order.out = concat(order.out, post)
+ order.out = append(order.out, n)
+ order.out = append(order.out, post...)
}
}
Fatalf("orderstmt %v", Oconv(int(n.Op), 0))
case OVARKILL, OVARLIVE:
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
case OAS:
t := marktemp(order)
case OAS2, OAS2DOTTYPE:
ordermapassign(n, order)
default:
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
}
cleantemp(t, order)
orderexprlist(n.List, order)
orderexpr(&n.Rlist.N.Left, order, nil) // i in i.(T)
if isblank(n.List.N) {
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
} else {
typ := n.Rlist.N.Type
tmp1 := ordertemp(typ, order, haspointers(typ))
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
r := Nod(OAS, n.List.N, tmp1)
typecheck(&r, Etop)
ordermapassign(r, order)
} else {
tmp2 = ordertemp(Types[TBOOL], order, false)
}
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
r := Nod(OAS, n.List.N, tmp1)
typecheck(&r, Etop)
ordermapassign(r, order)
OGOTO,
OLABEL,
ORETJMP:
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
// Special: handle call arguments.
case OCALLFUNC, OCALLINTER, OCALLMETH:
t := marktemp(order)
ordercall(n, order)
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
cleantemp(t, order)
// Special: order arguments to inner call but not call itself.
ordercall(n.Left, order)
}
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
cleantemp(t, order)
case ODELETE:
orderexpr(&n.List.N, order, nil)
orderexpr(&n.List.Next.N, order, nil)
orderaddrtemp(&n.List.Next.N, order) // map key
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
cleantemp(t, order)
// Clean temporaries from condition evaluation at
t := marktemp(order)
orderexprinplace(&n.Left, order)
- var l *NodeList
+ var l []*Node
cleantempnopop(t, order, &l)
- n.Nbody = concat(l, n.Nbody)
- orderblock(&n.Nbody)
+ n.Nbody.Set(append(l, n.Nbody.Slice()...))
+ orderblockNodes(&n.Nbody)
orderstmtinplace(&n.Right)
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
cleantemp(t, order)
// Clean temporaries from condition at
t := marktemp(order)
orderexprinplace(&n.Left, order)
- var l *NodeList
+ var l []*Node
cleantempnopop(t, order, &l)
- n.Nbody = concat(l, n.Nbody)
+ n.Nbody.Set(append(l, n.Nbody.Slice()...))
l = nil
cleantempnopop(t, order, &l)
- n.Rlist = concat(l, n.Rlist)
+ var ll *NodeList
+ for _, n := range l {
+ ll = list(ll, n)
+ }
+ n.Rlist = concat(ll, n.Rlist)
poptemp(t, order)
- orderblock(&n.Nbody)
+ orderblockNodes(&n.Nbody)
orderblock(&n.Rlist)
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
// Special: argument will be converted to interface using convT2E
// so make sure it is an addressable temporary.
if !Isinter(n.Left.Type) {
orderaddrtemp(&n.Left, order)
}
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
cleantemp(t, order)
// n->right is the expression being ranged over.
for l := n.List; l != nil; l = l.Next {
orderexprinplace(&l.N, order)
}
- orderblock(&n.Nbody)
- order.out = list(order.out, n)
+ orderblockNodes(&n.Nbody)
+ order.out = append(order.out, n)
cleantemp(t, order)
case ORETURN:
ordercallargs(&n.List, order)
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
// Special: clean case temporaries in each block entry.
// Select must enter one of its blocks, so there is no
}
}
- orderblock(&l.N.Nbody)
+ orderblockNodes(&l.N.Nbody)
}
// Now that we have accumulated all the temporaries, clean them.
// Also insert any ninit queued during the previous loop.
// (The temporary cleaning must follow that ninit work.)
for l := n.List; l != nil; l = l.Next {
- cleantempnopop(t, order, &l.N.Ninit)
- l.N.Nbody = concat(l.N.Ninit, l.N.Nbody)
+ s := make([]*Node, 0, count(l.N.Ninit))
+ for ll := l.N.Ninit; ll != nil; ll = ll.Next {
+ s = append(s, ll.N)
+ }
+ cleantempnopop(t, order, &s)
+ l.N.Nbody.Set(append(s, l.N.Nbody.Slice()...))
l.N.Ninit = nil
}
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
poptemp(t, order)
// Special: value being sent is passed as a pointer; make it addressable.
orderexpr(&n.Left, order, nil)
orderexpr(&n.Right, order, nil)
orderaddrtemp(&n.Right, order)
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
cleantemp(t, order)
// TODO(rsc): Clean temporaries more aggressively.
Fatalf("order switch case %v", Oconv(int(l.N.Op), 0))
}
orderexprlistinplace(l.N.List, order)
- orderblock(&l.N.Nbody)
+ orderblockNodes(&l.N.Nbody)
}
- order.out = list(order.out, n)
+ order.out = append(order.out, n)
cleantemp(t, order)
}
// Clean temporaries from first branch at beginning of second.
// Leave them on the stack so that they can be killed in the outer
// context in case the short circuit is taken.
- var l *NodeList
+ var s []*Node
- cleantempnopop(mark, order, &l)
+ cleantempnopop(mark, order, &s)
+ var l *NodeList
+ for _, n := range s {
+ l = list(l, n)
+ }
n.Right.Ninit = concat(l, n.Right.Ninit)
orderexprinplace(&n.Right, order)
}
case OCLOSURE:
- if n.Noescape && n.Func.Cvars != nil {
+ if n.Noescape && len(n.Func.Cvars.Slice()) > 0 {
prealloc[n] = ordertemp(Types[TUINT8], order, false) // walk will fill in correct type
}
package gc
// The recursive-descent parser is built around a slightly modified grammar
-// of Go to accomodate for the constraints imposed by strict one token look-
+// of Go to accommodate for the constraints imposed by strict one token look-
// ahead, and for better error handling. Subsequent checks of the constructed
// syntax tree restrict the language accepted by the compiler to proper Go.
//
// to handle optional commas and semicolons before a closing ) or } .
import (
+ "cmd/internal/obj"
"fmt"
"strconv"
"strings"
const trace = false // if set, parse tracing can be enabled with -x
-// TODO(gri) Once we handle imports w/o redirecting the underlying
-// source of the lexer we can get rid of these. They are here for
-// compatibility with the existing yacc-based parser setup (issue 13242).
-var thenewparser parser // the parser in use
-var savedstate []parser // saved parser state, used during import
-
-func push_parser() {
- // Indentation (for tracing) must be preserved across parsers
- // since we are changing the lexer source (and parser state)
- // under foot, in the middle of productions. This won't be
- // needed anymore once we fix issue 13242, but neither will
- // be the push/pop_parser functionality.
- // (Instead we could just use a global variable indent, but
- // but eventually indent should be parser-specific anyway.)
- indent := thenewparser.indent
- savedstate = append(savedstate, thenewparser)
- thenewparser = parser{indent: indent} // preserve indentation
- thenewparser.next()
-}
-
-func pop_parser() {
- indent := thenewparser.indent
- n := len(savedstate) - 1
- thenewparser = savedstate[n]
- thenewparser.indent = indent // preserve indentation
- savedstate = savedstate[:n]
+// parse_import parses the export data of a package that is imported.
+func parse_import(bin *obj.Biobuf, indent []byte) {
+ newparser(bin, indent).import_package()
}
-// parse_file sets up a new parser and parses a single Go source file.
-func parse_file() {
- thenewparser = parser{}
- thenewparser.loadsys()
- thenewparser.next()
- thenewparser.file()
-}
-
-// loadsys loads the definitions for the low-level runtime functions,
-// so that the compiler can generate calls to them,
-// but does not make the name "runtime" visible as a package.
-func (p *parser) loadsys() {
- if trace && Debug['x'] != 0 {
- defer p.trace("loadsys")()
- }
-
- importpkg = Runtimepkg
-
- if Debug['A'] != 0 {
- cannedimports("runtime.Builtin", "package runtime\n\n$$\n\n")
- } else {
- cannedimports("runtime.Builtin", runtimeimport)
- }
- curio.importsafe = true
-
- p.import_package()
- p.import_there()
-
- importpkg = nil
+// parse_file parses a single Go source file.
+func parse_file(bin *obj.Biobuf) {
+ newparser(bin, nil).file()
}
type parser struct {
- tok int32 // next token (one-token look-ahead)
- op Op // valid if tok == LASOP
- val Val // valid if tok == LLITERAL
- sym_ *Sym // valid if tok == LNAME
- fnest int // function nesting level (for error handling)
- xnest int // expression nesting level (for complit ambiguity resolution)
- yy yySymType // for temporary use by next
- indent []byte // tracing support
-}
-
-func (p *parser) next() {
- p.tok = yylex(&p.yy)
- p.op = p.yy.op
- p.val = p.yy.val
- p.sym_ = p.yy.sym
+ lexer
+ fnest int // function nesting level (for error handling)
+ xnest int // expression nesting level (for complit ambiguity resolution)
+ indent []byte // tracing support
+}
+
+// newparser returns a new parser ready to parse from src.
+// indent is the initial indentation for tracing output.
+func newparser(src *obj.Biobuf, indent []byte) *parser {
+ var p parser
+ p.bin = src
+ p.indent = indent
+ p.next()
+ return &p
}
func (p *parser) got(tok int32) bool {
// determine token string
var tok string
switch p.tok {
- case LLITERAL:
- tok = litbuf
case LNAME:
if p.sym_ != nil && p.sym_.Name != "" {
tok = p.sym_.Name
} else {
tok = "name"
}
+ case LLITERAL:
+ tok = litbuf
+ case LOPER:
+ tok = goopnames[p.op]
case LASOP:
tok = goopnames[p.op] + "="
+ case LINCOP:
+ tok = goopnames[p.op] + goopnames[p.op]
default:
tok = tokstring(p.tok)
}
}
// Like syntax_error, but reports error at given line rather than current lexer line.
-func (p *parser) syntax_error_at(lineno int32, msg string) {
- defer func(lineno int32) {
- lexlineno = lineno
- }(lexlineno)
- lexlineno = lineno
+func (p *parser) syntax_error_at(lno int32, msg string) {
+ defer func(lno int32) {
+ lineno = lno
+ }(lineno)
+ lineno = lno
p.syntax_error(msg)
}
}
var tokstrings = map[int32]string{
- LLITERAL: "LLITERAL",
- LASOP: "op=",
- LCOLAS: ":=",
+ LNAME: "NAME",
+ LLITERAL: "LITERAL",
+
+ LOPER: "op",
+ LASOP: "op=",
+ LINCOP: "opop",
+
+ LCOLAS: ":=",
+ LCOMM: "<-",
+ LDDD: "...",
+
LBREAK: "break",
LCASE: "case",
LCHAN: "chan",
LCONST: "const",
LCONTINUE: "continue",
- LDDD: "...",
LDEFAULT: "default",
LDEFER: "defer",
LELSE: "else",
LIMPORT: "import",
LINTERFACE: "interface",
LMAP: "map",
- LNAME: "LNAME",
LPACKAGE: "package",
LRANGE: "range",
LRETURN: "return",
LSWITCH: "switch",
LTYPE: "type",
LVAR: "var",
- LANDAND: "&&",
- LANDNOT: "&^",
- LCOMM: "<-",
- LDEC: "--",
- LEQ: "==",
- LGE: ">=",
- LGT: ">",
- LIGNORE: "LIGNORE", // we should never see this one
- LINC: "++",
- LLE: "<=",
- LLSH: "<<",
- LLT: "<",
- LNE: "!=",
- LOROR: "||",
- LRSH: ">>",
}
// usage: defer p.trace(msg)()
defer p.trace("package_")()
}
- if p.got(LPACKAGE) {
- mkpackage(p.sym().Name)
- } else {
- prevlineno = lineno // see issue #13267
+ if !p.got(LPACKAGE) {
p.syntax_error("package statement must be first")
errorexit()
}
+ mkpackage(p.sym().Name)
}
// ImportDecl = "import" ( ImportSpec | "(" { ImportSpec ";" } ")" ) .
p.want(LIMPORT)
if p.got('(') {
for p.tok != EOF && p.tok != ')' {
- p.import_stmt()
+ p.importdcl()
if !p.osemi(')') {
break
}
}
p.want(')')
} else {
- p.import_stmt()
- }
-}
-
-func (p *parser) import_stmt() {
- if trace && Debug['x'] != 0 {
- defer p.trace("import_stmt")()
- }
-
- line := int32(p.import_here())
- if p.tok == LPACKAGE {
- p.import_package()
- p.import_there()
-
- ipkg := importpkg
- my := importmyname
- importpkg = nil
- importmyname = nil
-
- if my == nil {
- my = Lookup(ipkg.Name)
- }
-
- pack := Nod(OPACK, nil, nil)
- pack.Sym = my
- pack.Name.Pkg = ipkg
- pack.Lineno = line
-
- if strings.HasPrefix(my.Name, ".") {
- importdot(ipkg, pack)
- return
- }
- if my.Name == "init" {
- lineno = line
- Yyerror("cannot import package as init - init must be a func")
- return
- }
- if my.Name == "_" {
- return
- }
- if my.Def != nil {
- lineno = line
- redeclare(my, "as imported package name")
- }
- my.Def = pack
- my.Lastlineno = line
- my.Block = 1 // at top level
-
- return
- }
-
- p.import_there()
- // When an invalid import path is passed to importfile,
- // it calls Yyerror and then sets up a fake import with
- // no package statement. This allows us to test more
- // than one invalid import statement in a single file.
- if nerrors == 0 {
- Fatalf("phase error in import")
+ p.importdcl()
}
}
// ImportSpec = [ "." | PackageName ] ImportPath .
// ImportPath = string_lit .
-//
-// import_here switches the underlying lexed source to the export data
-// of the imported package.
-func (p *parser) import_here() int {
+func (p *parser) importdcl() {
if trace && Debug['x'] != 0 {
- defer p.trace("import_here")()
+ defer p.trace("importdcl")()
}
- importmyname = nil
+ var my *Sym
switch p.tok {
case LNAME, '@', '?':
// import with given name
- importmyname = p.sym()
+ my = p.sym()
case '.':
// import into my name space
- importmyname = Lookup(".")
+ my = Lookup(".")
p.next()
}
- var path Val
- if p.tok == LLITERAL {
- path = p.val
- p.next()
- } else {
+ if p.tok != LLITERAL {
p.syntax_error("missing import path; require quoted string")
p.advance(';', ')')
+ return
+ }
+
+ line := int32(parserline())
+
+ // We need to clear importpkg before calling p.next(),
+ // otherwise it will affect lexlineno.
+ // TODO(mdempsky): Fix this clumsy API.
+ importfile(&p.val, p.indent)
+ ipkg := importpkg
+ importpkg = nil
+
+ p.next()
+ if ipkg == nil {
+ if nerrors == 0 {
+ Fatalf("phase error in import")
+ }
+ return
+ }
+
+ ipkg.Direct = true
+
+ if my == nil {
+ my = Lookup(ipkg.Name)
}
- line := parserline()
- importfile(&path, line)
- return line
+ pack := Nod(OPACK, nil, nil)
+ pack.Sym = my
+ pack.Name.Pkg = ipkg
+ pack.Lineno = line
+
+ if strings.HasPrefix(my.Name, ".") {
+ importdot(ipkg, pack)
+ return
+ }
+ if my.Name == "init" {
+ lineno = line
+ Yyerror("cannot import package as init - init must be a func")
+ return
+ }
+ if my.Name == "_" {
+ return
+ }
+ if my.Def != nil {
+ lineno = line
+ redeclare(my, "as imported package name")
+ }
+ my.Def = pack
+ my.Lastlineno = line
+ my.Block = 1 // at top level
}
// import_package parses the header of an imported package as exported
p.import_error()
}
+ importsafe := false
if p.tok == LNAME {
if p.sym_.Name == "safe" {
- curio.importsafe = true
+ importsafe = true
}
p.next()
}
} else if importpkg.Name != name {
Yyerror("conflicting names %s and %s for package %q", importpkg.Name, name, importpkg.Path)
}
- if incannedimport == 0 {
- importpkg.Direct = true
- }
- importpkg.Safe = curio.importsafe
-
- if safemode != 0 && !curio.importsafe {
- Yyerror("cannot import unsafe package %q", importpkg.Path)
- }
-}
-
-// import_there parses the imported package definitions and then switches
-// the underlying lexed source back to the importing package.
-func (p *parser) import_there() {
- if trace && Debug['x'] != 0 {
- defer p.trace("import_there")()
- }
+ importpkg.Safe = importsafe
+ typecheckok = true
defercheckwidth()
p.hidden_import_list()
}
resumecheckwidth()
- unimportfile()
+ typecheckok = false
}
// Declaration = ConstDecl | TypeDecl | VarDecl .
stmt.Etype = EType(op) // rathole to pass opcode
return stmt
- case LINC:
- // expr LINC
+ case LINCOP:
+ // expr LINCOP
p.next()
stmt := Nod(OASOP, lhs, Nodintconst(1))
stmt.Implicit = true
- stmt.Etype = EType(OADD)
- return stmt
-
- case LDEC:
- // expr LDEC
- p.next()
-
- stmt := Nod(OASOP, lhs, Nodintconst(1))
- stmt.Implicit = true
- stmt.Etype = EType(OSUB)
+ stmt.Etype = EType(p.op)
return stmt
case ':':
ls = p.stmt()
if ls == missing_stmt {
// report error at line of ':' token
- p.syntax_error_at(prevlineno, "missing statement after label")
+ p.syntax_error_at(label.Lineno, "missing statement after label")
// we are already at the end of the labeled statement - no need to advance
return missing_stmt
}
stmt := p.case_(tswitch) // does markdcl
stmt.Xoffset = int64(block)
- stmt.Nbody = p.stmt_list()
+ stmt.Nbody.SetToNodeList(p.stmt_list())
popdcl()
stmt := p.for_header()
body := p.loop_body("for clause")
- stmt.Nbody = concat(stmt.Nbody, body)
+ stmt.Nbody.AppendNodeList(body)
return stmt
}
Yyerror("missing condition in if statement")
}
- stmt.Nbody = p.loop_body("if clause")
-
- l := p.elseif_list_else() // does markdcl
-
- n := stmt
- popdcl()
- for nn := l; nn != nil; nn = nn.Next {
- if nn.N.Op == OIF {
- popdcl()
- }
- n.Rlist = list1(nn.N)
- n = nn.N
- }
-
- return stmt
-}
-
-func (p *parser) elseif() *NodeList {
- if trace && Debug['x'] != 0 {
- defer p.trace("elseif")()
- }
-
- // LELSE LIF already consumed
- markdcl() // matching popdcl in if_stmt
-
- stmt := p.if_header()
- if stmt.Left == nil {
- Yyerror("missing condition in if statement")
- }
-
- stmt.Nbody = p.loop_body("if clause")
-
- return list1(stmt)
-}
+ stmt.Nbody.SetToNodeList(p.loop_body("if clause"))
-func (p *parser) elseif_list_else() (l *NodeList) {
- if trace && Debug['x'] != 0 {
- defer p.trace("elseif_list_else")()
- }
-
- for p.got(LELSE) {
- if p.got(LIF) {
- l = concat(l, p.elseif())
+ if p.got(LELSE) {
+ if p.tok == LIF {
+ stmt.Rlist = list1(p.if_stmt())
} else {
- l = concat(l, p.else_())
- break
+ stmt.Rlist = list1(p.compound_stmt(true))
}
}
- return l
-}
-
-func (p *parser) else_() *NodeList {
- if trace && Debug['x'] != 0 {
- defer p.trace("else")()
- }
-
- l := &NodeList{N: p.compound_stmt(true)}
- l.End = l
- return l
-
+ popdcl()
+ return stmt
}
// switch_stmt parses both expression and type switch statements.
return hdr
}
-// TODO(gri) should have lexer return this info - no need for separate lookup
-// (issue 13244)
-var prectab = map[int32]struct {
- prec int // > 0 (0 indicates not found)
- op Op
-}{
- // not an expression anymore, but left in so we can give a good error
- // message when used in expression context
- LCOMM: {1, OSEND},
-
- LOROR: {2, OOROR},
-
- LANDAND: {3, OANDAND},
-
- LEQ: {4, OEQ},
- LNE: {4, ONE},
- LLE: {4, OLE},
- LGE: {4, OGE},
- LLT: {4, OLT},
- LGT: {4, OGT},
-
- '+': {5, OADD},
- '-': {5, OSUB},
- '|': {5, OOR},
- '^': {5, OXOR},
-
- '*': {6, OMUL},
- '/': {6, ODIV},
- '%': {6, OMOD},
- '&': {6, OAND},
- LLSH: {6, OLSH},
- LRSH: {6, ORSH},
- LANDNOT: {6, OANDNOT},
-}
-
// Expression = UnaryExpr | Expression binary_op Expression .
-func (p *parser) bexpr(prec int) *Node {
+func (p *parser) bexpr(prec OpPrec) *Node {
// don't trace bexpr - only leads to overly nested trace output
+ // prec is the precedence of the prior/enclosing binary operator (if any),
+ // so we only want to parse tokens of greater precedence.
+
x := p.uexpr()
- t := prectab[p.tok]
- for tprec := t.prec; tprec >= prec; tprec-- {
- for tprec == prec {
- p.next()
- y := p.bexpr(t.prec + 1)
- x = Nod(t.op, x, y)
- t = prectab[p.tok]
- tprec = t.prec
- }
+ for p.prec > prec {
+ op, prec1 := p.op, p.prec
+ p.next()
+ x = Nod(op, x, p.bexpr(prec1))
}
return x
}
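The loop in bexpr is a precedence-climbing parser: at each level it only consumes operators of strictly greater precedence than the enclosing one, which yields left associativity for equal precedences. A self-contained sketch of the same invariant on a toy tokenizer (the `toyParser` type and its whitespace-split input are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

type toyParser struct {
	toks []string
	pos  int
}

// prec returns the binding strength of an operator token, 0 for non-operators.
func prec(tok string) int {
	switch tok {
	case "+":
		return 1
	case "*":
		return 2
	}
	return 0
}

func (p *toyParser) next() string {
	if p.pos >= len(p.toks) {
		return ""
	}
	t := p.toks[p.pos]
	p.pos++
	return t
}

func (p *toyParser) peek() string {
	if p.pos >= len(p.toks) {
		return ""
	}
	return p.toks[p.pos]
}

// bexpr parses an expression whose operators all bind tighter than min,
// returning a fully parenthesized string to make grouping visible.
func (p *toyParser) bexpr(min int) string {
	x := p.next() // operand
	for prec(p.peek()) > min {
		op := p.next()
		y := p.bexpr(prec(op)) // recurse at the operator's own precedence
		x = "(" + x + op + y + ")"
	}
	return x
}

func parse(s string) string {
	p := &toyParser{toks: strings.Split(s, " ")}
	return p.bexpr(0)
}

func main() {
	fmt.Println(parse("1 + 2 * 3 + 4")) // ((1+(2*3))+4)
}
```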
defer p.trace("expr")()
}
- return p.bexpr(1)
+ return p.bexpr(0)
}
func unparen(x *Node) *Node {
return nil
}
-func (p *parser) dcl_name(sym *Sym) *Node {
+func (p *parser) dcl_name() *Node {
if trace && Debug['x'] != 0 {
defer p.trace("dcl_name")()
}
+ symlineno := lineno
+ sym := p.sym()
if sym == nil {
- yyerrorl(int(prevlineno), "invalid declaration")
+ yyerrorl(int(symlineno), "invalid declaration")
return nil
}
return dclname(sym)
}
p.want(LFUNC)
- f := p.fndcl()
+ f := p.fndcl(p.pragma&Nointerface != 0)
body := p.fnbody()
if f == nil {
return nil
}
- if noescape && body != nil {
+
+ f.Nbody.SetToNodeList(body)
+ f.Noescape = p.pragma&Noescape != 0
+ if f.Noescape && body != nil {
Yyerror("can only use //go:noescape with external func implementations")
}
-
- f.Nbody = body
+ f.Func.Pragma = p.pragma
f.Func.Endlineno = lineno
- f.Noescape = noescape
- f.Func.Norace = norace
- f.Func.Nosplit = nosplit
- f.Func.Noinline = noinline
- f.Func.Nowritebarrier = nowritebarrier
- f.Func.Nowritebarrierrec = nowritebarrierrec
- f.Func.Systemstack = systemstack
+
funcbody(f)
return f
// Function = Signature FunctionBody .
// MethodDecl = "func" Receiver MethodName ( Function | Signature ) .
// Receiver = Parameters .
-func (p *parser) fndcl() *Node {
+func (p *parser) fndcl(nointerface bool) *Node {
if trace && Debug['x'] != 0 {
defer p.trace("fndcl")()
}
ss.Type = functype(s2.N, s6, s8)
checkwidth(ss.Type)
- addmethod(s4, ss.Type, false, nointerface)
- nointerface = false
+ addmethod(s4, ss.Type, false, false)
funchdr(ss)
// inl.C's inlnode in on a dotmeth node expects to find the inlineable body as
// (dotmeth's type).Nname.Inl, and dotmeth's type has been pulled
- // out by typecheck's lookdot as this $$.ttype. So by providing
+ // out by typecheck's lookdot as this $$.ttype. So by providing
// this back link here we avoid special casing there.
ss.Type.Nname = ss
return ss
l = list(l, p.xfndcl())
default:
- if p.tok == '{' && l != nil && l.End.N.Op == ODCLFUNC && l.End.N.Nbody == nil {
+ if p.tok == '{' && l != nil && l.End.N.Op == ODCLFUNC && len(l.End.N.Nbody.Slice()) == 0 {
// opening { of function declaration on next line
p.syntax_error("unexpected semicolon or newline before {")
} else {
testdclstack()
}
- noescape = false
- noinline = false
- nointerface = false
- norace = false
- nosplit = false
- nowritebarrier = false
- nowritebarrierrec = false
- systemstack = false
+ // Reset p.pragma BEFORE advancing to the next token (consuming ';')
+ // since comments before may set pragmas for the next function decl.
+ p.pragma = 0
- // Consume ';' AFTER resetting the above flags since
- // it may read the subsequent comment line which may
- // set the flags for the next function declaration.
if p.tok != EOF && !p.got(';') {
p.syntax_error("after top level declaration")
p.advance(LVAR, LCONST, LTYPE, LFUNC)
stmt := Nod(ORETURN, nil, nil)
stmt.List = results
if stmt.List == nil && Curfn != nil {
- for l := Curfn.Func.Dcl; l != nil; l = l.Next {
- if l.N.Class == PPARAM {
+ for _, ln := range Curfn.Func.Dcl {
+ if ln.Class == PPARAM {
continue
}
- if l.N.Class != PPARAMOUT {
+ if ln.Class != PPARAMOUT {
break
}
- if l.N.Sym.Def != l.N {
- Yyerror("%s is shadowed during return", l.N.Sym.Name)
+ if ln.Sym.Def != ln {
+ Yyerror("%s is shadowed during return", ln.Sym.Name)
}
}
}
defer p.trace("dcl_name_list")()
}
- l := list1(p.dcl_name(p.sym()))
+ l := list1(p.dcl_name())
for p.got(',') {
- l = list(l, p.dcl_name(p.sym()))
+ l = list(l, p.dcl_name())
}
return l
}
return
}
- s2.Func.Inl = s3
+ s2.Func.Inl.SetToNodeList(s3)
funcbody(s2)
importlist = append(importlist, s2)
if Debug['E'] > 0 {
fmt.Printf("import [%q] func %v \n", importpkg.Path, s2)
- if Debug['m'] > 2 && s2.Func.Inl != nil {
+ if Debug['m'] > 2 && len(s2.Func.Inl.Slice()) != 0 {
fmt.Printf("inl body:%v\n", s2.Func.Inl)
}
}
package gc
import (
+ "cmd/compile/internal/ssa"
"cmd/internal/obj"
"crypto/md5"
"fmt"
+ "sort"
"strings"
)
// the top of the stack and increasing in size.
// Non-autos sort on offset.
func cmpstackvarlt(a, b *Node) bool {
- if a.Class != b.Class {
- if a.Class == PAUTO {
- return false
- }
- return true
+ if (a.Class == PAUTO) != (b.Class == PAUTO) {
+ return b.Class == PAUTO
}
if a.Class != PAUTO {
- if a.Xoffset < b.Xoffset {
- return true
- }
- if a.Xoffset > b.Xoffset {
- return false
- }
- return false
+ return a.Xoffset < b.Xoffset
}
if a.Used != b.Used {
return ap
}
- if a.Type.Width < b.Type.Width {
- return false
- }
- if a.Type.Width > b.Type.Width {
- return true
+ if a.Type.Width != b.Type.Width {
+ return a.Type.Width > b.Type.Width
}
return a.Sym.Name < b.Sym.Name
}
+// byStackVar implements sort.Interface for []*Node using cmpstackvarlt.
+type byStackVar []*Node
+
+func (s byStackVar) Len() int { return len(s) }
+func (s byStackVar) Less(i, j int) bool { return cmpstackvarlt(s[i], s[j]) }
+func (s byStackVar) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
+
// stkdelta records the stack offset delta for a node
// during the compaction of the stack frame to remove
// unused stack slots.
Stksize = 0
stkptrsize = 0
- if Curfn.Func.Dcl == nil {
+ if len(Curfn.Func.Dcl) == 0 {
return
}
// Mark the PAUTO's unused.
- for ll := Curfn.Func.Dcl; ll != nil; ll = ll.Next {
- if ll.N.Class == PAUTO {
- ll.N.Used = false
+ for _, ln := range Curfn.Func.Dcl {
+ if ln.Class == PAUTO {
+ ln.Used = false
}
}
markautoused(ptxt)
- listsort(&Curfn.Func.Dcl, cmpstackvarlt)
+ sort.Sort(byStackVar(Curfn.Func.Dcl))
// Unused autos are at the end, chop 'em off.
- ll := Curfn.Func.Dcl
-
- n := ll.N
+ n := Curfn.Func.Dcl[0]
if n.Class == PAUTO && n.Op == ONAME && !n.Used {
// No locals used at all
Curfn.Func.Dcl = nil
return
}
- for ll := Curfn.Func.Dcl; ll.Next != nil; ll = ll.Next {
- n = ll.Next.N
+ for i := 1; i < len(Curfn.Func.Dcl); i++ {
+ n = Curfn.Func.Dcl[i]
if n.Class == PAUTO && n.Op == ONAME && !n.Used {
- ll.Next = nil
- Curfn.Func.Dcl.End = ll
+ Curfn.Func.Dcl = Curfn.Func.Dcl[:i]
break
}
}
// Reassign stack offsets of the locals that are still there.
var w int64
- for ll := Curfn.Func.Dcl; ll != nil; ll = ll.Next {
- n = ll.N
+ for _, n := range Curfn.Func.Dcl {
if n.Class != PAUTO || n.Op != ONAME {
continue
}
fixautoused(ptxt)
// The debug information needs accurate offsets on the symbols.
- for ll := Curfn.Func.Dcl; ll != nil; ll = ll.Next {
- if ll.N.Class != PAUTO || ll.N.Op != ONAME {
+ for _, ln := range Curfn.Func.Dcl {
+ if ln.Class != PAUTO || ln.Op != ONAME {
continue
}
- ll.N.Xoffset += stkdelta[ll.N]
- delete(stkdelta, ll.N)
+ ln.Xoffset += stkdelta[ln]
+ delete(stkdelta, ln)
}
}
Deferreturn = Sysfunc("deferreturn")
Panicindex = Sysfunc("panicindex")
panicslice = Sysfunc("panicslice")
+ panicdivide = Sysfunc("panicdivide")
throwreturn = Sysfunc("throwreturn")
+ growslice = Sysfunc("growslice")
+ writebarrierptr = Sysfunc("writebarrierptr")
+ typedmemmove = Sysfunc("typedmemmove")
+ panicdottype = Sysfunc("panicdottype")
}
lno := setlineno(fn)
var nam *Node
var gcargs *Sym
var gclocals *Sym
- if fn.Nbody == nil {
+ var ssafn *ssa.Func
+ if len(fn.Nbody.Slice()) == 0 {
if pure_go != 0 || strings.HasPrefix(fn.Func.Nname.Sym.Name, "init.") {
Yyerror("missing function body for %q", fn.Func.Nname.Sym.Name)
goto ret
if t.Nname != nil {
n = Nod(OAS, t.Nname, nil)
typecheck(&n, Etop)
- Curfn.Nbody = concat(list1(n), Curfn.Nbody)
+ Curfn.Nbody.Set(append([]*Node{n}, Curfn.Nbody.Slice()...))
}
t = structnext(&save)
goto ret
}
+ // Build an SSA backend function.
+ if shouldssa(Curfn) {
+ ssafn = buildssa(Curfn)
+ }
+
continpc = nil
breakpc = nil
if fn.Func.Needctxt {
ptxt.From3.Offset |= obj.NEEDCTXT
}
- if fn.Func.Nosplit {
+ if fn.Func.Pragma&Nosplit != 0 {
ptxt.From3.Offset |= obj.NOSPLIT
}
- if fn.Func.Systemstack {
+ if fn.Func.Pragma&Systemstack != 0 {
ptxt.From.Sym.Cfunc = 1
}
gtrack(tracksym(t))
}
- for l := fn.Func.Dcl; l != nil; l = l.Next {
- n = l.N
+ for _, n := range fn.Func.Dcl {
if n.Op != ONAME { // might be OTYPE or OLITERAL
continue
}
switch n.Class {
case PAUTO, PPARAM, PPARAMOUT:
- Nodconst(&nod1, Types[TUINTPTR], l.N.Type.Width)
- p = Thearch.Gins(obj.ATYPE, l.N, &nod1)
- p.From.Gotype = Linksym(ngotype(l.N))
+ Nodconst(&nod1, Types[TUINTPTR], n.Type.Width)
+ p = Thearch.Gins(obj.ATYPE, n, &nod1)
+ p.From.Gotype = Linksym(ngotype(n))
}
}
- Genlist(Curfn.Func.Enter)
- Genlist(Curfn.Nbody)
+ if ssafn != nil {
+ genssa(ssafn, ptxt, gcargs, gclocals)
+ if Curfn.Func.Endlineno != 0 {
+ lineno = Curfn.Func.Endlineno
+ }
+ ssafn.Free()
+ return
+ }
+ Genslice(Curfn.Func.Enter.Slice())
+ Genslice(Curfn.Nbody.Slice())
gclean()
checklabels()
if nerrors != 0 {
Node{Class: PFUNC, Xoffset: 10},
false,
},
+ {
+ Node{Class: PPARAM, Xoffset: 10},
+ Node{Class: PPARAMOUT, Xoffset: 20},
+ true,
+ },
+ {
+ Node{Class: PPARAMOUT, Xoffset: 10},
+ Node{Class: PPARAM, Xoffset: 20},
+ true,
+ },
{
Node{Class: PAUTO, Used: true},
Node{Class: PAUTO, Used: false},
if got != d.lt {
t.Errorf("want %#v < %#v", d.a, d.b)
}
+ // If we expect a < b to be true, check that b < a is false.
+ if d.lt && cmpstackvarlt(&d.b, &d.a) {
+ t.Errorf("unexpected %#v < %#v", d.b, d.a)
+ }
}
}
"cmd/internal/obj"
"fmt"
"sort"
+ "strings"
)
const (
// An ordinary basic block.
//
-// Instructions are threaded together in a doubly-linked list. To iterate in
+// Instructions are threaded together in a doubly-linked list. To iterate in
// program order follow the link pointer from the first node and stop after the
// last node has been visited.
//
}
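The threaded-instruction layout described in the comment above can be sketched with a simplified stand-in for `obj.Prog` (the `Link` and `Opt` field names mirror the real ones; everything else here is invented for illustration):

```go
package main

import "fmt"

// prog is a simplified stand-in for obj.Prog: Link points to the next
// instruction in program order, Opt is scratch space that a later pass
// fills in with back links.
type prog struct {
	name string
	Link *prog
	Opt  *prog // predecessor, for backward traversal
}

// forward walks the list in program order by following Link.
func forward(first *prog) []string {
	var names []string
	for p := first; p != nil; p = p.Link {
		names = append(names, p.name)
	}
	return names
}

func main() {
	a := &prog{name: "a"}
	b := &prog{name: "b"}
	c := &prog{name: "c"}
	a.Link, b.Link = b, c
	// Back links let a basic block also be traversed backward.
	b.Opt, c.Opt = a, b
	fmt.Println(forward(a)) // [a b c]
}
```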
// Inserts prev before curr in the instruction
-// stream. Any control flow, such as branches or fall throughs, that target the
+// stream. Any control flow, such as branches or fall-throughs, that target the
// existing instruction are adjusted to target the new instruction.
func splicebefore(lv *Liveness, bb *BasicBlock, prev *obj.Prog, curr *obj.Prog) {
// There may be other instructions pointing at curr,
}
}
-// Iterates over a basic block applying a callback to each instruction. There
-// are two criteria for termination. If the end of basic block is reached a
-// value of zero is returned. If the callback returns a non-zero value, the
-// iteration is stopped and the value of the callback is returned.
+// Iterates over a basic block applying a callback to each instruction. There
+// are two criteria for termination. If the end of the basic block is reached,
+// false is returned. If the callback returns true, the iteration is stopped
+// and true is returned.
func blockany(bb *BasicBlock, f func(*obj.Prog) bool) bool {
for p := bb.last; p != nil; p = p.Opt.(*obj.Prog) {
// variables.
func getvariables(fn *Node) []*Node {
result := make([]*Node, 0, 0)
- for ll := fn.Func.Dcl; ll != nil; ll = ll.Next {
- if ll.N.Op == ONAME {
+ for _, ln := range fn.Func.Dcl {
+ if ln.Op == ONAME {
// In order for GODEBUG=gcdead=1 to work, each bitmap needs
// to contain information about all variables covered by the bitmap.
// For local variables, the bitmap only covers the stkptrsize
// Later, when we want to find the index of a node in the variables list,
// we will check that n->curfn == curfn and n->opt > 0. Then n->opt - 1
// is the index in the variables list.
- ll.N.SetOpt(nil)
+ ln.SetOpt(nil)
// The compiler doesn't emit initializations for zero-width parameters or results.
- if ll.N.Type.Width == 0 {
+ if ln.Type.Width == 0 {
continue
}
- ll.N.Name.Curfn = Curfn
- switch ll.N.Class {
+ ln.Name.Curfn = Curfn
+ switch ln.Class {
case PAUTO:
- if haspointers(ll.N.Type) {
- ll.N.SetOpt(int32(len(result)))
- result = append(result, ll.N)
+ if haspointers(ln.Type) {
+ ln.SetOpt(int32(len(result)))
+ result = append(result, ln)
}
case PPARAM, PPARAMOUT:
- ll.N.SetOpt(int32(len(result)))
- result = append(result, ll.N)
+ ln.SetOpt(int32(len(result)))
+ result = append(result, ln)
}
}
}
return result
}
-// A pretty printer for control flow graphs. Takes an array of BasicBlock*s.
+// A pretty printer for control flow graphs. Takes an array of BasicBlock*s.
func printcfg(cfg []*BasicBlock) {
for _, bb := range cfg {
printblock(bb)
}
// Assigns a reverse post order number to each connected basic block using the
-// standard algorithm. Unconnected blocks will not be affected.
+// standard algorithm. Unconnected blocks will not be affected.
func reversepostorder(root *BasicBlock, rpo *int32) {
root.mark = VISITED
for _, bb := range root.succ {
func (x blockrpocmp) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x blockrpocmp) Less(i, j int) bool { return x[i].rpo < x[j].rpo }
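The reverse post order numbering that `blockrpocmp` sorts by can be sketched as follows. This is a minimal illustration with invented field names, not the compiler's `BasicBlock`; it follows the standard DFS-based algorithm the comments above refer to:

```go
package main

import "fmt"

// block is a minimal basic block: successor edges plus an rpo number.
type block struct {
	succ []*block
	rpo  int32
	seen bool
}

// reversePostorder numbers the blocks reachable from b so that every
// block is numbered before all of its unvisited successors' subtrees
// finish, by decrementing *next on the way back out of the DFS.
func reversePostorder(b *block, next *int32) {
	b.seen = true
	for _, s := range b.succ {
		if !s.seen {
			reversePostorder(s, next)
		}
	}
	*next--
	b.rpo = *next
}

func main() {
	entry := &block{}
	then := &block{}
	exit := &block{}
	entry.succ = []*block{then, exit}
	then.succ = []*block{exit}
	n := int32(3) // number of blocks
	reversePostorder(entry, &n)
	fmt.Println(entry.rpo, then.rpo, exit.rpo) // 0 1 2
}
```

Sorting blocks by this number then yields a depth-first spanning tree with the entry block first, as the comment near `sort.Sort(blockrpocmp(cfg))` describes.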
-// A pattern matcher for call instructions. Returns true when the instruction
+// A pattern matcher for call instructions. Returns true when the instruction
// is a call to a specific package qualified function name.
func iscall(prog *obj.Prog, name *obj.LSym) bool {
if prog == nil {
}
// Walk backwards from a runtime·selectgo call up to its immediately dominating
-// runtime·newselect call. Any successor nodes of communication clause nodes
-// are implicit successors of the runtime·selectgo call node. The goal of this
+// runtime·newselect call. Any successor nodes of communication clause nodes
+// are implicit successors of the runtime·selectgo call node. The goal of this
// analysis is to add these missing edges to complete the control flow graph.
func addselectgosucc(selectgo *BasicBlock) {
var succ *BasicBlock
}
}
-// The entry point for the missing selectgo control flow algorithm. Takes an
+// The entry point for the missing selectgo control flow algorithm. Takes an
// array of BasicBlock*s containing selectgo calls.
func fixselectgo(selectgo []*BasicBlock) {
for _, bb := range selectgo {
}
}
-// Constructs a control flow graph from a sequence of instructions. This
+// Constructs a control flow graph from a sequence of instructions. This
// procedure is complicated by various sources of implicit control flow that are
-// not accounted for using the standard cfg construction algorithm. Returns an
+// not accounted for using the standard cfg construction algorithm. Returns an
// array of BasicBlock*s in control flow graph form (basic blocks ordered by
// their RPO number).
func newcfg(firstp *obj.Prog) []*BasicBlock {
- // Reset the opt field of each prog to nil. In the first and second
+ // Reset the opt field of each prog to nil. In the first and second
// passes, instructions that are labels temporarily use the opt field to
- // point to their basic block. In the third pass, the opt field reset
+ // point to their basic block. In the third pass, the opt field is reset
// to point to the predecessor of an instruction in its basic block.
for p := firstp; p != nil; p = p.Link {
p.Opt = nil
bb := newblock(firstp)
cfg = append(cfg, bb)
- for p := firstp; p != nil; p = p.Link {
+ for p := firstp; p != nil && p.As != obj.AEND; p = p.Link {
Thearch.Proginfo(p)
if p.To.Type == obj.TYPE_BRANCH {
if p.To.Val == nil {
}
// Loop through all basic blocks maximally growing the list of
- // contained instructions until a label is reached. Add edges
+ // contained instructions until a label is reached. Add edges
// for branches and fall-through instructions.
for _, bb := range cfg {
- for p := bb.last; p != nil; p = p.Link {
+ for p := bb.last; p != nil && p.As != obj.AEND; p = p.Link {
if p.Opt != nil && p != bb.last {
break
}
// Stop before an unreachable RET, to avoid creating
// unreachable control flow nodes.
if p.Link != nil && p.Link.As == obj.ARET && p.Link.Mode == 1 {
+ // TODO: remove after SSA is done. SSA does not
+ // generate any unreachable RET instructions.
break
}
}
// Add back links so the instructions in a basic block can be traversed
- // backward. This is the final state of the instruction opt field.
+ // backward. This is the final state of the instruction opt field.
for _, bb := range cfg {
p := bb.first
var prev *obj.Prog
rpo := int32(len(cfg))
reversepostorder(bb, &rpo)
- // Sort the basic blocks by their depth first number. The
+ // Sort the basic blocks by their depth first number. The
// array is now a depth-first spanning tree with the first
// node being the root.
sort.Sort(blockrpocmp(cfg))
// Unreachable control flow nodes are indicated by a -1 in the rpo
- // field. If we see these nodes something must have gone wrong in an
+ // field. If we see these nodes something must have gone wrong in an
// upstream compilation phase.
bb = cfg[0]
if bb.rpo == -1 {
}
// Computes the effects of an instruction on a set of
-// variables. The vars argument is an array of Node*s.
+// variables. The vars argument is an array of Node*s.
//
// The output vectors give bits for variables:
// uevar - used by this instruction
bvresetall(avarinit)
if prog.As == obj.ARET {
- // Return instructions implicitly read all the arguments. For
- // the sake of correctness, out arguments must be read. For the
+ // Return instructions implicitly read all the arguments. For
+ // the sake of correctness, out arguments must be read. For the
// sake of backtrace quality, we read in arguments as well.
//
// A return instruction with a p->to is a tail return, which brings
}
// Constructs a new liveness structure used to hold the global state of the
-// liveness computation. The cfg argument is an array of BasicBlock*s and the
+// liveness computation. The cfg argument is an array of BasicBlock*s and the
// vars argument is an array of Node*s.
func newliveness(fn *Node, ptxt *obj.Prog, cfg []*BasicBlock, vars []*Node) *Liveness {
result := new(Liveness)
fmt.Printf("\n")
}
-// Pretty print a variable node. Uses Pascal like conventions for pointers and
+// Pretty print a variable node. Uses Pascal-like conventions for pointers and
// addresses to avoid confusion with the C-like conventions used in the node variable
// names.
func printnode(node *Node) {
fmt.Printf(" %v%s%s", node, p, a)
}
-// Pretty print a list of variables. The vars argument is an array of Node*s.
+// Pretty print a list of variables. The vars argument is an array of Node*s.
func printvars(name string, bv Bvec, vars []*Node) {
fmt.Printf("%s:", name)
for i, node := range vars {
}
func checkauto(fn *Node, p *obj.Prog, n *Node) {
- for l := fn.Func.Dcl; l != nil; l = l.Next {
- if l.N.Op == ONAME && l.N.Class == PAUTO && l.N == n {
+ for _, ln := range fn.Func.Dcl {
+ if ln.Op == ONAME && ln.Class == PAUTO && ln == n {
return
}
}
}
fmt.Printf("checkauto %v: %v (%p; class=%d) not found in %p %v\n", funcSym(Curfn), n, n, n.Class, p, p)
- for l := fn.Func.Dcl; l != nil; l = l.Next {
- fmt.Printf("\t%v (%p; class=%d)\n", l.N, l.N, l.N.Class)
+ for _, ln := range fn.Func.Dcl {
+ fmt.Printf("\t%v (%p; class=%d)\n", ln, ln, ln.Class)
}
Yyerror("checkauto: invariant lost")
}
if isfunny(n) {
return
}
- var a *Node
var class Class
- for l := fn.Func.Dcl; l != nil; l = l.Next {
- a = l.N
+ for _, a := range fn.Func.Dcl {
class = a.Class &^ PHEAP
if a.Op == ONAME && (class == PPARAM || class == PPARAMOUT) && a == n {
return
}
fmt.Printf("checkparam %v: %v (%p; class=%d) not found in %v\n", Curfn, n, n, n.Class, p)
- for l := fn.Func.Dcl; l != nil; l = l.Next {
- fmt.Printf("\t%v (%p; class=%d)\n", l.N, l.N, l.N.Class)
+ for _, ln := range fn.Func.Dcl {
+ fmt.Printf("\t%v (%p; class=%d)\n", ln, ln, ln.Class)
}
Yyerror("checkparam: invariant lost")
}
}
}
-// Check instruction invariants. We assume that the nodes corresponding to the
+// Check instruction invariants. We assume that the nodes corresponding to the
// sources and destinations of memory operations will be declared in the
-// function. This is not strictly true, as is the case for the so-called funny
-// nodes and there are special cases to skip over that stuff. The analysis will
+// function. This is not strictly true, as is the case for the so-called funny
+// nodes and there are special cases to skip over that stuff. The analysis will
// fail if this invariant blindly changes.
func checkptxt(fn *Node, firstp *obj.Prog) {
if debuglive == 0 {
case TARRAY:
// The value of t->bound is -1 for slice types and >=0 for
- // for fixed array types. All other values are invalid.
+ // fixed array types. All other values are invalid.
if t.Bound < -1 {
Fatalf("onebitwalktype1: invalid bound, %v", t)
}
return int32(Curfn.Type.Argwid / int64(Widthptr))
}
-// Generates live pointer value maps for arguments and local variables. The
-// this argument and the in arguments are always assumed live. The vars
+// Generates live pointer value maps for arguments and local variables. The
+// this argument and the in arguments are always assumed live. The vars
// argument is an array of Node*s.
func onebitlivepointermap(lv *Liveness, liveout Bvec, vars []*Node, args Bvec, locals Bvec) {
var node *Node
return prog.As == obj.ATEXT || prog.As == obj.ACALL
}
-// Initializes the sets for solving the live variables. Visits all the
+// Initializes the sets for solving the live variables. Visits all the
// instructions in each basic block to summarize the information at each basic
// block.
func livenessprologue(lv *Liveness) {
}
}
- // Iterate through the blocks in reverse round-robin fashion. A work
- // queue might be slightly faster. As is, the number of iterations is
+ // Iterate through the blocks in reverse round-robin fashion. A work
+ // queue might be slightly faster. As is, the number of iterations is
// so low that it hardly seems to be worth the complexity.
change = 1
for change != 0 {
change = 0
- // Walk blocks in the general direction of propagation. This
+ // Walk blocks in the general direction of propagation. This
// improves convergence.
for i := len(lv.cfg) - 1; i >= 0; i-- {
bb := lv.cfg[i]
}
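The reverse round-robin fixpoint iteration described above can be sketched as a tiny backward liveness solver over bit sets. The shape (livein from uevar/varkill, liveout from successors, repeat until nothing changes) matches the comments here; the function and parameter names are invented:

```go
package main

import "fmt"

// solveLive computes livein/liveout for blocks indexed 0..n-1 in
// program order. succ lists successor indices; uevar and varkill are
// per-block bit masks, one bit per variable.
func solveLive(succ [][]int, uevar, varkill []uint) (livein, liveout []uint) {
	n := len(succ)
	livein = make([]uint, n)
	liveout = make([]uint, n)
	for changed := true; changed; {
		changed = false
		// Walk blocks in reverse, the general direction of
		// propagation, which improves convergence.
		for i := n - 1; i >= 0; i-- {
			var out uint
			for _, s := range succ[i] {
				out |= livein[s]
			}
			in := uevar[i] | (out &^ varkill[i])
			if in != livein[i] || out != liveout[i] {
				livein[i], liveout[i] = in, out
				changed = true
			}
		}
	}
	return livein, liveout
}

func main() {
	// Block 0 defines variable bit 0, block 1 uses it: b0 -> b1.
	succ := [][]int{{1}, {}}
	uevar := []uint{0, 1}
	varkill := []uint{1, 0}
	in, out := solveLive(succ, uevar, varkill)
	fmt.Println(in, out) // [0 1] [1 0]
}
```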
n = lv.vars[j]
if n.Class != PPARAM {
- yyerrorl(int(p.Lineno), "internal error: %v %v recorded as live on entry", Curfn.Func.Nname, Nconv(n, obj.FmtLong))
+ yyerrorl(int(p.Lineno), "internal error: %v %v recorded as live on entry, p.Pc=%v", Curfn.Func.Nname, Nconv(n, obj.FmtLong), p.Pc)
}
}
}
if msg != nil {
fmt_ = ""
fmt_ += fmt.Sprintf("%v: live at ", p.Line())
- if p.As == obj.ACALL && p.To.Node != nil {
- fmt_ += fmt.Sprintf("call to %s:", ((p.To.Node).(*Node)).Sym.Name)
+ if p.As == obj.ACALL && p.To.Sym != nil {
+ name := p.To.Sym.Name
+ i := strings.Index(name, ".")
+ if i >= 0 {
+ name = name[i+1:]
+ }
+ fmt_ += fmt.Sprintf("call to %s:", name)
} else if p.As == obj.ACALL {
fmt_ += "indirect call:"
} else {
fmt.Printf("\n")
}
-// Dumps an array of bitmaps to a symbol as a sequence of uint32 values. The
-// first word dumped is the total number of bitmaps. The second word is the
-// length of the bitmaps. All bitmaps are assumed to be of equal length. The
-// words that are followed are the raw bitmap words. The arr argument is an
+// Dumps an array of bitmaps to a symbol as a sequence of uint32 values. The
+// first word dumped is the total number of bitmaps. The second word is the
+// length of the bitmaps. All bitmaps are assumed to be of equal length. The
+// words that follow are the raw bitmap words. The arr argument is an
// array of Node*s.
func onebitwritesymbol(arr []Bvec, sym *Sym) {
var i int
}
}
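The symbol layout described in the comment above can be sketched in plain Go: first the count of bitmaps, then their common length in bits, then the raw words. This is an illustration of the layout only, not the compiler's emitter:

```go
package main

import "fmt"

// writeBitmaps serializes equal-length bitmaps as a flat uint32
// sequence: count word, length word, then the raw bitmap words.
func writeBitmaps(nbits uint32, maps [][]uint32) []uint32 {
	out := []uint32{uint32(len(maps)), nbits}
	for _, m := range maps {
		out = append(out, m...)
	}
	return out
}

func main() {
	maps := [][]uint32{{0b101}, {0b011}}
	fmt.Println(writeBitmaps(3, maps)) // [2 3 5 3]
}
```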
-// Entry pointer for liveness analysis. Constructs a complete CFG, solves for
+// Entry point for liveness analysis. Constructs a complete CFG, solves for
// the liveness of pointer variables in the function, and emits a runtime data
// structure read by the garbage collector.
func liveness(fn *Node, firstp *obj.Prog, argssym *Sym, livesym *Sym) {
onebitwritesymbol(lv.argslivepointers, argssym)
// Free everything.
- for l := fn.Func.Dcl; l != nil; l = l.Next {
- if l.N != nil {
- l.N.SetOpt(nil)
+ for _, ln := range fn.Func.Dcl {
+ if ln != nil {
+ ln.SetOpt(nil)
}
}
freeliveness(lv)
fmt.Printf("%v\n", p)
}
if p.As != obj.ACALL && p.To.Type == obj.TYPE_BRANCH && p.To.Val.(*obj.Prog) != nil && p.To.Val.(*obj.Prog).As == obj.AJMP {
- p.To.Val = chasejmp(p.To.Val.(*obj.Prog), &jmploop)
- if Debug['R'] != 0 && Debug['v'] != 0 {
- fmt.Printf("->%v\n", p)
+ if Debug['N'] == 0 {
+ p.To.Val = chasejmp(p.To.Val.(*obj.Prog), &jmploop)
+ if Debug['R'] != 0 && Debug['v'] != 0 {
+ fmt.Printf("->%v\n", p)
+ }
}
}
p.Opt = dead
}
-
if Debug['R'] != 0 && Debug['v'] != 0 {
fmt.Printf("\n")
}
// pass 4: elide JMP to next instruction.
// only safe if there are no jumps to JMPs anymore.
- if jmploop == 0 {
+ if jmploop == 0 && Debug['N'] == 0 {
var last *obj.Prog
for p := firstp; p != nil; p = p.Link {
if p.As == obj.AJMP && p.To.Type == obj.TYPE_BRANCH && p.To.Val == p.Link {
// Control flow analysis. The Flow structures hold predecessor and successor
// information as well as basic loop analysis.
//
-// graph = flowstart(firstp, 0);
+// graph = Flowstart(firstp, nil)
// ... use flow graph ...
-// flowend(graph); // free graph
+// Flowend(graph) // free graph
//
// Typical uses of the flow graph are to iterate over all the flow-relevant instructions:
//
-// for(f = graph->start; f != nil; f = f->link)
+// for f := graph.Start; f != nil; f = f.Link {}
//
// or, given an instruction f, to iterate over all the predecessors, which is
-// f->p1 and this list:
+// f.P1 and this list:
//
-// for(f2 = f->p2; f2 != nil; f2 = f2->p2link)
+// for f2 := f.P2; f2 != nil; f2 = f2.P2link {}
//
-// The size argument to flowstart specifies an amount of zeroed memory
-// to allocate in every f->data field, for use by the client.
-// If size == 0, f->data will be nil.
+// The second argument (newData) to Flowstart specifies a func to create an object
+// for every f.Data field, for use by the client.
+// If newData is nil, f.Data will be nil.
var flowmark int
// will not have flow graphs and consequently will not be optimized.
const MaxFlowProg = 50000
+var ffcache []Flow // reusable []Flow, to reduce allocation
+
+func growffcache(n int) {
+ if n > cap(ffcache) {
+ n = (n * 5) / 4
+ if n > MaxFlowProg {
+ n = MaxFlowProg
+ }
+ ffcache = make([]Flow, n)
+ }
+ ffcache = ffcache[:n]
+}
+
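The `ffcache` change above keeps one backing slice around and re-slices it per call, growing by 25% up to a hard cap, so steady-state compiles avoid reallocating. A generic sketch of that pattern (the element type and cap value here are arbitrary):

```go
package main

import "fmt"

const maxCache = 50000 // arbitrary cap for this sketch

var cache []int // reusable backing slice, like ffcache

// grow returns a length-n view of the cache, reallocating with 25%
// headroom only when the current capacity is too small.
func grow(n int) []int {
	if n > cap(cache) {
		m := n * 5 / 4
		if m > maxCache {
			m = maxCache
		}
		cache = make([]int, m)
	}
	cache = cache[:n]
	return cache
}

func main() {
	a := grow(10)
	b := grow(8) // reuses the same backing array, no allocation
	fmt.Println(len(a), len(b), cap(cache) >= 10) // 10 8 true
}
```

As in the original, callers must clear the reused prefix (see the `clear` loop after `Flowend`) since the backing memory still holds values from the previous function.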
func Flowstart(firstp *obj.Prog, newData func() interface{}) *Graph {
// Count and mark instructions to annotate.
nf := 0
// Allocate annotations and assign to instructions.
graph := new(Graph)
- ff := make([]Flow, nf)
+
+ growffcache(nf)
+ ff := ffcache
start := &ff[0]
id := 0
var last *Flow
f.Prog.Info.Flags = 0 // drop cached proginfo
f.Prog.Opt = nil
}
+ clear := ffcache[:graph.Num]
+ for i := range clear {
+ clear[i] = Flow{}
+ }
}
// find looping structure
me = r1.Rpo
d = -1
- // rpo2r[r->rpo] == r protects against considering dead code,
- // which has r->rpo == 0.
+ // rpo2r[r.Rpo] == r protects against considering dead code,
+ // which has r.Rpo == 0.
if r1.P1 != nil && rpo2r[r1.P1.Rpo] == r1.P1 && r1.P1.Rpo < me {
d = r1.P1.Rpo
}
// Build list of all mergeable variables.
var vars []*TempVar
- for l := Curfn.Func.Dcl; l != nil; l = l.Next {
- if n := l.N; canmerge(n) {
+ for _, n := range Curfn.Func.Dcl {
+ if canmerge(n) {
v := &TempVar{}
vars = append(vars, v)
n.SetOpt(v)
// Traverse live range of each variable to set start, end.
// Each flood uses a new value of gen so that we don't have
- // to clear all the r->active words after each variable.
+ // to clear all the r.Active words after each variable.
gen := uint32(0)
for _, v := range vars {
}
// Delete merged nodes from declaration list.
- for lp := &Curfn.Func.Dcl; ; {
- l := *lp
- if l == nil {
- break
- }
-
- Curfn.Func.Dcl.End = l
- n := l.N
+ dcl := make([]*Node, 0, len(Curfn.Func.Dcl)-nkill)
+ for _, n := range Curfn.Func.Dcl {
v, _ := n.Opt().(*TempVar)
if v != nil && (v.merge != nil || v.removed) {
- *lp = l.Next
continue
}
-
- lp = &l.Next
+ dcl = append(dcl, n)
}
+ Curfn.Func.Dcl = dcl
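The deletion loop above switches from unlinking list nodes to building a fresh slice of survivors, preallocating the final length. A minimal version of that filter, using strings instead of `*Node` (and assuming, as the original count `nkill` guarantees, that every removed name actually appears in the input):

```go
package main

import "fmt"

// keep returns the declarations not marked as removed, preallocated
// to the exact surviving length.
func keep(dcl []string, removed map[string]bool) []string {
	out := make([]string, 0, len(dcl)-len(removed))
	for _, n := range dcl {
		if removed[n] {
			continue
		}
		out = append(out, n)
	}
	return out
}

func main() {
	dcl := []string{"a", "tmp1", "b", "tmp2"}
	merged := map[string]bool{"tmp1": true, "tmp2": true}
	fmt.Println(keep(dcl, merged)) // [a b]
}
```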
// Clear aux structures.
for _, v := range vars {
// from memory without being rechecked. Other variables need to be checked on
// each load.
-var killed int // f->data is either nil or &killed
+var killed int // f.Data is either nil or &killed
func nilopt(firstp *obj.Prog) {
g := Flowstart(firstp, nil)
//
// For flag_race it modifies the function as follows:
//
-// 1. It inserts a call to racefuncenter at the beginning of each function.
+// 1. It inserts a call to racefuncenterfp at the beginning of each function.
// 2. It inserts a call to racefuncexit at the end of each function.
// 3. It inserts a call to raceread before each memory read.
// 4. It inserts a call to racewrite before each memory write.
// at best instrumentation would cause infinite recursion.
var omit_pkgs = []string{"runtime/internal/atomic", "runtime/internal/sys", "runtime", "runtime/race", "runtime/msan"}
-// Only insert racefuncenter/racefuncexit into the following packages.
+// Only insert racefuncenterfp/racefuncexit into the following packages.
// Memory accesses in the packages are either uninteresting or will cause false positives.
var norace_inst_pkgs = []string{"sync", "sync/atomic"}
}
func instrument(fn *Node) {
- if ispkgin(omit_pkgs) || fn.Func.Norace {
+ if ispkgin(omit_pkgs) || fn.Func.Pragma&Norace != 0 {
return
}
if flag_race == 0 || !ispkgin(norace_inst_pkgs) {
- instrumentlist(fn.Nbody, nil)
+ instrumentslice(fn.Nbody.Slice(), nil)
// nothing interesting for race detector in fn->enter
- instrumentlist(fn.Func.Exit, nil)
+ instrumentslice(fn.Func.Exit.Slice(), nil)
}
if flag_race != 0 {
nodpc.Type = Types[TUINTPTR]
nodpc.Xoffset = int64(-Widthptr)
nd := mkcall("racefuncenter", nil, nil, nodpc)
- fn.Func.Enter = concat(list1(nd), fn.Func.Enter)
+ fn.Func.Enter.Set(append([]*Node{nd}, fn.Func.Enter.Slice()...))
nd = mkcall("racefuncexit", nil, nil)
- fn.Func.Exit = list(fn.Func.Exit, nd)
+ fn.Func.Exit.Append(nd)
}
if Debug['W'] != 0 {
s := fmt.Sprintf("after instrument %v", fn.Func.Nname.Sym)
- dumplist(s, fn.Nbody)
+ dumpslice(s, fn.Nbody.Slice())
s = fmt.Sprintf("enter %v", fn.Func.Nname.Sym)
- dumplist(s, fn.Func.Enter)
+ dumpslice(s, fn.Func.Enter.Slice())
s = fmt.Sprintf("exit %v", fn.Func.Nname.Sym)
- dumplist(s, fn.Func.Exit)
+ dumpslice(s, fn.Func.Exit.Slice())
}
}
}
}
+func instrumentslice(l []*Node, init **NodeList) {
+ for i := range l {
+ var instr *NodeList
+ instrumentnode(&l[i], &instr, 0, 0)
+ if init == nil {
+ l[i].Ninit = concat(l[i].Ninit, instr)
+ } else {
+ *init = concat(*init, instr)
+ }
+ }
+}
+
// walkexpr and walkstmt combined
// walks the tree and adds calls to the
// instrumentation code to top-level (statement) nodes' init
if n.Op != OBLOCK { // OBLOCK is handled above in a special way.
instrumentlist(n.List, init)
}
- instrumentlist(n.Nbody, nil)
+ instrumentslice(n.Nbody.Slice(), nil)
instrumentlist(n.Rlist, nil)
*np = n
}
}
}
+func foreachslice(l []*Node, f func(*Node, interface{}), c interface{}) {
+ for _, n := range l {
+ foreachnode(n, f, c)
+ }
+}
+
func foreach(n *Node, f func(*Node, interface{}), c interface{}) {
foreachlist(n.Ninit, f, c)
foreachnode(n.Left, f, c)
foreachnode(n.Right, f, c)
foreachlist(n.List, f, c)
- foreachlist(n.Nbody, f, c)
+ foreachslice(n.Nbody.Slice(), f, c)
foreachlist(n.Rlist, f, c)
}
}
decldepth++
- typechecklist(n.Nbody, Etop)
+ typecheckslice(n.Nbody.Slice(), Etop)
decldepth--
}
// to avoid erroneous processing by racewalk.
n.List = nil
- var body *NodeList
+ var body []*Node
var init *NodeList
switch t.Etype {
default:
if v1 == nil {
body = nil
} else if v2 == nil {
- body = list1(Nod(OAS, v1, hv1))
+ body = []*Node{Nod(OAS, v1, hv1)}
} else {
a := Nod(OAS2, nil, nil)
a.List = list(list1(v1), v2)
a.Rlist = list(list1(hv1), Nod(OIND, hp, nil))
- body = list1(a)
+ body = []*Node{a}
// Advance pointer as part of increment.
// We used to advance the pointer before executing the loop body,
if v1 == nil {
body = nil
} else if v2 == nil {
- body = list1(Nod(OAS, v1, key))
+ body = []*Node{Nod(OAS, v1, key)}
} else {
val := Nod(ODOT, hit, valname)
val = Nod(OIND, val, nil)
a := Nod(OAS2, nil, nil)
a.List = list(list1(v1), v2)
a.Rlist = list(list1(key), val)
- body = list1(a)
+ body = []*Node{a}
}
// orderstmt arranged for a copy of the channel variable.
if v1 == nil {
body = nil
} else {
- body = list1(Nod(OAS, v1, hv1))
+ body = []*Node{Nod(OAS, v1, hv1)}
}
// orderstmt arranged for a copy of the string variable.
body = nil
if v1 != nil {
- body = list1(Nod(OAS, v1, ohv1))
+ body = []*Node{Nod(OAS, v1, ohv1)}
}
if v2 != nil {
- body = list(body, Nod(OAS, v2, hv2))
+ body = append(body, Nod(OAS, v2, hv2))
}
}
typechecklist(n.Left.Ninit, Etop)
typecheck(&n.Left, Erv)
typecheck(&n.Right, Etop)
- typechecklist(body, Etop)
- n.Nbody = concat(body, n.Nbody)
+ typecheckslice(body, Etop)
+ n.Nbody.Set(append(body, n.Nbody.Slice()...))
walkstmt(&n)
lineno = int32(lno)
if v1 == nil || v2 != nil {
return false
}
- if n.Nbody == nil || n.Nbody.N == nil || n.Nbody.Next != nil {
+ if len(n.Nbody.Slice()) != 1 || n.Nbody.Slice()[0] == nil {
return false
}
- stmt := n.Nbody.N // only stmt in body
+ stmt := n.Nbody.Slice()[0] // only stmt in body
if stmt.Op != OAS || stmt.Left.Op != OINDEX {
return false
}
// }
n.Op = OIF
- n.Nbody = nil
+ n.Nbody.Set(nil)
n.Left = Nod(ONE, Nod(OLEN, a, nil), Nodintconst(0))
// hp = &a[0]
tmp = Nod(OADDR, tmp, nil)
tmp = Nod(OCONVNOP, tmp, nil)
tmp.Type = Ptrto(Types[TUINT8])
- n.Nbody = list(n.Nbody, Nod(OAS, hp, tmp))
+ n.Nbody.Append(Nod(OAS, hp, tmp))
// hn = len(a) * sizeof(elem(a))
hn := temp(Types[TUINTPTR])
tmp = Nod(OLEN, a, nil)
tmp = Nod(OMUL, tmp, Nodintconst(elemsize))
tmp = conv(tmp, Types[TUINTPTR])
- n.Nbody = list(n.Nbody, Nod(OAS, hn, tmp))
+ n.Nbody.Append(Nod(OAS, hn, tmp))
// memclr(hp, hn)
fn := mkcall("memclr", nil, nil, hp, hn)
- n.Nbody = list(n.Nbody, fn)
+ n.Nbody.Append(fn)
// i = len(a) - 1
v1 = Nod(OAS, v1, Nod(OSUB, Nod(OLEN, a, nil), Nodintconst(1)))
- n.Nbody = list(n.Nbody, v1)
+ n.Nbody.Append(v1)
typecheck(&n.Left, Erv)
- typechecklist(n.Nbody, Etop)
+ typecheckslice(n.Nbody.Slice(), Etop)
walkstmt(&n)
return true
}
}
// Builds a type representing a Bucket structure for
-// the given map type. This type is not visible to users -
+// the given map type. This type is not visible to users -
// we include only enough information to generate a correct GC
// program for it.
-// Make sure this stays in sync with ../../runtime/hashmap.go!
+// Make sure this stays in sync with ../../../../runtime/hashmap.go!
const (
BUCKETSIZE = 8
MAXKEYSIZE = 128
func makefield(name string, t *Type) *Type {
f := typ(TFIELD)
f.Type = t
- f.Sym = new(Sym)
- f.Sym.Name = name
+ f.Sym = nopkg.Lookup(name)
return f
}
}
// Builds a type representing a Hmap structure for the given map type.
-// Make sure this stays in sync with ../../runtime/hashmap.go!
+// Make sure this stays in sync with ../../../../runtime/hashmap.go!
func hmap(t *Type) *Type {
if t.Hmap != nil {
return t.Hmap
}
// build a struct:
- // hash_iter {
+ // hiter {
// key *Key
// val *Value
// t *MapType
// bucket uintptr
// checkBucket uintptr
// }
- // must match ../../runtime/hashmap.go:hash_iter.
+ // must match ../../../../runtime/hashmap.go:hiter.
var field [12]*Type
field[0] = makefield("key", Ptrto(t.Down))
}
// If we are compiling the runtime package, there are two runtime packages around
- // -- localpkg and Runtimepkg. We don't want to produce import path symbols for
+ // -- localpkg and Runtimepkg. We don't want to produce import path symbols for
// both of them, so just produce one for localpkg.
if myimportpath == "runtime" && p == Runtimepkg {
return
}
// uncommonType
-// ../../runtime/type.go:/uncommonType
+// ../../../../runtime/type.go:/uncommonType
func dextratype(sym *Sym, off int, t *Type, ptroff int) int {
m := methods(t)
if t.Sym == nil && len(m) == 0 {
ot := off
s := sym
- if t.Sym != nil {
- ot = dgostringptr(s, ot, t.Sym.Name)
- if t != Types[t.Etype] && t != errortype {
- ot = dgopkgpath(s, ot, t.Sym.Pkg)
- } else {
- ot = dgostringptr(s, ot, "")
- }
+ if t.Sym != nil && t != Types[t.Etype] && t != errortype {
+ ot = dgopkgpath(s, ot, t.Sym.Pkg)
} else {
ot = dgostringptr(s, ot, "")
- ot = dgostringptr(s, ot, "")
}
// slice header
// methods
for _, a := range m {
// method
- // ../../runtime/type.go:/method
+ // ../../../../runtime/type.go:/method
ot = dgostringptr(s, ot, a.name)
ot = dgopkgpath(s, ot, a.pkg)
algsym = dalgsym(t)
}
- var sptr *Sym
tptr := Ptrto(t)
if !Isptr[t.Etype] && (t.Sym != nil || methods(tptr) != nil) {
- sptr = dtypesym(tptr)
- } else {
- sptr = weaktypesym(tptr)
+ sptr := dtypesym(tptr)
+ r := obj.Addrel(Linksym(s))
+ r.Off = 0
+ r.Siz = 0
+ r.Sym = sptr.Lsym
+ r.Type = obj.R_USETYPE
}
gcsym, useGCProg, ptrdata := dgcsym(t)
- // ../../pkg/reflect/type.go:/^type.commonType
+ // ../../../../reflect/type.go:/^type.rtype
// actual type structure
- // type commonType struct {
+ // type rtype struct {
// size uintptr
- // ptrsize uintptr
+ // ptrdata uintptr
// hash uint32
// _ uint8
// align uint8
// fieldAlign uint8
// kind uint8
- // alg unsafe.Pointer
- // gcdata unsafe.Pointer
+ // alg *typeAlg
+ // gcdata *byte
// string *string
- // *extraType
- // ptrToThis *Type
+ // *uncommonType
// }
ot = duintptr(s, ot, uint64(t.Width))
ot = duintptr(s, ot, uint64(ptrdata))
} else {
ot = dsymptr(s, ot, algsym, 0)
}
- ot = dsymptr(s, ot, gcsym, 0)
+ ot = dsymptr(s, ot, gcsym, 0) // gcdata
p := Tconv(t, obj.FmtLeft|obj.FmtUnsigned)
- //print("dcommontype: %s\n", p);
- ot = dgostringptr(s, ot, p) // string
+ _, symdata := stringsym(p) // string
+ ot = dsymptr(s, ot, symdata, 0)
+ ot = duintxx(s, ot, uint64(len(p)), Widthint)
+ //fmt.Printf("dcommontype: %s\n", p)
// skip pointer to extraType,
// which follows the rest of this type structure.
// otherwise linker will assume 0.
ot += Widthptr
- ot = dsymptr(s, ot, sptr, 0) // ptrto type
return ot
}
switch t.Etype {
default:
ot = dcommontype(s, ot, t)
- xt = ot - 2*Widthptr
+ xt = ot - 1*Widthptr
case TARRAY:
if t.Bound >= 0 {
- // ../../runtime/type.go:/ArrayType
+ // ../../../../runtime/type.go:/arrayType
s1 := dtypesym(t.Type)
t2 := typ(TARRAY)
t2.Bound = -1 // slice
s2 := dtypesym(t2)
ot = dcommontype(s, ot, t)
- xt = ot - 2*Widthptr
+ xt = ot - 1*Widthptr
ot = dsymptr(s, ot, s1, 0)
ot = dsymptr(s, ot, s2, 0)
ot = duintptr(s, ot, uint64(t.Bound))
} else {
- // ../../runtime/type.go:/SliceType
+ // ../../../../runtime/type.go:/sliceType
s1 := dtypesym(t.Type)
ot = dcommontype(s, ot, t)
- xt = ot - 2*Widthptr
+ xt = ot - 1*Widthptr
ot = dsymptr(s, ot, s1, 0)
}
- // ../../runtime/type.go:/ChanType
+ // ../../../../runtime/type.go:/chanType
case TCHAN:
s1 := dtypesym(t.Type)
ot = dcommontype(s, ot, t)
- xt = ot - 2*Widthptr
+ xt = ot - 1*Widthptr
ot = dsymptr(s, ot, s1, 0)
ot = duintptr(s, ot, uint64(t.Chan))
}
ot = dcommontype(s, ot, t)
- xt = ot - 2*Widthptr
+ xt = ot - 1*Widthptr
ot = duint8(s, ot, uint8(obj.Bool2int(isddd)))
// two slice headers: in and out.
dtypesym(a.type_)
}
- // ../../../runtime/type.go:/InterfaceType
+ // ../../../../runtime/type.go:/interfaceType
ot = dcommontype(s, ot, t)
- xt = ot - 2*Widthptr
+ xt = ot - 1*Widthptr
ot = dsymptr(s, ot, s, ot+Widthptr+2*Widthint)
ot = duintxx(s, ot, uint64(n), Widthint)
ot = duintxx(s, ot, uint64(n), Widthint)
for _, a := range m {
- // ../../../runtime/type.go:/imethod
+ // ../../../../runtime/type.go:/imethod
ot = dgostringptr(s, ot, a.name)
ot = dgopkgpath(s, ot, a.pkg)
ot = dsymptr(s, ot, dtypesym(a.type_), 0)
}
- // ../../../runtime/type.go:/MapType
+ // ../../../../runtime/type.go:/mapType
case TMAP:
s1 := dtypesym(t.Down)
s3 := dtypesym(mapbucket(t))
s4 := dtypesym(hmap(t))
ot = dcommontype(s, ot, t)
- xt = ot - 2*Widthptr
+ xt = ot - 1*Widthptr
ot = dsymptr(s, ot, s1, 0)
ot = dsymptr(s, ot, s2, 0)
ot = dsymptr(s, ot, s3, 0)
case TPTR32, TPTR64:
if t.Type.Etype == TANY {
- // ../../runtime/type.go:/UnsafePointerType
+ // ../../../../runtime/type.go:/UnsafePointerType
ot = dcommontype(s, ot, t)
break
}
- // ../../runtime/type.go:/PtrType
+ // ../../../../runtime/type.go:/ptrType
s1 := dtypesym(t.Type)
ot = dcommontype(s, ot, t)
- xt = ot - 2*Widthptr
+ xt = ot - 1*Widthptr
ot = dsymptr(s, ot, s1, 0)
- // ../../runtime/type.go:/StructType
+ // ../../../../runtime/type.go:/structType
// for security, only the exported fields.
case TSTRUCT:
n := 0
}
ot = dcommontype(s, ot, t)
- xt = ot - 2*Widthptr
+ xt = ot - 1*Widthptr
ot = dsymptr(s, ot, s, ot+Widthptr+2*Widthint)
ot = duintxx(s, ot, uint64(n), Widthint)
ot = duintxx(s, ot, uint64(n), Widthint)
for t1 := t.Type; t1 != nil; t1 = t1.Down {
- // ../../runtime/type.go:/structField
+ // ../../../../runtime/type.go:/structField
if t1.Sym != nil && t1.Embedded == 0 {
ot = dgostringptr(s, ot, t1.Sym.Name)
if exportname(t1.Sym.Name) {
	// we want to be able to find.
if t.Sym == nil {
switch t.Etype {
- case TPTR32, TPTR64:
- // The ptrto field of the type data cannot be relied on when
- // dynamic linking: a type T may be defined in a module that makes
- // no use of pointers to that type, but another module can contain
- // a package that imports the first one and does use *T pointers.
- // The second module will end up defining type data for *T and a
- // type.*T symbol pointing at it. It's important that calling
- // .PtrTo() on the reflect.Type for T returns this type data and
- // not some synthesized object, so we need reflect to be able to
- // find it!
- if !Ctxt.Flag_dynlink {
- break
- }
- fallthrough
- case TARRAY, TCHAN, TFUNC, TMAP:
+ case TPTR32, TPTR64, TARRAY, TCHAN, TFUNC, TMAP:
slink := typelinksym(t)
dsymptr(slink, 0, s, 0)
ggloblsym(slink, int32(Widthptr), int16(dupok|obj.RODATA))
ggloblsym(eqfunc, int32(Widthptr), obj.DUPOK|obj.RODATA)
}
- // ../../runtime/alg.go:/typeAlg
+ // ../../../../runtime/alg.go:/typeAlg
ot := 0
ot = dsymptr(s, ot, hashfunc, 0)
// be multiples of four words. On 32-bit systems that's 16 bytes, and
// all size classes >= 16 bytes are 16-byte aligned, so no real constraint.
// On 64-bit systems, that's 32 bytes, and 32-byte alignment is guaranteed
-// for size classes >= 256 bytes. On a 64-bit sytem, 256 bytes allocated
+// for size classes >= 256 bytes. On a 64-bit system, 256 bytes allocated
// is 32 pointers, the bits for which fit in 4 bytes. So maxPtrmaskBytes
// must be >= 4.
//
}
}
- typechecklist(ncase.Nbody, Etop)
+ typecheckslice(ncase.Nbody.Slice(), Etop)
}
sel.Xoffset = int64(count)
i := count(sel.List)
// optimization: zero-case select
- var init *NodeList
+ var init []*Node
var r *Node
var n *Node
var var_ *Node
var selv *Node
var cas *Node
if i == 0 {
- sel.Nbody = list1(mkcall("block", nil, nil))
+ sel.Nbody.Set([]*Node{mkcall("block", nil, nil)})
goto out
}
a := Nod(OIF, nil, nil)
a.Left = Nod(OEQ, ch, nodnil())
- a.Nbody = list1(mkcall("block", nil, &l))
+ a.Nbody.Set([]*Node{mkcall("block", nil, &l)})
typecheck(&a, Etop)
l = list(l, a)
l = list(l, n)
}
- l = concat(l, cas.Nbody)
- sel.Nbody = l
+ s := make([]*Node, 0, count(l))
+ for ll := l; ll != nil; ll = ll.Next {
+ s = append(s, ll.N)
+ }
+ s = append(s, cas.Nbody.Slice()...)
+ sel.Nbody.Set(s)
goto out
}
}
typecheck(&r.Left, Erv)
- r.Nbody = cas.Nbody
- r.Rlist = concat(dflt.Ninit, dflt.Nbody)
- sel.Nbody = list1(r)
+ r.Nbody.Set(cas.Nbody.Slice())
+ r.Rlist = concat(dflt.Ninit, dflt.Nbody.NodeList())
+ sel.Nbody.Set([]*Node{r})
goto out
}
- init = sel.Ninit
+ init = make([]*Node, 0, count(sel.Ninit))
+ for ll := sel.Ninit; ll != nil; ll = ll.Next {
+ init = append(init, ll.N)
+ }
sel.Ninit = nil
// generate sel-struct
selv = temp(selecttype(int32(sel.Xoffset)))
r = Nod(OAS, selv, nil)
typecheck(&r, Etop)
- init = list(init, r)
+ init = append(init, r)
var_ = conv(conv(Nod(OADDR, selv, nil), Types[TUNSAFEPTR]), Ptrto(Types[TUINT8]))
r = mkcall("newselect", nil, nil, var_, Nodintconst(selv.Type.Width), Nodintconst(sel.Xoffset))
typecheck(&r, Etop)
- init = list(init, r)
+ init = append(init, r)
// register cases
for l := sel.List; l != nil; l = l.Next {
}
// selv is no longer alive after use.
- r.Nbody = list(r.Nbody, Nod(OVARKILL, selv, nil))
+ r.Nbody.Append(Nod(OVARKILL, selv, nil))
- r.Nbody = concat(r.Nbody, cas.Nbody)
- r.Nbody = list(r.Nbody, Nod(OBREAK, nil, nil))
- init = list(init, r)
+ r.Nbody.Append(cas.Nbody.Slice()...)
+ r.Nbody.Append(Nod(OBREAK, nil, nil))
+ init = append(init, r)
}
// run the select
setlineno(sel)
- init = list(init, mkcall("selectgo", nil, nil, var_))
- sel.Nbody = init
+ init = append(init, mkcall("selectgo", nil, nil, var_))
+ sel.Nbody.Set(init)
out:
sel.List = nil
- walkstmtlist(sel.Nbody)
+ walkstmtslice(sel.Nbody.Slice())
lineno = int32(lno)
}
// init1 walks the AST starting at n, and accumulates in out
// the list of definitions needing init code in dependency order.
-func init1(n *Node, out **NodeList) {
+func init1(n *Node, out *[]*Node) {
if n == nil {
return
}
Fatalf("init1: bad defn")
case ODCLFUNC:
- init2list(defn.Nbody, out)
+ init2slice(defn.Nbody.Slice(), out)
case OAS:
if defn.Left != n {
if Debug['%'] != 0 {
Dump("nonstatic", defn)
}
- *out = list(*out, defn)
+ *out = append(*out, defn)
}
case OAS2FUNC, OAS2MAPR, OAS2DOTTYPE, OAS2RECV:
if Debug['%'] != 0 {
Dump("nonstatic", defn)
}
- *out = list(*out, defn)
+ *out = append(*out, defn)
defn.Initorder = InitDone
}
}
}
// recurse over n, doing init1 everywhere.
-func init2(n *Node, out **NodeList) {
+func init2(n *Node, out *[]*Node) {
if n == nil || n.Initorder == InitDone {
return
}
init2list(n.Ninit, out)
init2list(n.List, out)
init2list(n.Rlist, out)
- init2list(n.Nbody, out)
+ init2slice(n.Nbody.Slice(), out)
if n.Op == OCLOSURE {
- init2list(n.Func.Closure.Nbody, out)
+ init2slice(n.Func.Closure.Nbody.Slice(), out)
}
if n.Op == ODOTMETH || n.Op == OCALLPART {
init2(n.Type.Nname, out)
}
}
-func init2list(l *NodeList, out **NodeList) {
+func init2list(l *NodeList, out *[]*Node) {
for ; l != nil; l = l.Next {
init2(l.N, out)
}
}
-func initreorder(l *NodeList, out **NodeList) {
+func init2slice(l []*Node, out *[]*Node) {
+ for _, n := range l {
+ init2(n, out)
+ }
+}
+
+func initreorder(l *NodeList, out *[]*Node) {
var n *Node
for ; l != nil; l = l.Next {
// initfix computes initialization order for a list l of top-level
// declarations and outputs the corresponding list of statements
// to include in the init() function body.
-func initfix(l *NodeList) *NodeList {
- var lout *NodeList
+func initfix(l *NodeList) []*Node {
+ var lout []*Node
initplans = make(map[*Node]*InitPlan)
lno := int(lineno)
initreorder(l, &lout)
// compilation of top-level (static) assignments
// into DATA statements if at all possible.
-func staticinit(n *Node, out **NodeList) bool {
+func staticinit(n *Node, out *[]*Node) bool {
if n.Op != ONAME || n.Class != PEXTERN || n.Name.Defn == nil || n.Name.Defn.Op != OAS {
Fatalf("staticinit")
}
// like staticassign but we are copying an already
// initialized value r.
-func staticcopy(l *Node, r *Node, out **NodeList) bool {
+func staticcopy(l *Node, r *Node, out *[]*Node) bool {
if r.Op != ONAME {
return false
}
if staticcopy(l, r, out) {
return true
}
- *out = list(*out, Nod(OAS, l, r))
+ *out = append(*out, Nod(OAS, l, r))
return true
case OLITERAL:
rr.Type = ll.Type
rr.Xoffset += e.Xoffset
setlineno(rr)
- *out = list(*out, Nod(OAS, ll, rr))
+ *out = append(*out, Nod(OAS, ll, rr))
}
}
}
return false
}
-func staticassign(l *Node, r *Node, out **NodeList) bool {
+func staticassign(l *Node, r *Node, out *[]*Node) bool {
for r.Op == OCONVNOP {
r = r.Left
}
// Init underlying literal.
if !staticassign(a, r.Left, out) {
- *out = list(*out, Nod(OAS, a, r.Left))
+ *out = append(*out, Nod(OAS, a, r.Left))
}
return true
}
*a = n
a.Orig = a // completely separate copy
if !staticassign(a, e.Expr, out) {
- *out = list(*out, Nod(OAS, a, e.Expr))
+ *out = append(*out, Nod(OAS, a, e.Expr))
}
}
}
break
case OCLOSURE:
- if r.Func.Cvars == nil {
+ if len(r.Func.Cvars.Slice()) == 0 {
// Closures with no captured variables are globals,
// so the assignment can be done at link time.
n := *l
litas(var_, a, init)
// count the initializers
- b := int64(0)
-
+ b := 0
for l := n.List; l != nil; l = l.Next {
r := l.N
if r.Op != OKEY {
if b != 0 {
// build type [count]struct { a Tindex, b Tvalue }
t := n.Type
-
tk := t.Down
tv := t.Type
symb := Lookup("b")
- t = typ(TFIELD)
- t.Type = tv
- t.Sym = symb
+ fieldb := typ(TFIELD)
+ fieldb.Type = tv
+ fieldb.Sym = symb
syma := Lookup("a")
- t1 := t
- t = typ(TFIELD)
- t.Type = tk
- t.Sym = syma
- t.Down = t1
+ fielda := typ(TFIELD)
+ fielda.Type = tk
+ fielda.Sym = syma
+ fielda.Down = fieldb
- t1 = t
- t = typ(TSTRUCT)
- t.Type = t1
+ tstruct := typ(TSTRUCT)
+ tstruct.Type = fielda
- t1 = t
- t = typ(TARRAY)
- t.Bound = b
- t.Type = t1
+ tarr := typ(TARRAY)
+ tarr.Bound = int64(b)
+ tarr.Type = tstruct
- dowidth(t)
+ // TODO(josharian): suppress alg generation for these types?
+ dowidth(tarr)
// make and initialize static array
- vstat := staticname(t, ctxt)
+ vstat := staticname(tarr, ctxt)
b := int64(0)
for l := n.List; l != nil; l = l.Next {
r = Nod(OAS, r, a)
a = Nod(OFOR, nil, nil)
- a.Nbody = list1(r)
+ a.Nbody.Set([]*Node{r})
a.Ninit = list1(Nod(OAS, index, Nodintconst(0)))
- a.Left = Nod(OLT, index, Nodintconst(t.Bound))
+ a.Left = Nod(OLT, index, Nodintconst(tarr.Bound))
a.Right = Nod(OAS, index, Nod(OADD, index, Nodintconst(1)))
typecheck(&a, Etop)
}
// put in dynamic entries one-at-a-time
- var key *Node
-
- var val *Node
+ var key, val *Node
for l := n.List; l != nil; l = l.Next {
r := l.N
return -1
}
+// stataddr sets nam to the static address of n and reports whether it succeeded.
func stataddr(nam *Node, n *Node) bool {
if n == nil {
return false
return &p.E[len(p.E)-1]
}
-func gen_as_init(n *Node) bool {
+// gen_as_init attempts to emit static data for n and reports whether it succeeded.
+// If reportOnly is true, it does not emit static data and does not modify the AST.
+func gen_as_init(n *Node, reportOnly bool) bool {
var nr *Node
var nl *Node
var nam Node
case OSLICEARR:
if nr.Right.Op == OKEY && nr.Right.Left == nil && nr.Right.Right == nil {
nr = nr.Left
- gused(nil) // in case the data is the dest of a goto
nl := nr
if nr == nil || nr.Op != OADDR {
goto no
goto no
}
- nam.Xoffset += int64(Array_array)
- gdata(&nam, nl, int(Types[Tptr].Width))
+ if !reportOnly {
+ nam.Xoffset += int64(Array_array)
+ gdata(&nam, nl, int(Types[Tptr].Width))
- nam.Xoffset += int64(Array_nel) - int64(Array_array)
- var nod1 Node
- Nodconst(&nod1, Types[TINT], nr.Type.Bound)
- gdata(&nam, &nod1, Widthint)
+ nam.Xoffset += int64(Array_nel) - int64(Array_array)
+ var nod1 Node
+ Nodconst(&nod1, Types[TINT], nr.Type.Bound)
+ gdata(&nam, &nod1, Widthint)
- nam.Xoffset += int64(Array_cap) - int64(Array_nel)
- gdata(&nam, &nod1, Widthint)
+ nam.Xoffset += int64(Array_cap) - int64(Array_nel)
+ gdata(&nam, &nod1, Widthint)
+ }
return true
}
TPTR64,
TFLOAT32,
TFLOAT64:
- gdata(&nam, nr, int(nr.Type.Width))
+ if !reportOnly {
+ gdata(&nam, nr, int(nr.Type.Width))
+ }
case TCOMPLEX64, TCOMPLEX128:
- gdatacomplex(&nam, nr.Val().U.(*Mpcplx))
+ if !reportOnly {
+ gdatacomplex(&nam, nr.Val().U.(*Mpcplx))
+ }
case TSTRING:
- gdatastring(&nam, nr.Val().U.(string))
+ if !reportOnly {
+ gdatastring(&nam, nr.Val().U.(string))
+ }
}
return true
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package gc
+
+import (
+ "bytes"
+ "fmt"
+ "html"
+ "math"
+ "os"
+ "strings"
+
+ "cmd/compile/internal/ssa"
+ "cmd/internal/obj"
+ "cmd/internal/obj/x86"
+)
+
+// Smallest possible faulting page at address zero.
+const minZeroPage = 4096
+
+var ssaConfig *ssa.Config
+var ssaExp ssaExport
+
+func initssa() *ssa.Config {
+ ssaExp.unimplemented = false
+ ssaExp.mustImplement = true
+ if ssaConfig == nil {
+ ssaConfig = ssa.NewConfig(Thearch.Thestring, &ssaExp, Ctxt, Debug['N'] == 0)
+ }
+ return ssaConfig
+}
+
+func shouldssa(fn *Node) bool {
+ if Thearch.Thestring != "amd64" {
+ return false
+ }
+
+	// Environment variable control of SSA code generation.
+ // 1. IF GOSSAFUNC == current function name THEN
+ // compile this function with SSA and log output to ssa.html
+
+ // 2. IF GOSSAHASH == "" THEN
+ // compile this function (and everything else) with SSA
+
+ // 3. IF GOSSAHASH == "n" or "N"
+ // IF GOSSAPKG == current package name THEN
+ // compile this function (and everything in this package) with SSA
+ // ELSE
+ // use the old back end for this function.
+ // This is for compatibility with existing test harness and should go away.
+
+ // 4. IF GOSSAHASH is a suffix of the binary-rendered SHA1 hash of the function name THEN
+ // compile this function with SSA
+ // ELSE
+ // compile this function with the old back end.
+
+ // Plan is for 3 to be removed when the tests are revised.
+ // SSA is now default, and is disabled by setting
+ // GOSSAHASH to n or N, or selectively with strings of
+ // 0 and 1.
+
+ name := fn.Func.Nname.Sym.Name
+
+ funcname := os.Getenv("GOSSAFUNC")
+ if funcname != "" {
+ // If GOSSAFUNC is set, compile only that function.
+ return name == funcname
+ }
+
+ pkg := os.Getenv("GOSSAPKG")
+ if pkg != "" {
+ // If GOSSAPKG is set, compile only that package.
+ return localpkg.Name == pkg
+ }
+
+ return initssa().DebugHashMatch("GOSSAHASH", name)
+}
+
+// buildssa builds an SSA function.
+func buildssa(fn *Node) *ssa.Func {
+ name := fn.Func.Nname.Sym.Name
+ printssa := name == os.Getenv("GOSSAFUNC")
+ if printssa {
+ fmt.Println("generating SSA for", name)
+ dumpslice("buildssa-enter", fn.Func.Enter.Slice())
+ dumpslice("buildssa-body", fn.Nbody.Slice())
+ dumpslice("buildssa-exit", fn.Func.Exit.Slice())
+ }
+
+ var s state
+ s.pushLine(fn.Lineno)
+ defer s.popLine()
+
+ if fn.Func.Pragma&CgoUnsafeArgs != 0 {
+ s.cgoUnsafeArgs = true
+ }
+ // TODO(khr): build config just once at the start of the compiler binary
+
+ ssaExp.log = printssa
+
+ s.config = initssa()
+ s.f = s.config.NewFunc()
+ s.f.Name = name
+ s.exitCode = fn.Func.Exit
+ s.panics = map[funcLine]*ssa.Block{}
+
+ if name == os.Getenv("GOSSAFUNC") {
+ // TODO: tempfile? it is handy to have the location
+ // of this file be stable, so you can just reload in the browser.
+ s.config.HTML = ssa.NewHTMLWriter("ssa.html", s.config, name)
+ // TODO: generate and print a mapping from nodes to values and blocks
+ }
+ defer func() {
+ if !printssa {
+ s.config.HTML.Close()
+ }
+ }()
+
+ // Allocate starting block
+ s.f.Entry = s.f.NewBlock(ssa.BlockPlain)
+
+ // Allocate starting values
+ s.labels = map[string]*ssaLabel{}
+ s.labeledNodes = map[*Node]*ssaLabel{}
+ s.startmem = s.entryNewValue0(ssa.OpInitMem, ssa.TypeMem)
+ s.sp = s.entryNewValue0(ssa.OpSP, Types[TUINTPTR]) // TODO: use generic pointer type (unsafe.Pointer?) instead
+ s.sb = s.entryNewValue0(ssa.OpSB, Types[TUINTPTR])
+
+ s.startBlock(s.f.Entry)
+ s.vars[&memVar] = s.startmem
+
+ s.varsyms = map[*Node]interface{}{}
+
+ // Generate addresses of local declarations
+ s.decladdrs = map[*Node]*ssa.Value{}
+ for _, n := range fn.Func.Dcl {
+ switch n.Class {
+ case PPARAM, PPARAMOUT:
+ aux := s.lookupSymbol(n, &ssa.ArgSymbol{Typ: n.Type, Node: n})
+ s.decladdrs[n] = s.entryNewValue1A(ssa.OpAddr, Ptrto(n.Type), aux, s.sp)
+ if n.Class == PPARAMOUT && s.canSSA(n) {
+ // Save ssa-able PPARAMOUT variables so we can
+ // store them back to the stack at the end of
+ // the function.
+ s.returns = append(s.returns, n)
+ }
+ case PAUTO | PHEAP:
+ // TODO this looks wrong for PAUTO|PHEAP, no vardef, but also no definition
+ aux := s.lookupSymbol(n, &ssa.AutoSymbol{Typ: n.Type, Node: n})
+ s.decladdrs[n] = s.entryNewValue1A(ssa.OpAddr, Ptrto(n.Type), aux, s.sp)
+ case PPARAM | PHEAP, PPARAMOUT | PHEAP:
+ // This ends up wrong, have to do it at the PARAM node instead.
+ case PAUTO:
+ // processed at each use, to prevent Addr coming
+ // before the decl.
+ case PFUNC:
+ // local function - already handled by frontend
+ default:
+ str := ""
+ if n.Class&PHEAP != 0 {
+ str = ",heap"
+ }
+ s.Unimplementedf("local variable with class %s%s unimplemented", classnames[n.Class&^PHEAP], str)
+ }
+ }
+
+ // Convert the AST-based IR to the SSA-based IR
+ s.stmts(fn.Func.Enter)
+ s.stmts(fn.Nbody)
+
+ // fallthrough to exit
+ if s.curBlock != nil {
+ s.stmts(s.exitCode)
+ m := s.mem()
+ b := s.endBlock()
+ b.Kind = ssa.BlockRet
+ b.Control = m
+ }
+
+ // Check that we used all labels
+ for name, lab := range s.labels {
+ if !lab.used() && !lab.reported {
+ yyerrorl(int(lab.defNode.Lineno), "label %v defined and not used", name)
+ lab.reported = true
+ }
+ if lab.used() && !lab.defined() && !lab.reported {
+ yyerrorl(int(lab.useNode.Lineno), "label %v not defined", name)
+ lab.reported = true
+ }
+ }
+
+ // Check any forward gotos. Non-forward gotos have already been checked.
+ for _, n := range s.fwdGotos {
+ lab := s.labels[n.Left.Sym.Name]
+		// If the label is undefined, we have already printed an error.
+ if lab.defined() {
+ s.checkgoto(n, lab.defNode)
+ }
+ }
+
+ if nerrors > 0 {
+ s.f.Free()
+ return nil
+ }
+
+ // Link up variable uses to variable definitions
+ s.linkForwardReferences()
+
+	// Don't carry a reference to this around longer than necessary.
+ s.exitCode = Nodes{}
+
+ // Main call to ssa package to compile function
+ ssa.Compile(s.f)
+
+ return s.f
+}
+
+type state struct {
+ // configuration (arch) information
+ config *ssa.Config
+
+ // function we're building
+ f *ssa.Func
+
+ // labels and labeled control flow nodes (OFOR, OSWITCH, OSELECT) in f
+ labels map[string]*ssaLabel
+ labeledNodes map[*Node]*ssaLabel
+
+ // gotos that jump forward; required for deferred checkgoto calls
+ fwdGotos []*Node
+ // Code that must precede any return
+ // (e.g., copying value of heap-escaped paramout back to true paramout)
+ exitCode Nodes
+
+ // unlabeled break and continue statement tracking
+ breakTo *ssa.Block // current target for plain break statement
+ continueTo *ssa.Block // current target for plain continue statement
+
+ // current location where we're interpreting the AST
+ curBlock *ssa.Block
+
+ // variable assignments in the current block (map from variable symbol to ssa value)
+ // *Node is the unique identifier (an ONAME Node) for the variable.
+ vars map[*Node]*ssa.Value
+
+ // all defined variables at the end of each block. Indexed by block ID.
+ defvars []map[*Node]*ssa.Value
+
+ // addresses of PPARAM and PPARAMOUT variables.
+ decladdrs map[*Node]*ssa.Value
+
+ // symbols for PEXTERN, PAUTO and PPARAMOUT variables so they can be reused.
+ varsyms map[*Node]interface{}
+
+ // starting values. Memory, stack pointer, and globals pointer
+ startmem *ssa.Value
+ sp *ssa.Value
+ sb *ssa.Value
+
+	// line number stack. The current line number is the top of the stack.
+ line []int32
+
+ // list of panic calls by function name and line number.
+ // Used to deduplicate panic calls.
+ panics map[funcLine]*ssa.Block
+
+ // list of FwdRef values.
+ fwdRefs []*ssa.Value
+
+ // list of PPARAMOUT (return) variables. Does not include PPARAM|PHEAP vars.
+ returns []*Node
+
+ cgoUnsafeArgs bool
+}
+
+type funcLine struct {
+ f *Node
+ line int32
+}
+
+type ssaLabel struct {
+ target *ssa.Block // block identified by this label
+ breakTarget *ssa.Block // block to break to in control flow node identified by this label
+ continueTarget *ssa.Block // block to continue to in control flow node identified by this label
+ defNode *Node // label definition Node (OLABEL)
+ // Label use Node (OGOTO, OBREAK, OCONTINUE).
+ // Used only for error detection and reporting.
+ // There might be multiple uses, but we only need to track one.
+ useNode *Node
+ reported bool // reported indicates whether an error has already been reported for this label
+}
+
+// defined reports whether the label has a definition (OLABEL node).
+func (l *ssaLabel) defined() bool { return l.defNode != nil }
+
+// used reports whether the label has a use (OGOTO, OBREAK, or OCONTINUE node).
+func (l *ssaLabel) used() bool { return l.useNode != nil }
+
+// label returns the label associated with sym, creating it if necessary.
+func (s *state) label(sym *Sym) *ssaLabel {
+ lab := s.labels[sym.Name]
+ if lab == nil {
+ lab = new(ssaLabel)
+ s.labels[sym.Name] = lab
+ }
+ return lab
+}
+
+func (s *state) Logf(msg string, args ...interface{}) { s.config.Logf(msg, args...) }
+func (s *state) Log() bool { return s.config.Log() }
+func (s *state) Fatalf(msg string, args ...interface{}) { s.config.Fatalf(s.peekLine(), msg, args...) }
+func (s *state) Unimplementedf(msg string, args ...interface{}) {
+ s.config.Unimplementedf(s.peekLine(), msg, args...)
+}
+func (s *state) Warnl(line int, msg string, args ...interface{}) { s.config.Warnl(line, msg, args...) }
+func (s *state) Debug_checknil() bool { return s.config.Debug_checknil() }
+
+var (
+ // dummy node for the memory variable
+ memVar = Node{Op: ONAME, Class: Pxxx, Sym: &Sym{Name: "mem"}}
+
+ // dummy nodes for temporary variables
+ ptrVar = Node{Op: ONAME, Class: Pxxx, Sym: &Sym{Name: "ptr"}}
+ capVar = Node{Op: ONAME, Class: Pxxx, Sym: &Sym{Name: "cap"}}
+ typVar = Node{Op: ONAME, Class: Pxxx, Sym: &Sym{Name: "typ"}}
+ idataVar = Node{Op: ONAME, Class: Pxxx, Sym: &Sym{Name: "idata"}}
+ okVar = Node{Op: ONAME, Class: Pxxx, Sym: &Sym{Name: "ok"}}
+)
+
+// startBlock sets the current block we're generating code in to b.
+func (s *state) startBlock(b *ssa.Block) {
+ if s.curBlock != nil {
+ s.Fatalf("starting block %v when block %v has not ended", b, s.curBlock)
+ }
+ s.curBlock = b
+ s.vars = map[*Node]*ssa.Value{}
+}
+
+// endBlock marks the end of generating code for the current block.
+// Returns the (former) current block. Returns nil if there is no current
+// block, i.e. if no code flows to the current execution point.
+func (s *state) endBlock() *ssa.Block {
+ b := s.curBlock
+ if b == nil {
+ return nil
+ }
+ for len(s.defvars) <= int(b.ID) {
+ s.defvars = append(s.defvars, nil)
+ }
+ s.defvars[b.ID] = s.vars
+ s.curBlock = nil
+ s.vars = nil
+ b.Line = s.peekLine()
+ return b
+}
+
+// pushLine pushes a line number on the line number stack.
+func (s *state) pushLine(line int32) {
+ s.line = append(s.line, line)
+}
+
+// popLine pops the top of the line number stack.
+func (s *state) popLine() {
+ s.line = s.line[:len(s.line)-1]
+}
+
+// peekLine peeks at the top of the line number stack.
+func (s *state) peekLine() int32 {
+ return s.line[len(s.line)-1]
+}
+
+func (s *state) Error(msg string, args ...interface{}) {
+ yyerrorl(int(s.peekLine()), msg, args...)
+}
+
+// newValue0 adds a new value with no arguments to the current block.
+func (s *state) newValue0(op ssa.Op, t ssa.Type) *ssa.Value {
+ return s.curBlock.NewValue0(s.peekLine(), op, t)
+}
+
+// newValue0A adds a new value with no arguments and an aux value to the current block.
+func (s *state) newValue0A(op ssa.Op, t ssa.Type, aux interface{}) *ssa.Value {
+ return s.curBlock.NewValue0A(s.peekLine(), op, t, aux)
+}
+
+// newValue0I adds a new value with no arguments and an auxint value to the current block.
+func (s *state) newValue0I(op ssa.Op, t ssa.Type, auxint int64) *ssa.Value {
+ return s.curBlock.NewValue0I(s.peekLine(), op, t, auxint)
+}
+
+// newValue1 adds a new value with one argument to the current block.
+func (s *state) newValue1(op ssa.Op, t ssa.Type, arg *ssa.Value) *ssa.Value {
+ return s.curBlock.NewValue1(s.peekLine(), op, t, arg)
+}
+
+// newValue1A adds a new value with one argument and an aux value to the current block.
+func (s *state) newValue1A(op ssa.Op, t ssa.Type, aux interface{}, arg *ssa.Value) *ssa.Value {
+ return s.curBlock.NewValue1A(s.peekLine(), op, t, aux, arg)
+}
+
+// newValue1I adds a new value with one argument and an auxint value to the current block.
+func (s *state) newValue1I(op ssa.Op, t ssa.Type, aux int64, arg *ssa.Value) *ssa.Value {
+ return s.curBlock.NewValue1I(s.peekLine(), op, t, aux, arg)
+}
+
+// newValue2 adds a new value with two arguments to the current block.
+func (s *state) newValue2(op ssa.Op, t ssa.Type, arg0, arg1 *ssa.Value) *ssa.Value {
+ return s.curBlock.NewValue2(s.peekLine(), op, t, arg0, arg1)
+}
+
+// newValue2I adds a new value with two arguments and an auxint value to the current block.
+func (s *state) newValue2I(op ssa.Op, t ssa.Type, aux int64, arg0, arg1 *ssa.Value) *ssa.Value {
+ return s.curBlock.NewValue2I(s.peekLine(), op, t, aux, arg0, arg1)
+}
+
+// newValue3 adds a new value with three arguments to the current block.
+func (s *state) newValue3(op ssa.Op, t ssa.Type, arg0, arg1, arg2 *ssa.Value) *ssa.Value {
+ return s.curBlock.NewValue3(s.peekLine(), op, t, arg0, arg1, arg2)
+}
+
+// newValue3I adds a new value with three arguments and an auxint value to the current block.
+func (s *state) newValue3I(op ssa.Op, t ssa.Type, aux int64, arg0, arg1, arg2 *ssa.Value) *ssa.Value {
+ return s.curBlock.NewValue3I(s.peekLine(), op, t, aux, arg0, arg1, arg2)
+}
+
+// entryNewValue0 adds a new value with no arguments to the entry block.
+func (s *state) entryNewValue0(op ssa.Op, t ssa.Type) *ssa.Value {
+ return s.f.Entry.NewValue0(s.peekLine(), op, t)
+}
+
+// entryNewValue0A adds a new value with no arguments and an aux value to the entry block.
+func (s *state) entryNewValue0A(op ssa.Op, t ssa.Type, aux interface{}) *ssa.Value {
+ return s.f.Entry.NewValue0A(s.peekLine(), op, t, aux)
+}
+
+// entryNewValue0I adds a new value with no arguments and an auxint value to the entry block.
+func (s *state) entryNewValue0I(op ssa.Op, t ssa.Type, auxint int64) *ssa.Value {
+ return s.f.Entry.NewValue0I(s.peekLine(), op, t, auxint)
+}
+
+// entryNewValue1 adds a new value with one argument to the entry block.
+func (s *state) entryNewValue1(op ssa.Op, t ssa.Type, arg *ssa.Value) *ssa.Value {
+ return s.f.Entry.NewValue1(s.peekLine(), op, t, arg)
+}
+
+// entryNewValue1I adds a new value with one argument and an auxint value to the entry block.
+func (s *state) entryNewValue1I(op ssa.Op, t ssa.Type, auxint int64, arg *ssa.Value) *ssa.Value {
+ return s.f.Entry.NewValue1I(s.peekLine(), op, t, auxint, arg)
+}
+
+// entryNewValue1A adds a new value with one argument and an aux value to the entry block.
+func (s *state) entryNewValue1A(op ssa.Op, t ssa.Type, aux interface{}, arg *ssa.Value) *ssa.Value {
+ return s.f.Entry.NewValue1A(s.peekLine(), op, t, aux, arg)
+}
+
+// entryNewValue2 adds a new value with two arguments to the entry block.
+func (s *state) entryNewValue2(op ssa.Op, t ssa.Type, arg0, arg1 *ssa.Value) *ssa.Value {
+ return s.f.Entry.NewValue2(s.peekLine(), op, t, arg0, arg1)
+}
+
+// const* routines add a new const value to the entry block.
+func (s *state) constBool(c bool) *ssa.Value {
+ return s.f.ConstBool(s.peekLine(), Types[TBOOL], c)
+}
+func (s *state) constInt8(t ssa.Type, c int8) *ssa.Value {
+ return s.f.ConstInt8(s.peekLine(), t, c)
+}
+func (s *state) constInt16(t ssa.Type, c int16) *ssa.Value {
+ return s.f.ConstInt16(s.peekLine(), t, c)
+}
+func (s *state) constInt32(t ssa.Type, c int32) *ssa.Value {
+ return s.f.ConstInt32(s.peekLine(), t, c)
+}
+func (s *state) constInt64(t ssa.Type, c int64) *ssa.Value {
+ return s.f.ConstInt64(s.peekLine(), t, c)
+}
+func (s *state) constFloat32(t ssa.Type, c float64) *ssa.Value {
+ return s.f.ConstFloat32(s.peekLine(), t, c)
+}
+func (s *state) constFloat64(t ssa.Type, c float64) *ssa.Value {
+ return s.f.ConstFloat64(s.peekLine(), t, c)
+}
+func (s *state) constInt(t ssa.Type, c int64) *ssa.Value {
+ if s.config.IntSize == 8 {
+ return s.constInt64(t, c)
+ }
+ if int64(int32(c)) != c {
+ s.Fatalf("integer constant too big %d", c)
+ }
+ return s.constInt32(t, int32(c))
+}
+
+func (s *state) stmts(a Nodes) {
+ for _, x := range a.Slice() {
+ s.stmt(x)
+ }
+}
+
+// stmtList converts the statement list l to SSA and adds each statement to s.
+func (s *state) stmtList(l *NodeList) {
+ for ; l != nil; l = l.Next {
+ s.stmt(l.N)
+ }
+}
+
+// stmt converts the statement n to SSA and adds it to s.
+func (s *state) stmt(n *Node) {
+ s.pushLine(n.Lineno)
+ defer s.popLine()
+
+ // If s.curBlock is nil, then we're about to generate dead code.
+ // We can't just short-circuit here, though,
+ // because we check labels and gotos as part of SSA generation.
+ // Provide a block for the dead code so that we don't have
+ // to add special cases everywhere else.
+ if s.curBlock == nil {
+ dead := s.f.NewBlock(ssa.BlockPlain)
+ s.startBlock(dead)
+ }
+
+ s.stmtList(n.Ninit)
+ switch n.Op {
+
+ case OBLOCK:
+ s.stmtList(n.List)
+
+ // No-ops
+ case OEMPTY, ODCLCONST, ODCLTYPE, OFALL:
+
+ // Expression statements
+ case OCALLFUNC, OCALLMETH, OCALLINTER:
+ s.call(n, callNormal)
+ if n.Op == OCALLFUNC && n.Left.Op == ONAME && n.Left.Class == PFUNC &&
+ (compiling_runtime != 0 && n.Left.Sym.Name == "throw" ||
+ n.Left.Sym.Pkg == Runtimepkg && (n.Left.Sym.Name == "gopanic" || n.Left.Sym.Name == "selectgo" || n.Left.Sym.Name == "block")) {
+ m := s.mem()
+ b := s.endBlock()
+ b.Kind = ssa.BlockExit
+ b.Control = m
+ // TODO: never rewrite OPANIC to OCALLFUNC in the
+ // first place. Need to wait until all backends
+ // go through SSA.
+ }
+ case ODEFER:
+ s.call(n.Left, callDefer)
+ case OPROC:
+ s.call(n.Left, callGo)
+
+ case OAS2DOTTYPE:
+ res, resok := s.dottype(n.Rlist.N, true)
+ s.assign(n.List.N, res, needwritebarrier(n.List.N, n.Rlist.N), false, n.Lineno)
+ s.assign(n.List.Next.N, resok, false, false, n.Lineno)
+ return
+
+ case ODCL:
+ if n.Left.Class&PHEAP == 0 {
+ return
+ }
+ if compiling_runtime != 0 {
+ Fatalf("%v escapes to heap, not allowed in runtime.", n)
+ }
+
+ // TODO: the old pass hides the details of PHEAP
+ // variables behind ONAME nodes. Figure out if it's better
+ // to rewrite the tree and make the heapaddr construct explicit
+ // or to keep this detail hidden behind the scenes.
+ palloc := prealloc[n.Left]
+ if palloc == nil {
+ palloc = callnew(n.Left.Type)
+ prealloc[n.Left] = palloc
+ }
+ r := s.expr(palloc)
+ s.assign(n.Left.Name.Heapaddr, r, false, false, n.Lineno)
+
+ case OLABEL:
+ sym := n.Left.Sym
+
+ if isblanksym(sym) {
+ // Empty identifier is valid but useless.
+ // See issues 11589, 11593.
+ return
+ }
+
+ lab := s.label(sym)
+
+ // Associate label with its control flow node, if any
+ if ctl := n.Name.Defn; ctl != nil {
+ switch ctl.Op {
+ case OFOR, OSWITCH, OSELECT:
+ s.labeledNodes[ctl] = lab
+ }
+ }
+
+ if !lab.defined() {
+ lab.defNode = n
+ } else {
+ s.Error("label %v already defined at %v", sym, Ctxt.Line(int(lab.defNode.Lineno)))
+ lab.reported = true
+ }
+ // The label might already have a target block via a goto.
+ if lab.target == nil {
+ lab.target = s.f.NewBlock(ssa.BlockPlain)
+ }
+
+ // go to that label (we pretend "label:" is preceded by "goto label")
+ b := s.endBlock()
+ b.AddEdgeTo(lab.target)
+ s.startBlock(lab.target)
+
+ case OGOTO:
+ sym := n.Left.Sym
+
+ lab := s.label(sym)
+ if lab.target == nil {
+ lab.target = s.f.NewBlock(ssa.BlockPlain)
+ }
+ if !lab.used() {
+ lab.useNode = n
+ }
+
+ if lab.defined() {
+ s.checkgoto(n, lab.defNode)
+ } else {
+ s.fwdGotos = append(s.fwdGotos, n)
+ }
+
+ b := s.endBlock()
+ b.AddEdgeTo(lab.target)
+
+ case OAS, OASWB:
+ // Check whether we can generate static data rather than code.
+ // If so, ignore n and defer data generation until codegen.
+ // Failure to do this causes writes to readonly symbols.
+ if gen_as_init(n, true) {
+ var data []*Node
+ if s.f.StaticData != nil {
+ data = s.f.StaticData.([]*Node)
+ }
+ s.f.StaticData = append(data, n)
+ return
+ }
+
+ var t *Type
+ if n.Right != nil {
+ t = n.Right.Type
+ } else {
+ t = n.Left.Type
+ }
+
+ // Evaluate RHS.
+ rhs := n.Right
+ if rhs != nil && (rhs.Op == OSTRUCTLIT || rhs.Op == OARRAYLIT) {
+ // All literals with nonzero fields have already been
+ // rewritten during walk. Any that remain are just T{}
+ // or equivalents. Use the zero value.
+ if !iszero(rhs) {
+ Fatalf("literal with nonzero value in SSA: %v", rhs)
+ }
+ rhs = nil
+ }
+ var r *ssa.Value
+ needwb := n.Op == OASWB && rhs != nil
+ deref := !canSSAType(t)
+ if deref {
+ if rhs == nil {
+ r = nil // Signal assign to use OpZero.
+ } else {
+ r = s.addr(rhs, false)
+ }
+ } else {
+ if rhs == nil {
+ r = s.zeroVal(t)
+ } else {
+ r = s.expr(rhs)
+ }
+ }
+ if rhs != nil && rhs.Op == OAPPEND {
+ // Yuck! The frontend gets rid of the write barrier, but we need it!
+ // At least, we need it in the case where growslice is called.
+ // TODO: Do the write barrier on just the growslice branch.
+ // TODO: just add a ptr graying to the end of growslice?
+ // TODO: check whether we need to do this for ODOTTYPE and ORECV also.
+ // They get similar wb-removal treatment in walk.go:OAS.
+ needwb = true
+ }
+
+ s.assign(n.Left, r, needwb, deref, n.Lineno)
+
+ case OIF:
+ bThen := s.f.NewBlock(ssa.BlockPlain)
+ bEnd := s.f.NewBlock(ssa.BlockPlain)
+ var bElse *ssa.Block
+ if n.Rlist != nil {
+ bElse = s.f.NewBlock(ssa.BlockPlain)
+ s.condBranch(n.Left, bThen, bElse, n.Likely)
+ } else {
+ s.condBranch(n.Left, bThen, bEnd, n.Likely)
+ }
+
+ s.startBlock(bThen)
+ s.stmts(n.Nbody)
+ if b := s.endBlock(); b != nil {
+ b.AddEdgeTo(bEnd)
+ }
+
+ if n.Rlist != nil {
+ s.startBlock(bElse)
+ s.stmtList(n.Rlist)
+ if b := s.endBlock(); b != nil {
+ b.AddEdgeTo(bEnd)
+ }
+ }
+ s.startBlock(bEnd)
+
+ case ORETURN:
+ s.stmtList(n.List)
+ s.exit()
+ case ORETJMP:
+ s.stmtList(n.List)
+ b := s.exit()
+ b.Kind = ssa.BlockRetJmp // override BlockRet
+ b.Aux = n.Left.Sym
+
+ case OCONTINUE, OBREAK:
+ var op string
+ var to *ssa.Block
+ switch n.Op {
+ case OCONTINUE:
+ op = "continue"
+ to = s.continueTo
+ case OBREAK:
+ op = "break"
+ to = s.breakTo
+ }
+ if n.Left == nil {
+ // plain break/continue
+ if to == nil {
+ s.Error("%s is not in a loop", op)
+ return
+ }
+ // nothing to do; "to" is already the correct target
+ } else {
+ // labeled break/continue; look up the target
+ sym := n.Left.Sym
+ lab := s.label(sym)
+ if !lab.used() {
+ lab.useNode = n.Left
+ }
+ if !lab.defined() {
+ s.Error("%s label not defined: %v", op, sym)
+ lab.reported = true
+ return
+ }
+ switch n.Op {
+ case OCONTINUE:
+ to = lab.continueTarget
+ case OBREAK:
+ to = lab.breakTarget
+ }
+ if to == nil {
+ // Valid label but not usable with a break/continue here, e.g.:
+ // for {
+ // continue abc
+ // }
+ // abc:
+ // for {}
+ s.Error("invalid %s label %v", op, sym)
+ lab.reported = true
+ return
+ }
+ }
+
+ b := s.endBlock()
+ b.AddEdgeTo(to)
+
+ case OFOR:
+ // OFOR: for Ninit; Left; Right { Nbody }
+ bCond := s.f.NewBlock(ssa.BlockPlain)
+ bBody := s.f.NewBlock(ssa.BlockPlain)
+ bIncr := s.f.NewBlock(ssa.BlockPlain)
+ bEnd := s.f.NewBlock(ssa.BlockPlain)
+
+ // first, jump to condition test
+ b := s.endBlock()
+ b.AddEdgeTo(bCond)
+
+ // generate code to test condition
+ s.startBlock(bCond)
+ if n.Left != nil {
+ s.condBranch(n.Left, bBody, bEnd, 1)
+ } else {
+ b := s.endBlock()
+ b.Kind = ssa.BlockPlain
+ b.AddEdgeTo(bBody)
+ }
+
+ // set up for continue/break in body
+ prevContinue := s.continueTo
+ prevBreak := s.breakTo
+ s.continueTo = bIncr
+ s.breakTo = bEnd
+ lab := s.labeledNodes[n]
+ if lab != nil {
+ // labeled for loop
+ lab.continueTarget = bIncr
+ lab.breakTarget = bEnd
+ }
+
+ // generate body
+ s.startBlock(bBody)
+ s.stmts(n.Nbody)
+
+ // tear down continue/break
+ s.continueTo = prevContinue
+ s.breakTo = prevBreak
+ if lab != nil {
+ lab.continueTarget = nil
+ lab.breakTarget = nil
+ }
+
+ // done with body, goto incr
+ if b := s.endBlock(); b != nil {
+ b.AddEdgeTo(bIncr)
+ }
+
+ // generate incr
+ s.startBlock(bIncr)
+ if n.Right != nil {
+ s.stmt(n.Right)
+ }
+ if b := s.endBlock(); b != nil {
+ b.AddEdgeTo(bCond)
+ }
+ s.startBlock(bEnd)
+
+ case OSWITCH, OSELECT:
+ // These have been mostly rewritten by the front end into their Nbody fields.
+ // Our main task is to correctly hook up any break statements.
+ bEnd := s.f.NewBlock(ssa.BlockPlain)
+
+ prevBreak := s.breakTo
+ s.breakTo = bEnd
+ lab := s.labeledNodes[n]
+ if lab != nil {
+ // labeled
+ lab.breakTarget = bEnd
+ }
+
+ // generate body code
+ s.stmts(n.Nbody)
+
+ s.breakTo = prevBreak
+ if lab != nil {
+ lab.breakTarget = nil
+ }
+
+ // OSWITCH never falls through (s.curBlock == nil here).
+ // OSELECT does not fall through if we're calling selectgo.
+ // OSELECT does fall through if we're calling selectnb{send,recv}[2].
+ // In those latter cases, go to the code after the select.
+ if b := s.endBlock(); b != nil {
+ b.AddEdgeTo(bEnd)
+ }
+ s.startBlock(bEnd)
+
+ case OVARKILL:
+ // Insert a varkill op to record that a variable is no longer live.
+ // We only care about liveness info at call sites, so putting the
+ // varkill in the store chain is enough to keep it correctly ordered
+ // with respect to call ops.
+ if !s.canSSA(n.Left) {
+ s.vars[&memVar] = s.newValue1A(ssa.OpVarKill, ssa.TypeMem, n.Left, s.mem())
+ }
+
+ case OVARLIVE:
+ // Insert a varlive op to record that a variable is still live.
+ if !n.Left.Addrtaken {
+		s.Fatalf("VARLIVE variable %v must have Addrtaken set", n.Left)
+ }
+ s.vars[&memVar] = s.newValue1A(ssa.OpVarLive, ssa.TypeMem, n.Left, s.mem())
+
+ case OCHECKNIL:
+ p := s.expr(n.Left)
+ s.nilCheck(p)
+
+ default:
+ s.Unimplementedf("unhandled stmt %s", opnames[n.Op])
+ }
+}
+
+// exit processes any code that needs to be generated just before returning.
+// It returns a BlockRet block that ends the control flow. Its control value
+// will be set to the final memory state.
+func (s *state) exit() *ssa.Block {
+ // Run exit code. Typically, this code copies heap-allocated PPARAMOUT
+ // variables back to the stack.
+ s.stmts(s.exitCode)
+
+ // Store SSAable PPARAMOUT variables back to stack locations.
+ for _, n := range s.returns {
+ aux := &ssa.ArgSymbol{Typ: n.Type, Node: n}
+ addr := s.newValue1A(ssa.OpAddr, Ptrto(n.Type), aux, s.sp)
+ val := s.variable(n, n.Type)
+ s.vars[&memVar] = s.newValue1A(ssa.OpVarDef, ssa.TypeMem, n, s.mem())
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, n.Type.Size(), addr, val, s.mem())
+ // TODO: if val is ever spilled, we'd like to use the
+ // PPARAMOUT slot for spilling it. That won't happen
+ // currently.
+ }
+
+ // Do actual return.
+ m := s.mem()
+ b := s.endBlock()
+ b.Kind = ssa.BlockRet
+ b.Control = m
+ return b
+}
+
+type opAndType struct {
+ op Op
+ etype EType
+}
+
+var opToSSA = map[opAndType]ssa.Op{
+ opAndType{OADD, TINT8}: ssa.OpAdd8,
+ opAndType{OADD, TUINT8}: ssa.OpAdd8,
+ opAndType{OADD, TINT16}: ssa.OpAdd16,
+ opAndType{OADD, TUINT16}: ssa.OpAdd16,
+ opAndType{OADD, TINT32}: ssa.OpAdd32,
+ opAndType{OADD, TUINT32}: ssa.OpAdd32,
+ opAndType{OADD, TPTR32}: ssa.OpAdd32,
+ opAndType{OADD, TINT64}: ssa.OpAdd64,
+ opAndType{OADD, TUINT64}: ssa.OpAdd64,
+ opAndType{OADD, TPTR64}: ssa.OpAdd64,
+ opAndType{OADD, TFLOAT32}: ssa.OpAdd32F,
+ opAndType{OADD, TFLOAT64}: ssa.OpAdd64F,
+
+ opAndType{OSUB, TINT8}: ssa.OpSub8,
+ opAndType{OSUB, TUINT8}: ssa.OpSub8,
+ opAndType{OSUB, TINT16}: ssa.OpSub16,
+ opAndType{OSUB, TUINT16}: ssa.OpSub16,
+ opAndType{OSUB, TINT32}: ssa.OpSub32,
+ opAndType{OSUB, TUINT32}: ssa.OpSub32,
+ opAndType{OSUB, TINT64}: ssa.OpSub64,
+ opAndType{OSUB, TUINT64}: ssa.OpSub64,
+ opAndType{OSUB, TFLOAT32}: ssa.OpSub32F,
+ opAndType{OSUB, TFLOAT64}: ssa.OpSub64F,
+
+ opAndType{ONOT, TBOOL}: ssa.OpNot,
+
+ opAndType{OMINUS, TINT8}: ssa.OpNeg8,
+ opAndType{OMINUS, TUINT8}: ssa.OpNeg8,
+ opAndType{OMINUS, TINT16}: ssa.OpNeg16,
+ opAndType{OMINUS, TUINT16}: ssa.OpNeg16,
+ opAndType{OMINUS, TINT32}: ssa.OpNeg32,
+ opAndType{OMINUS, TUINT32}: ssa.OpNeg32,
+ opAndType{OMINUS, TINT64}: ssa.OpNeg64,
+ opAndType{OMINUS, TUINT64}: ssa.OpNeg64,
+ opAndType{OMINUS, TFLOAT32}: ssa.OpNeg32F,
+ opAndType{OMINUS, TFLOAT64}: ssa.OpNeg64F,
+
+ opAndType{OCOM, TINT8}: ssa.OpCom8,
+ opAndType{OCOM, TUINT8}: ssa.OpCom8,
+ opAndType{OCOM, TINT16}: ssa.OpCom16,
+ opAndType{OCOM, TUINT16}: ssa.OpCom16,
+ opAndType{OCOM, TINT32}: ssa.OpCom32,
+ opAndType{OCOM, TUINT32}: ssa.OpCom32,
+ opAndType{OCOM, TINT64}: ssa.OpCom64,
+ opAndType{OCOM, TUINT64}: ssa.OpCom64,
+
+ opAndType{OIMAG, TCOMPLEX64}: ssa.OpComplexImag,
+ opAndType{OIMAG, TCOMPLEX128}: ssa.OpComplexImag,
+ opAndType{OREAL, TCOMPLEX64}: ssa.OpComplexReal,
+ opAndType{OREAL, TCOMPLEX128}: ssa.OpComplexReal,
+
+ opAndType{OMUL, TINT8}: ssa.OpMul8,
+ opAndType{OMUL, TUINT8}: ssa.OpMul8,
+ opAndType{OMUL, TINT16}: ssa.OpMul16,
+ opAndType{OMUL, TUINT16}: ssa.OpMul16,
+ opAndType{OMUL, TINT32}: ssa.OpMul32,
+ opAndType{OMUL, TUINT32}: ssa.OpMul32,
+ opAndType{OMUL, TINT64}: ssa.OpMul64,
+ opAndType{OMUL, TUINT64}: ssa.OpMul64,
+ opAndType{OMUL, TFLOAT32}: ssa.OpMul32F,
+ opAndType{OMUL, TFLOAT64}: ssa.OpMul64F,
+
+ opAndType{ODIV, TFLOAT32}: ssa.OpDiv32F,
+ opAndType{ODIV, TFLOAT64}: ssa.OpDiv64F,
+
+ opAndType{OHMUL, TINT8}: ssa.OpHmul8,
+ opAndType{OHMUL, TUINT8}: ssa.OpHmul8u,
+ opAndType{OHMUL, TINT16}: ssa.OpHmul16,
+ opAndType{OHMUL, TUINT16}: ssa.OpHmul16u,
+ opAndType{OHMUL, TINT32}: ssa.OpHmul32,
+ opAndType{OHMUL, TUINT32}: ssa.OpHmul32u,
+
+ opAndType{ODIV, TINT8}: ssa.OpDiv8,
+ opAndType{ODIV, TUINT8}: ssa.OpDiv8u,
+ opAndType{ODIV, TINT16}: ssa.OpDiv16,
+ opAndType{ODIV, TUINT16}: ssa.OpDiv16u,
+ opAndType{ODIV, TINT32}: ssa.OpDiv32,
+ opAndType{ODIV, TUINT32}: ssa.OpDiv32u,
+ opAndType{ODIV, TINT64}: ssa.OpDiv64,
+ opAndType{ODIV, TUINT64}: ssa.OpDiv64u,
+
+ opAndType{OMOD, TINT8}: ssa.OpMod8,
+ opAndType{OMOD, TUINT8}: ssa.OpMod8u,
+ opAndType{OMOD, TINT16}: ssa.OpMod16,
+ opAndType{OMOD, TUINT16}: ssa.OpMod16u,
+ opAndType{OMOD, TINT32}: ssa.OpMod32,
+ opAndType{OMOD, TUINT32}: ssa.OpMod32u,
+ opAndType{OMOD, TINT64}: ssa.OpMod64,
+ opAndType{OMOD, TUINT64}: ssa.OpMod64u,
+
+ opAndType{OAND, TINT8}: ssa.OpAnd8,
+ opAndType{OAND, TUINT8}: ssa.OpAnd8,
+ opAndType{OAND, TINT16}: ssa.OpAnd16,
+ opAndType{OAND, TUINT16}: ssa.OpAnd16,
+ opAndType{OAND, TINT32}: ssa.OpAnd32,
+ opAndType{OAND, TUINT32}: ssa.OpAnd32,
+ opAndType{OAND, TINT64}: ssa.OpAnd64,
+ opAndType{OAND, TUINT64}: ssa.OpAnd64,
+
+ opAndType{OOR, TINT8}: ssa.OpOr8,
+ opAndType{OOR, TUINT8}: ssa.OpOr8,
+ opAndType{OOR, TINT16}: ssa.OpOr16,
+ opAndType{OOR, TUINT16}: ssa.OpOr16,
+ opAndType{OOR, TINT32}: ssa.OpOr32,
+ opAndType{OOR, TUINT32}: ssa.OpOr32,
+ opAndType{OOR, TINT64}: ssa.OpOr64,
+ opAndType{OOR, TUINT64}: ssa.OpOr64,
+
+ opAndType{OXOR, TINT8}: ssa.OpXor8,
+ opAndType{OXOR, TUINT8}: ssa.OpXor8,
+ opAndType{OXOR, TINT16}: ssa.OpXor16,
+ opAndType{OXOR, TUINT16}: ssa.OpXor16,
+ opAndType{OXOR, TINT32}: ssa.OpXor32,
+ opAndType{OXOR, TUINT32}: ssa.OpXor32,
+ opAndType{OXOR, TINT64}: ssa.OpXor64,
+ opAndType{OXOR, TUINT64}: ssa.OpXor64,
+
+ opAndType{OEQ, TBOOL}: ssa.OpEq8,
+ opAndType{OEQ, TINT8}: ssa.OpEq8,
+ opAndType{OEQ, TUINT8}: ssa.OpEq8,
+ opAndType{OEQ, TINT16}: ssa.OpEq16,
+ opAndType{OEQ, TUINT16}: ssa.OpEq16,
+ opAndType{OEQ, TINT32}: ssa.OpEq32,
+ opAndType{OEQ, TUINT32}: ssa.OpEq32,
+ opAndType{OEQ, TINT64}: ssa.OpEq64,
+ opAndType{OEQ, TUINT64}: ssa.OpEq64,
+ opAndType{OEQ, TINTER}: ssa.OpEqInter,
+ opAndType{OEQ, TARRAY}: ssa.OpEqSlice,
+ opAndType{OEQ, TFUNC}: ssa.OpEqPtr,
+ opAndType{OEQ, TMAP}: ssa.OpEqPtr,
+ opAndType{OEQ, TCHAN}: ssa.OpEqPtr,
+ opAndType{OEQ, TPTR64}: ssa.OpEqPtr,
+ opAndType{OEQ, TUINTPTR}: ssa.OpEqPtr,
+ opAndType{OEQ, TUNSAFEPTR}: ssa.OpEqPtr,
+ opAndType{OEQ, TFLOAT64}: ssa.OpEq64F,
+ opAndType{OEQ, TFLOAT32}: ssa.OpEq32F,
+
+ opAndType{ONE, TBOOL}: ssa.OpNeq8,
+ opAndType{ONE, TINT8}: ssa.OpNeq8,
+ opAndType{ONE, TUINT8}: ssa.OpNeq8,
+ opAndType{ONE, TINT16}: ssa.OpNeq16,
+ opAndType{ONE, TUINT16}: ssa.OpNeq16,
+ opAndType{ONE, TINT32}: ssa.OpNeq32,
+ opAndType{ONE, TUINT32}: ssa.OpNeq32,
+ opAndType{ONE, TINT64}: ssa.OpNeq64,
+ opAndType{ONE, TUINT64}: ssa.OpNeq64,
+ opAndType{ONE, TINTER}: ssa.OpNeqInter,
+ opAndType{ONE, TARRAY}: ssa.OpNeqSlice,
+ opAndType{ONE, TFUNC}: ssa.OpNeqPtr,
+ opAndType{ONE, TMAP}: ssa.OpNeqPtr,
+ opAndType{ONE, TCHAN}: ssa.OpNeqPtr,
+ opAndType{ONE, TPTR64}: ssa.OpNeqPtr,
+ opAndType{ONE, TUINTPTR}: ssa.OpNeqPtr,
+ opAndType{ONE, TUNSAFEPTR}: ssa.OpNeqPtr,
+ opAndType{ONE, TFLOAT64}: ssa.OpNeq64F,
+ opAndType{ONE, TFLOAT32}: ssa.OpNeq32F,
+
+ opAndType{OLT, TINT8}: ssa.OpLess8,
+ opAndType{OLT, TUINT8}: ssa.OpLess8U,
+ opAndType{OLT, TINT16}: ssa.OpLess16,
+ opAndType{OLT, TUINT16}: ssa.OpLess16U,
+ opAndType{OLT, TINT32}: ssa.OpLess32,
+ opAndType{OLT, TUINT32}: ssa.OpLess32U,
+ opAndType{OLT, TINT64}: ssa.OpLess64,
+ opAndType{OLT, TUINT64}: ssa.OpLess64U,
+ opAndType{OLT, TFLOAT64}: ssa.OpLess64F,
+ opAndType{OLT, TFLOAT32}: ssa.OpLess32F,
+
+ opAndType{OGT, TINT8}: ssa.OpGreater8,
+ opAndType{OGT, TUINT8}: ssa.OpGreater8U,
+ opAndType{OGT, TINT16}: ssa.OpGreater16,
+ opAndType{OGT, TUINT16}: ssa.OpGreater16U,
+ opAndType{OGT, TINT32}: ssa.OpGreater32,
+ opAndType{OGT, TUINT32}: ssa.OpGreater32U,
+ opAndType{OGT, TINT64}: ssa.OpGreater64,
+ opAndType{OGT, TUINT64}: ssa.OpGreater64U,
+ opAndType{OGT, TFLOAT64}: ssa.OpGreater64F,
+ opAndType{OGT, TFLOAT32}: ssa.OpGreater32F,
+
+ opAndType{OLE, TINT8}: ssa.OpLeq8,
+ opAndType{OLE, TUINT8}: ssa.OpLeq8U,
+ opAndType{OLE, TINT16}: ssa.OpLeq16,
+ opAndType{OLE, TUINT16}: ssa.OpLeq16U,
+ opAndType{OLE, TINT32}: ssa.OpLeq32,
+ opAndType{OLE, TUINT32}: ssa.OpLeq32U,
+ opAndType{OLE, TINT64}: ssa.OpLeq64,
+ opAndType{OLE, TUINT64}: ssa.OpLeq64U,
+ opAndType{OLE, TFLOAT64}: ssa.OpLeq64F,
+ opAndType{OLE, TFLOAT32}: ssa.OpLeq32F,
+
+ opAndType{OGE, TINT8}: ssa.OpGeq8,
+ opAndType{OGE, TUINT8}: ssa.OpGeq8U,
+ opAndType{OGE, TINT16}: ssa.OpGeq16,
+ opAndType{OGE, TUINT16}: ssa.OpGeq16U,
+ opAndType{OGE, TINT32}: ssa.OpGeq32,
+ opAndType{OGE, TUINT32}: ssa.OpGeq32U,
+ opAndType{OGE, TINT64}: ssa.OpGeq64,
+ opAndType{OGE, TUINT64}: ssa.OpGeq64U,
+ opAndType{OGE, TFLOAT64}: ssa.OpGeq64F,
+ opAndType{OGE, TFLOAT32}: ssa.OpGeq32F,
+
+ opAndType{OLROT, TUINT8}: ssa.OpLrot8,
+ opAndType{OLROT, TUINT16}: ssa.OpLrot16,
+ opAndType{OLROT, TUINT32}: ssa.OpLrot32,
+ opAndType{OLROT, TUINT64}: ssa.OpLrot64,
+
+ opAndType{OSQRT, TFLOAT64}: ssa.OpSqrt,
+}
+
+func (s *state) concreteEtype(t *Type) EType {
+ e := t.Etype
+ switch e {
+ default:
+ return e
+ case TINT:
+ if s.config.IntSize == 8 {
+ return TINT64
+ }
+ return TINT32
+ case TUINT:
+ if s.config.IntSize == 8 {
+ return TUINT64
+ }
+ return TUINT32
+ case TUINTPTR:
+ if s.config.PtrSize == 8 {
+ return TUINT64
+ }
+ return TUINT32
+ }
+}
+
+func (s *state) ssaOp(op Op, t *Type) ssa.Op {
+ etype := s.concreteEtype(t)
+ x, ok := opToSSA[opAndType{op, etype}]
+ if !ok {
+ s.Unimplementedf("unhandled binary op %s %s", opnames[op], Econv(etype))
+ }
+ return x
+}
+
+// floatForComplex returns the float type that makes up the
+// components of the given complex type (complex64 -> float32,
+// complex128 -> float64).
+func floatForComplex(t *Type) *Type {
+	if t.Size() == 8 {
+		return Types[TFLOAT32]
+	}
+	return Types[TFLOAT64]
+}
+
+type opAndTwoTypes struct {
+ op Op
+ etype1 EType
+ etype2 EType
+}
+
+type twoTypes struct {
+ etype1 EType
+ etype2 EType
+}
+
+type twoOpsAndType struct {
+ op1 ssa.Op
+ op2 ssa.Op
+ intermediateType EType
+}
+
+var fpConvOpToSSA = map[twoTypes]twoOpsAndType{
+
+ twoTypes{TINT8, TFLOAT32}: twoOpsAndType{ssa.OpSignExt8to32, ssa.OpCvt32to32F, TINT32},
+ twoTypes{TINT16, TFLOAT32}: twoOpsAndType{ssa.OpSignExt16to32, ssa.OpCvt32to32F, TINT32},
+ twoTypes{TINT32, TFLOAT32}: twoOpsAndType{ssa.OpCopy, ssa.OpCvt32to32F, TINT32},
+ twoTypes{TINT64, TFLOAT32}: twoOpsAndType{ssa.OpCopy, ssa.OpCvt64to32F, TINT64},
+
+ twoTypes{TINT8, TFLOAT64}: twoOpsAndType{ssa.OpSignExt8to32, ssa.OpCvt32to64F, TINT32},
+ twoTypes{TINT16, TFLOAT64}: twoOpsAndType{ssa.OpSignExt16to32, ssa.OpCvt32to64F, TINT32},
+ twoTypes{TINT32, TFLOAT64}: twoOpsAndType{ssa.OpCopy, ssa.OpCvt32to64F, TINT32},
+ twoTypes{TINT64, TFLOAT64}: twoOpsAndType{ssa.OpCopy, ssa.OpCvt64to64F, TINT64},
+
+ twoTypes{TFLOAT32, TINT8}: twoOpsAndType{ssa.OpCvt32Fto32, ssa.OpTrunc32to8, TINT32},
+ twoTypes{TFLOAT32, TINT16}: twoOpsAndType{ssa.OpCvt32Fto32, ssa.OpTrunc32to16, TINT32},
+ twoTypes{TFLOAT32, TINT32}: twoOpsAndType{ssa.OpCvt32Fto32, ssa.OpCopy, TINT32},
+ twoTypes{TFLOAT32, TINT64}: twoOpsAndType{ssa.OpCvt32Fto64, ssa.OpCopy, TINT64},
+
+ twoTypes{TFLOAT64, TINT8}: twoOpsAndType{ssa.OpCvt64Fto32, ssa.OpTrunc32to8, TINT32},
+ twoTypes{TFLOAT64, TINT16}: twoOpsAndType{ssa.OpCvt64Fto32, ssa.OpTrunc32to16, TINT32},
+ twoTypes{TFLOAT64, TINT32}: twoOpsAndType{ssa.OpCvt64Fto32, ssa.OpCopy, TINT32},
+ twoTypes{TFLOAT64, TINT64}: twoOpsAndType{ssa.OpCvt64Fto64, ssa.OpCopy, TINT64},
+ // unsigned
+ twoTypes{TUINT8, TFLOAT32}: twoOpsAndType{ssa.OpZeroExt8to32, ssa.OpCvt32to32F, TINT32},
+ twoTypes{TUINT16, TFLOAT32}: twoOpsAndType{ssa.OpZeroExt16to32, ssa.OpCvt32to32F, TINT32},
+ twoTypes{TUINT32, TFLOAT32}: twoOpsAndType{ssa.OpZeroExt32to64, ssa.OpCvt64to32F, TINT64}, // go wide to dodge unsigned
+ twoTypes{TUINT64, TFLOAT32}: twoOpsAndType{ssa.OpCopy, ssa.OpInvalid, TUINT64}, // Cvt64Uto32F, branchy code expansion instead
+
+ twoTypes{TUINT8, TFLOAT64}: twoOpsAndType{ssa.OpZeroExt8to32, ssa.OpCvt32to64F, TINT32},
+ twoTypes{TUINT16, TFLOAT64}: twoOpsAndType{ssa.OpZeroExt16to32, ssa.OpCvt32to64F, TINT32},
+ twoTypes{TUINT32, TFLOAT64}: twoOpsAndType{ssa.OpZeroExt32to64, ssa.OpCvt64to64F, TINT64}, // go wide to dodge unsigned
+ twoTypes{TUINT64, TFLOAT64}: twoOpsAndType{ssa.OpCopy, ssa.OpInvalid, TUINT64}, // Cvt64Uto64F, branchy code expansion instead
+
+ twoTypes{TFLOAT32, TUINT8}: twoOpsAndType{ssa.OpCvt32Fto32, ssa.OpTrunc32to8, TINT32},
+ twoTypes{TFLOAT32, TUINT16}: twoOpsAndType{ssa.OpCvt32Fto32, ssa.OpTrunc32to16, TINT32},
+ twoTypes{TFLOAT32, TUINT32}: twoOpsAndType{ssa.OpCvt32Fto64, ssa.OpTrunc64to32, TINT64}, // go wide to dodge unsigned
+ twoTypes{TFLOAT32, TUINT64}: twoOpsAndType{ssa.OpInvalid, ssa.OpCopy, TUINT64}, // Cvt32Fto64U, branchy code expansion instead
+
+ twoTypes{TFLOAT64, TUINT8}: twoOpsAndType{ssa.OpCvt64Fto32, ssa.OpTrunc32to8, TINT32},
+ twoTypes{TFLOAT64, TUINT16}: twoOpsAndType{ssa.OpCvt64Fto32, ssa.OpTrunc32to16, TINT32},
+ twoTypes{TFLOAT64, TUINT32}: twoOpsAndType{ssa.OpCvt64Fto64, ssa.OpTrunc64to32, TINT64}, // go wide to dodge unsigned
+ twoTypes{TFLOAT64, TUINT64}: twoOpsAndType{ssa.OpInvalid, ssa.OpCopy, TUINT64}, // Cvt64Fto64U, branchy code expansion instead
+
+ // float
+ twoTypes{TFLOAT64, TFLOAT32}: twoOpsAndType{ssa.OpCvt64Fto32F, ssa.OpCopy, TFLOAT32},
+ twoTypes{TFLOAT64, TFLOAT64}: twoOpsAndType{ssa.OpCopy, ssa.OpCopy, TFLOAT64},
+ twoTypes{TFLOAT32, TFLOAT32}: twoOpsAndType{ssa.OpCopy, ssa.OpCopy, TFLOAT32},
+ twoTypes{TFLOAT32, TFLOAT64}: twoOpsAndType{ssa.OpCvt32Fto64F, ssa.OpCopy, TFLOAT64},
+}
+
+var shiftOpToSSA = map[opAndTwoTypes]ssa.Op{
+ opAndTwoTypes{OLSH, TINT8, TUINT8}: ssa.OpLsh8x8,
+ opAndTwoTypes{OLSH, TUINT8, TUINT8}: ssa.OpLsh8x8,
+ opAndTwoTypes{OLSH, TINT8, TUINT16}: ssa.OpLsh8x16,
+ opAndTwoTypes{OLSH, TUINT8, TUINT16}: ssa.OpLsh8x16,
+ opAndTwoTypes{OLSH, TINT8, TUINT32}: ssa.OpLsh8x32,
+ opAndTwoTypes{OLSH, TUINT8, TUINT32}: ssa.OpLsh8x32,
+ opAndTwoTypes{OLSH, TINT8, TUINT64}: ssa.OpLsh8x64,
+ opAndTwoTypes{OLSH, TUINT8, TUINT64}: ssa.OpLsh8x64,
+
+ opAndTwoTypes{OLSH, TINT16, TUINT8}: ssa.OpLsh16x8,
+ opAndTwoTypes{OLSH, TUINT16, TUINT8}: ssa.OpLsh16x8,
+ opAndTwoTypes{OLSH, TINT16, TUINT16}: ssa.OpLsh16x16,
+ opAndTwoTypes{OLSH, TUINT16, TUINT16}: ssa.OpLsh16x16,
+ opAndTwoTypes{OLSH, TINT16, TUINT32}: ssa.OpLsh16x32,
+ opAndTwoTypes{OLSH, TUINT16, TUINT32}: ssa.OpLsh16x32,
+ opAndTwoTypes{OLSH, TINT16, TUINT64}: ssa.OpLsh16x64,
+ opAndTwoTypes{OLSH, TUINT16, TUINT64}: ssa.OpLsh16x64,
+
+ opAndTwoTypes{OLSH, TINT32, TUINT8}: ssa.OpLsh32x8,
+ opAndTwoTypes{OLSH, TUINT32, TUINT8}: ssa.OpLsh32x8,
+ opAndTwoTypes{OLSH, TINT32, TUINT16}: ssa.OpLsh32x16,
+ opAndTwoTypes{OLSH, TUINT32, TUINT16}: ssa.OpLsh32x16,
+ opAndTwoTypes{OLSH, TINT32, TUINT32}: ssa.OpLsh32x32,
+ opAndTwoTypes{OLSH, TUINT32, TUINT32}: ssa.OpLsh32x32,
+ opAndTwoTypes{OLSH, TINT32, TUINT64}: ssa.OpLsh32x64,
+ opAndTwoTypes{OLSH, TUINT32, TUINT64}: ssa.OpLsh32x64,
+
+ opAndTwoTypes{OLSH, TINT64, TUINT8}: ssa.OpLsh64x8,
+ opAndTwoTypes{OLSH, TUINT64, TUINT8}: ssa.OpLsh64x8,
+ opAndTwoTypes{OLSH, TINT64, TUINT16}: ssa.OpLsh64x16,
+ opAndTwoTypes{OLSH, TUINT64, TUINT16}: ssa.OpLsh64x16,
+ opAndTwoTypes{OLSH, TINT64, TUINT32}: ssa.OpLsh64x32,
+ opAndTwoTypes{OLSH, TUINT64, TUINT32}: ssa.OpLsh64x32,
+ opAndTwoTypes{OLSH, TINT64, TUINT64}: ssa.OpLsh64x64,
+ opAndTwoTypes{OLSH, TUINT64, TUINT64}: ssa.OpLsh64x64,
+
+ opAndTwoTypes{ORSH, TINT8, TUINT8}: ssa.OpRsh8x8,
+ opAndTwoTypes{ORSH, TUINT8, TUINT8}: ssa.OpRsh8Ux8,
+ opAndTwoTypes{ORSH, TINT8, TUINT16}: ssa.OpRsh8x16,
+ opAndTwoTypes{ORSH, TUINT8, TUINT16}: ssa.OpRsh8Ux16,
+ opAndTwoTypes{ORSH, TINT8, TUINT32}: ssa.OpRsh8x32,
+ opAndTwoTypes{ORSH, TUINT8, TUINT32}: ssa.OpRsh8Ux32,
+ opAndTwoTypes{ORSH, TINT8, TUINT64}: ssa.OpRsh8x64,
+ opAndTwoTypes{ORSH, TUINT8, TUINT64}: ssa.OpRsh8Ux64,
+
+ opAndTwoTypes{ORSH, TINT16, TUINT8}: ssa.OpRsh16x8,
+ opAndTwoTypes{ORSH, TUINT16, TUINT8}: ssa.OpRsh16Ux8,
+ opAndTwoTypes{ORSH, TINT16, TUINT16}: ssa.OpRsh16x16,
+ opAndTwoTypes{ORSH, TUINT16, TUINT16}: ssa.OpRsh16Ux16,
+ opAndTwoTypes{ORSH, TINT16, TUINT32}: ssa.OpRsh16x32,
+ opAndTwoTypes{ORSH, TUINT16, TUINT32}: ssa.OpRsh16Ux32,
+ opAndTwoTypes{ORSH, TINT16, TUINT64}: ssa.OpRsh16x64,
+ opAndTwoTypes{ORSH, TUINT16, TUINT64}: ssa.OpRsh16Ux64,
+
+ opAndTwoTypes{ORSH, TINT32, TUINT8}: ssa.OpRsh32x8,
+ opAndTwoTypes{ORSH, TUINT32, TUINT8}: ssa.OpRsh32Ux8,
+ opAndTwoTypes{ORSH, TINT32, TUINT16}: ssa.OpRsh32x16,
+ opAndTwoTypes{ORSH, TUINT32, TUINT16}: ssa.OpRsh32Ux16,
+ opAndTwoTypes{ORSH, TINT32, TUINT32}: ssa.OpRsh32x32,
+ opAndTwoTypes{ORSH, TUINT32, TUINT32}: ssa.OpRsh32Ux32,
+ opAndTwoTypes{ORSH, TINT32, TUINT64}: ssa.OpRsh32x64,
+ opAndTwoTypes{ORSH, TUINT32, TUINT64}: ssa.OpRsh32Ux64,
+
+ opAndTwoTypes{ORSH, TINT64, TUINT8}: ssa.OpRsh64x8,
+ opAndTwoTypes{ORSH, TUINT64, TUINT8}: ssa.OpRsh64Ux8,
+ opAndTwoTypes{ORSH, TINT64, TUINT16}: ssa.OpRsh64x16,
+ opAndTwoTypes{ORSH, TUINT64, TUINT16}: ssa.OpRsh64Ux16,
+ opAndTwoTypes{ORSH, TINT64, TUINT32}: ssa.OpRsh64x32,
+ opAndTwoTypes{ORSH, TUINT64, TUINT32}: ssa.OpRsh64Ux32,
+ opAndTwoTypes{ORSH, TINT64, TUINT64}: ssa.OpRsh64x64,
+ opAndTwoTypes{ORSH, TUINT64, TUINT64}: ssa.OpRsh64Ux64,
+}
+
+func (s *state) ssaShiftOp(op Op, t *Type, u *Type) ssa.Op {
+ etype1 := s.concreteEtype(t)
+ etype2 := s.concreteEtype(u)
+ x, ok := shiftOpToSSA[opAndTwoTypes{op, etype1, etype2}]
+ if !ok {
+ s.Unimplementedf("unhandled shift op %s etype=%s/%s", opnames[op], Econv(etype1), Econv(etype2))
+ }
+ return x
+}
+
+func (s *state) ssaRotateOp(op Op, t *Type) ssa.Op {
+ etype1 := s.concreteEtype(t)
+ x, ok := opToSSA[opAndType{op, etype1}]
+ if !ok {
+ s.Unimplementedf("unhandled rotate op %s etype=%s", opnames[op], Econv(etype1))
+ }
+ return x
+}
+
+// expr converts the expression n to ssa, adds it to s and returns the ssa result.
+func (s *state) expr(n *Node) *ssa.Value {
+ s.pushLine(n.Lineno)
+ defer s.popLine()
+
+ s.stmtList(n.Ninit)
+ switch n.Op {
+ case OCFUNC:
+ aux := s.lookupSymbol(n, &ssa.ExternSymbol{n.Type, n.Left.Sym})
+ return s.entryNewValue1A(ssa.OpAddr, n.Type, aux, s.sb)
+ case OPARAM:
+ addr := s.addr(n, false)
+ return s.newValue2(ssa.OpLoad, n.Left.Type, addr, s.mem())
+ case ONAME:
+ if n.Class == PFUNC {
+ // "value" of a function is the address of the function's closure
+ sym := funcsym(n.Sym)
+ aux := &ssa.ExternSymbol{n.Type, sym}
+ return s.entryNewValue1A(ssa.OpAddr, Ptrto(n.Type), aux, s.sb)
+ }
+ if s.canSSA(n) {
+ return s.variable(n, n.Type)
+ }
+ addr := s.addr(n, false)
+ return s.newValue2(ssa.OpLoad, n.Type, addr, s.mem())
+ case OCLOSUREVAR:
+ addr := s.addr(n, false)
+ return s.newValue2(ssa.OpLoad, n.Type, addr, s.mem())
+ case OLITERAL:
+ switch n.Val().Ctype() {
+ case CTINT:
+ i := Mpgetfix(n.Val().U.(*Mpint))
+ switch n.Type.Size() {
+ case 1:
+ return s.constInt8(n.Type, int8(i))
+ case 2:
+ return s.constInt16(n.Type, int16(i))
+ case 4:
+ return s.constInt32(n.Type, int32(i))
+ case 8:
+ return s.constInt64(n.Type, i)
+ default:
+ s.Fatalf("bad integer size %d", n.Type.Size())
+ return nil
+ }
+ case CTSTR:
+ return s.entryNewValue0A(ssa.OpConstString, n.Type, n.Val().U)
+ case CTBOOL:
+ v := s.constBool(n.Val().U.(bool))
+ // For some reason the frontend gets the line numbers of
+ // CTBOOL literals totally wrong. Fix it here by grabbing
+ // the line number of the enclosing AST node.
+ if len(s.line) >= 2 {
+ v.Line = s.line[len(s.line)-2]
+ }
+ return v
+ case CTNIL:
+ t := n.Type
+ switch {
+ case t.IsSlice():
+ return s.entryNewValue0(ssa.OpConstSlice, t)
+ case t.IsInterface():
+ return s.entryNewValue0(ssa.OpConstInterface, t)
+ default:
+ return s.entryNewValue0(ssa.OpConstNil, t)
+ }
+ case CTFLT:
+ f := n.Val().U.(*Mpflt)
+ switch n.Type.Size() {
+ case 4:
+ return s.constFloat32(n.Type, mpgetflt32(f))
+ case 8:
+ return s.constFloat64(n.Type, mpgetflt(f))
+ default:
+ s.Fatalf("bad float size %d", n.Type.Size())
+ return nil
+ }
+ case CTCPLX:
+ c := n.Val().U.(*Mpcplx)
+ r := &c.Real
+ i := &c.Imag
+ switch n.Type.Size() {
+ case 8:
+ {
+ pt := Types[TFLOAT32]
+ return s.newValue2(ssa.OpComplexMake, n.Type,
+ s.constFloat32(pt, mpgetflt32(r)),
+ s.constFloat32(pt, mpgetflt32(i)))
+ }
+ case 16:
+ {
+ pt := Types[TFLOAT64]
+ return s.newValue2(ssa.OpComplexMake, n.Type,
+ s.constFloat64(pt, mpgetflt(r)),
+ s.constFloat64(pt, mpgetflt(i)))
+ }
+ default:
+			s.Fatalf("bad complex size %d", n.Type.Size())
+ return nil
+ }
+
+ default:
+ s.Unimplementedf("unhandled OLITERAL %v", n.Val().Ctype())
+ return nil
+ }
+ case OCONVNOP:
+ to := n.Type
+ from := n.Left.Type
+
+		// Assume everything will work out, so set up our return value.
+		// Anything interesting that happens from here is a fatal error.
+ x := s.expr(n.Left)
+
+ // Special case for not confusing GC and liveness.
+ // We don't want pointers accidentally classified
+ // as not-pointers or vice-versa because of copy
+ // elision.
+ if to.IsPtr() != from.IsPtr() {
+ return s.newValue2(ssa.OpConvert, to, x, s.mem())
+ }
+
+ v := s.newValue1(ssa.OpCopy, to, x) // ensure that v has the right type
+
+ // CONVNOP closure
+ if to.Etype == TFUNC && from.IsPtr() {
+ return v
+ }
+
+ // named <--> unnamed type or typed <--> untyped const
+ if from.Etype == to.Etype {
+ return v
+ }
+
+ // unsafe.Pointer <--> *T
+ if to.Etype == TUNSAFEPTR && from.IsPtr() || from.Etype == TUNSAFEPTR && to.IsPtr() {
+ return v
+ }
+
+ dowidth(from)
+ dowidth(to)
+ if from.Width != to.Width {
+ s.Fatalf("CONVNOP width mismatch %v (%d) -> %v (%d)\n", from, from.Width, to, to.Width)
+ return nil
+ }
+ if etypesign(from.Etype) != etypesign(to.Etype) {
+ s.Fatalf("CONVNOP sign mismatch %v (%s) -> %v (%s)\n", from, Econv(from.Etype), to, Econv(to.Etype))
+ return nil
+ }
+
+ if instrumenting {
+ // These appear to be fine, but they fail the
+ // integer constraint below, so okay them here.
+ // Sample non-integer conversion: map[string]string -> *uint8
+ return v
+ }
+
+ if etypesign(from.Etype) == 0 {
+ s.Fatalf("CONVNOP unrecognized non-integer %v -> %v\n", from, to)
+ return nil
+ }
+
+ // integer, same width, same sign
+ return v
+
+ case OCONV:
+ x := s.expr(n.Left)
+ ft := n.Left.Type // from type
+ tt := n.Type // to type
+ if ft.IsInteger() && tt.IsInteger() {
+ var op ssa.Op
+ if tt.Size() == ft.Size() {
+ op = ssa.OpCopy
+ } else if tt.Size() < ft.Size() {
+ // truncation
+ switch 10*ft.Size() + tt.Size() {
+ case 21:
+ op = ssa.OpTrunc16to8
+ case 41:
+ op = ssa.OpTrunc32to8
+ case 42:
+ op = ssa.OpTrunc32to16
+ case 81:
+ op = ssa.OpTrunc64to8
+ case 82:
+ op = ssa.OpTrunc64to16
+ case 84:
+ op = ssa.OpTrunc64to32
+ default:
+ s.Fatalf("weird integer truncation %s -> %s", ft, tt)
+ }
+ } else if ft.IsSigned() {
+ // sign extension
+ switch 10*ft.Size() + tt.Size() {
+ case 12:
+ op = ssa.OpSignExt8to16
+ case 14:
+ op = ssa.OpSignExt8to32
+ case 18:
+ op = ssa.OpSignExt8to64
+ case 24:
+ op = ssa.OpSignExt16to32
+ case 28:
+ op = ssa.OpSignExt16to64
+ case 48:
+ op = ssa.OpSignExt32to64
+ default:
+ s.Fatalf("bad integer sign extension %s -> %s", ft, tt)
+ }
+ } else {
+ // zero extension
+ switch 10*ft.Size() + tt.Size() {
+ case 12:
+ op = ssa.OpZeroExt8to16
+ case 14:
+ op = ssa.OpZeroExt8to32
+ case 18:
+ op = ssa.OpZeroExt8to64
+ case 24:
+ op = ssa.OpZeroExt16to32
+ case 28:
+ op = ssa.OpZeroExt16to64
+ case 48:
+ op = ssa.OpZeroExt32to64
+ default:
+					s.Fatalf("weird integer zero extension %s -> %s", ft, tt)
+ }
+ }
+ return s.newValue1(op, n.Type, x)
+ }
+
+ if ft.IsFloat() || tt.IsFloat() {
+ conv, ok := fpConvOpToSSA[twoTypes{s.concreteEtype(ft), s.concreteEtype(tt)}]
+ if !ok {
+ s.Fatalf("weird float conversion %s -> %s", ft, tt)
+ }
+ op1, op2, it := conv.op1, conv.op2, conv.intermediateType
+
+ if op1 != ssa.OpInvalid && op2 != ssa.OpInvalid {
+ // normal case, not tripping over unsigned 64
+ if op1 == ssa.OpCopy {
+ if op2 == ssa.OpCopy {
+ return x
+ }
+ return s.newValue1(op2, n.Type, x)
+ }
+ if op2 == ssa.OpCopy {
+ return s.newValue1(op1, n.Type, x)
+ }
+ return s.newValue1(op2, n.Type, s.newValue1(op1, Types[it], x))
+ }
+ // Tricky 64-bit unsigned cases.
+ if ft.IsInteger() {
+ // therefore tt is float32 or float64, and ft is also unsigned
+ if tt.Size() == 4 {
+ return s.uint64Tofloat32(n, x, ft, tt)
+ }
+ if tt.Size() == 8 {
+ return s.uint64Tofloat64(n, x, ft, tt)
+ }
+ s.Fatalf("weird unsigned integer to float conversion %s -> %s", ft, tt)
+ }
+ // therefore ft is float32 or float64, and tt is unsigned integer
+ if ft.Size() == 4 {
+ return s.float32ToUint64(n, x, ft, tt)
+ }
+ if ft.Size() == 8 {
+ return s.float64ToUint64(n, x, ft, tt)
+ }
+ s.Fatalf("weird float to unsigned integer conversion %s -> %s", ft, tt)
+ return nil
+ }
+
+ if ft.IsComplex() && tt.IsComplex() {
+ var op ssa.Op
+ if ft.Size() == tt.Size() {
+ op = ssa.OpCopy
+ } else if ft.Size() == 8 && tt.Size() == 16 {
+ op = ssa.OpCvt32Fto64F
+ } else if ft.Size() == 16 && tt.Size() == 8 {
+ op = ssa.OpCvt64Fto32F
+ } else {
+ s.Fatalf("weird complex conversion %s -> %s", ft, tt)
+ }
+ ftp := floatForComplex(ft)
+ ttp := floatForComplex(tt)
+ return s.newValue2(ssa.OpComplexMake, tt,
+ s.newValue1(op, ttp, s.newValue1(ssa.OpComplexReal, ftp, x)),
+ s.newValue1(op, ttp, s.newValue1(ssa.OpComplexImag, ftp, x)))
+ }
+
+ s.Unimplementedf("unhandled OCONV %s -> %s", Econv(n.Left.Type.Etype), Econv(n.Type.Etype))
+ return nil
+
+ case ODOTTYPE:
+ res, _ := s.dottype(n, false)
+ return res
+
+ // binary ops
+ case OLT, OEQ, ONE, OLE, OGE, OGT:
+ a := s.expr(n.Left)
+ b := s.expr(n.Right)
+ if n.Left.Type.IsComplex() {
+ pt := floatForComplex(n.Left.Type)
+ op := s.ssaOp(OEQ, pt)
+ r := s.newValue2(op, Types[TBOOL], s.newValue1(ssa.OpComplexReal, pt, a), s.newValue1(ssa.OpComplexReal, pt, b))
+ i := s.newValue2(op, Types[TBOOL], s.newValue1(ssa.OpComplexImag, pt, a), s.newValue1(ssa.OpComplexImag, pt, b))
+ c := s.newValue2(ssa.OpAnd8, Types[TBOOL], r, i)
+ switch n.Op {
+ case OEQ:
+ return c
+ case ONE:
+ return s.newValue1(ssa.OpNot, Types[TBOOL], c)
+ default:
+ s.Fatalf("ordered complex compare %s", opnames[n.Op])
+ }
+ }
+ return s.newValue2(s.ssaOp(n.Op, n.Left.Type), Types[TBOOL], a, b)
+ case OMUL:
+ a := s.expr(n.Left)
+ b := s.expr(n.Right)
+ if n.Type.IsComplex() {
+ mulop := ssa.OpMul64F
+ addop := ssa.OpAdd64F
+ subop := ssa.OpSub64F
+ pt := floatForComplex(n.Type) // Could be Float32 or Float64
+ wt := Types[TFLOAT64] // Compute in Float64 to minimize cancellation error
+
+ areal := s.newValue1(ssa.OpComplexReal, pt, a)
+ breal := s.newValue1(ssa.OpComplexReal, pt, b)
+ aimag := s.newValue1(ssa.OpComplexImag, pt, a)
+ bimag := s.newValue1(ssa.OpComplexImag, pt, b)
+
+ if pt != wt { // Widen for calculation
+ areal = s.newValue1(ssa.OpCvt32Fto64F, wt, areal)
+ breal = s.newValue1(ssa.OpCvt32Fto64F, wt, breal)
+ aimag = s.newValue1(ssa.OpCvt32Fto64F, wt, aimag)
+ bimag = s.newValue1(ssa.OpCvt32Fto64F, wt, bimag)
+ }
+
+ xreal := s.newValue2(subop, wt, s.newValue2(mulop, wt, areal, breal), s.newValue2(mulop, wt, aimag, bimag))
+ ximag := s.newValue2(addop, wt, s.newValue2(mulop, wt, areal, bimag), s.newValue2(mulop, wt, aimag, breal))
+
+ if pt != wt { // Narrow to store back
+ xreal = s.newValue1(ssa.OpCvt64Fto32F, pt, xreal)
+ ximag = s.newValue1(ssa.OpCvt64Fto32F, pt, ximag)
+ }
+
+ return s.newValue2(ssa.OpComplexMake, n.Type, xreal, ximag)
+ }
+ return s.newValue2(s.ssaOp(n.Op, n.Type), a.Type, a, b)
+
+ case ODIV:
+ a := s.expr(n.Left)
+ b := s.expr(n.Right)
+ if n.Type.IsComplex() {
+ // TODO this is not executed because the front-end substitutes a runtime call.
+ // That probably ought to change; with modest optimization the widen/narrow
+ // conversions could all be elided in larger expression trees.
+ mulop := ssa.OpMul64F
+ addop := ssa.OpAdd64F
+ subop := ssa.OpSub64F
+ divop := ssa.OpDiv64F
+ pt := floatForComplex(n.Type) // Could be Float32 or Float64
+ wt := Types[TFLOAT64] // Compute in Float64 to minimize cancellation error
+
+ areal := s.newValue1(ssa.OpComplexReal, pt, a)
+ breal := s.newValue1(ssa.OpComplexReal, pt, b)
+ aimag := s.newValue1(ssa.OpComplexImag, pt, a)
+ bimag := s.newValue1(ssa.OpComplexImag, pt, b)
+
+ if pt != wt { // Widen for calculation
+ areal = s.newValue1(ssa.OpCvt32Fto64F, wt, areal)
+ breal = s.newValue1(ssa.OpCvt32Fto64F, wt, breal)
+ aimag = s.newValue1(ssa.OpCvt32Fto64F, wt, aimag)
+ bimag = s.newValue1(ssa.OpCvt32Fto64F, wt, bimag)
+ }
+
+ denom := s.newValue2(addop, wt, s.newValue2(mulop, wt, breal, breal), s.newValue2(mulop, wt, bimag, bimag))
+ xreal := s.newValue2(addop, wt, s.newValue2(mulop, wt, areal, breal), s.newValue2(mulop, wt, aimag, bimag))
+ ximag := s.newValue2(subop, wt, s.newValue2(mulop, wt, aimag, breal), s.newValue2(mulop, wt, areal, bimag))
+
+ // TODO not sure if this is best done in wide precision or narrow
+ // Double-rounding might be an issue.
+ // Note that the pre-SSA implementation does the entire calculation
+ // in wide format, so wide is compatible.
+ xreal = s.newValue2(divop, wt, xreal, denom)
+ ximag = s.newValue2(divop, wt, ximag, denom)
+
+ if pt != wt { // Narrow to store back
+ xreal = s.newValue1(ssa.OpCvt64Fto32F, pt, xreal)
+ ximag = s.newValue1(ssa.OpCvt64Fto32F, pt, ximag)
+ }
+ return s.newValue2(ssa.OpComplexMake, n.Type, xreal, ximag)
+ }
+ if n.Type.IsFloat() {
+ return s.newValue2(s.ssaOp(n.Op, n.Type), a.Type, a, b)
+ } else {
+ // do a size-appropriate check for zero
+ cmp := s.newValue2(s.ssaOp(ONE, n.Type), Types[TBOOL], b, s.zeroVal(n.Type))
+ s.check(cmp, panicdivide)
+ return s.newValue2(s.ssaOp(n.Op, n.Type), a.Type, a, b)
+ }
+ case OMOD:
+ a := s.expr(n.Left)
+ b := s.expr(n.Right)
+ // do a size-appropriate check for zero
+ cmp := s.newValue2(s.ssaOp(ONE, n.Type), Types[TBOOL], b, s.zeroVal(n.Type))
+ s.check(cmp, panicdivide)
+ return s.newValue2(s.ssaOp(n.Op, n.Type), a.Type, a, b)
+ case OADD, OSUB:
+ a := s.expr(n.Left)
+ b := s.expr(n.Right)
+ if n.Type.IsComplex() {
+ pt := floatForComplex(n.Type)
+ op := s.ssaOp(n.Op, pt)
+ return s.newValue2(ssa.OpComplexMake, n.Type,
+ s.newValue2(op, pt, s.newValue1(ssa.OpComplexReal, pt, a), s.newValue1(ssa.OpComplexReal, pt, b)),
+ s.newValue2(op, pt, s.newValue1(ssa.OpComplexImag, pt, a), s.newValue1(ssa.OpComplexImag, pt, b)))
+ }
+ return s.newValue2(s.ssaOp(n.Op, n.Type), a.Type, a, b)
+ case OAND, OOR, OHMUL, OXOR:
+ a := s.expr(n.Left)
+ b := s.expr(n.Right)
+ return s.newValue2(s.ssaOp(n.Op, n.Type), a.Type, a, b)
+ case OLSH, ORSH:
+ a := s.expr(n.Left)
+ b := s.expr(n.Right)
+ return s.newValue2(s.ssaShiftOp(n.Op, n.Type, n.Right.Type), a.Type, a, b)
+ case OLROT:
+ a := s.expr(n.Left)
+ i := n.Right.Int()
+ if i <= 0 || i >= n.Type.Size()*8 {
+ s.Fatalf("Wrong rotate distance for LROT, expected 1 through %d, saw %d", n.Type.Size()*8-1, i)
+ }
+ return s.newValue1I(s.ssaRotateOp(n.Op, n.Type), a.Type, i, a)
+ case OANDAND, OOROR:
+ // To implement OANDAND (and OOROR), we introduce a
+ // new temporary variable to hold the result. The
+ // variable is associated with the OANDAND node in the
+ // s.vars table (normally variables are only
+ // associated with ONAME nodes). We convert
+ // A && B
+ // to
+ // var = A
+ // if var {
+ // var = B
+ // }
+ // Using var in the subsequent block introduces the
+ // necessary phi variable.
+ el := s.expr(n.Left)
+ s.vars[n] = el
+
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Control = el
+ // In theory, we should set b.Likely here based on context.
+ // However, gc only gives us likeliness hints
+ // in a single place, for plain OIF statements,
+		// and passing around context is finicky, so don't bother for now.
+
+ bRight := s.f.NewBlock(ssa.BlockPlain)
+ bResult := s.f.NewBlock(ssa.BlockPlain)
+ if n.Op == OANDAND {
+ b.AddEdgeTo(bRight)
+ b.AddEdgeTo(bResult)
+ } else if n.Op == OOROR {
+ b.AddEdgeTo(bResult)
+ b.AddEdgeTo(bRight)
+ }
+
+ s.startBlock(bRight)
+ er := s.expr(n.Right)
+ s.vars[n] = er
+
+ b = s.endBlock()
+ b.AddEdgeTo(bResult)
+
+ s.startBlock(bResult)
+ return s.variable(n, Types[TBOOL])
+ case OCOMPLEX:
+ r := s.expr(n.Left)
+ i := s.expr(n.Right)
+ return s.newValue2(ssa.OpComplexMake, n.Type, r, i)
+
+ // unary ops
+ case OMINUS:
+ a := s.expr(n.Left)
+ if n.Type.IsComplex() {
+ tp := floatForComplex(n.Type)
+ negop := s.ssaOp(n.Op, tp)
+ return s.newValue2(ssa.OpComplexMake, n.Type,
+ s.newValue1(negop, tp, s.newValue1(ssa.OpComplexReal, tp, a)),
+ s.newValue1(negop, tp, s.newValue1(ssa.OpComplexImag, tp, a)))
+ }
+ return s.newValue1(s.ssaOp(n.Op, n.Type), a.Type, a)
+ case ONOT, OCOM, OSQRT:
+ a := s.expr(n.Left)
+ return s.newValue1(s.ssaOp(n.Op, n.Type), a.Type, a)
+ case OIMAG, OREAL:
+ a := s.expr(n.Left)
+ return s.newValue1(s.ssaOp(n.Op, n.Left.Type), n.Type, a)
+ case OPLUS:
+ return s.expr(n.Left)
+
+ case OADDR:
+ return s.addr(n.Left, n.Bounded)
+
+ case OINDREG:
+ if int(n.Reg) != Thearch.REGSP {
+ s.Unimplementedf("OINDREG of non-SP register %s in expr: %v", obj.Rconv(int(n.Reg)), n)
+ return nil
+ }
+ addr := s.entryNewValue1I(ssa.OpOffPtr, Ptrto(n.Type), n.Xoffset, s.sp)
+ return s.newValue2(ssa.OpLoad, n.Type, addr, s.mem())
+
+ case OIND:
+ p := s.expr(n.Left)
+ s.nilCheck(p)
+ return s.newValue2(ssa.OpLoad, n.Type, p, s.mem())
+
+ case ODOT:
+ t := n.Left.Type
+ if canSSAType(t) {
+ v := s.expr(n.Left)
+ return s.newValue1I(ssa.OpStructSelect, n.Type, fieldIdx(n), v)
+ }
+ p := s.addr(n, false)
+ return s.newValue2(ssa.OpLoad, n.Type, p, s.mem())
+
+ case ODOTPTR:
+ p := s.expr(n.Left)
+ s.nilCheck(p)
+ p = s.newValue2(ssa.OpAddPtr, p.Type, p, s.constInt(Types[TINT], n.Xoffset))
+ return s.newValue2(ssa.OpLoad, n.Type, p, s.mem())
+
+ case OINDEX:
+ switch {
+ case n.Left.Type.IsString():
+ a := s.expr(n.Left)
+ i := s.expr(n.Right)
+ i = s.extendIndex(i)
+ if !n.Bounded {
+ len := s.newValue1(ssa.OpStringLen, Types[TINT], a)
+ s.boundsCheck(i, len)
+ }
+ ptrtyp := Ptrto(Types[TUINT8])
+ ptr := s.newValue1(ssa.OpStringPtr, ptrtyp, a)
+ ptr = s.newValue2(ssa.OpAddPtr, ptrtyp, ptr, i)
+ return s.newValue2(ssa.OpLoad, Types[TUINT8], ptr, s.mem())
+ case n.Left.Type.IsSlice():
+ p := s.addr(n, false)
+ return s.newValue2(ssa.OpLoad, n.Left.Type.Type, p, s.mem())
+ case n.Left.Type.IsArray():
+ // TODO: fix when we can SSA arrays of length 1.
+ p := s.addr(n, false)
+ return s.newValue2(ssa.OpLoad, n.Left.Type.Type, p, s.mem())
+ default:
+ s.Fatalf("bad type for index %v", n.Left.Type)
+ return nil
+ }
+
+ case OLEN, OCAP:
+ switch {
+ case n.Left.Type.IsSlice():
+ op := ssa.OpSliceLen
+ if n.Op == OCAP {
+ op = ssa.OpSliceCap
+ }
+ return s.newValue1(op, Types[TINT], s.expr(n.Left))
+ case n.Left.Type.IsString(): // string; not reachable for OCAP
+ return s.newValue1(ssa.OpStringLen, Types[TINT], s.expr(n.Left))
+ case n.Left.Type.IsMap(), n.Left.Type.IsChan():
+ return s.referenceTypeBuiltin(n, s.expr(n.Left))
+ default: // array
+ return s.constInt(Types[TINT], n.Left.Type.Bound)
+ }
+
+ case OSPTR:
+ a := s.expr(n.Left)
+ if n.Left.Type.IsSlice() {
+ return s.newValue1(ssa.OpSlicePtr, n.Type, a)
+ } else {
+ return s.newValue1(ssa.OpStringPtr, n.Type, a)
+ }
+
+ case OITAB:
+ a := s.expr(n.Left)
+ return s.newValue1(ssa.OpITab, n.Type, a)
+
+ case OEFACE:
+ tab := s.expr(n.Left)
+ data := s.expr(n.Right)
+ // The frontend allows putting things like struct{*byte} in
+ // the data portion of an eface. But we don't want struct{*byte}
+ // as a register type because (among other reasons) the liveness
+ // analysis is confused by the "fat" variables that result from
+ // such types being spilled.
+ // So here we ensure that we are selecting the underlying pointer
+ // when we build an eface.
+ // TODO: get rid of this now that structs can be SSA'd?
+ for !data.Type.IsPtr() {
+ switch {
+ case data.Type.IsArray():
+ data = s.newValue1I(ssa.OpArrayIndex, data.Type.Elem(), 0, data)
+ case data.Type.IsStruct():
+ for i := data.Type.NumFields() - 1; i >= 0; i-- {
+ f := data.Type.FieldType(i)
+ if f.Size() == 0 {
+ // eface type could also be struct{p *byte; q [0]int}
+ continue
+ }
+ data = s.newValue1I(ssa.OpStructSelect, f, i, data)
+ break
+ }
+ default:
+ s.Fatalf("type being put into an eface isn't a pointer")
+ }
+ }
+ return s.newValue2(ssa.OpIMake, n.Type, tab, data)
+
+ case OSLICE, OSLICEARR:
+ v := s.expr(n.Left)
+ var i, j *ssa.Value
+ if n.Right.Left != nil {
+ i = s.extendIndex(s.expr(n.Right.Left))
+ }
+ if n.Right.Right != nil {
+ j = s.extendIndex(s.expr(n.Right.Right))
+ }
+ p, l, c := s.slice(n.Left.Type, v, i, j, nil)
+ return s.newValue3(ssa.OpSliceMake, n.Type, p, l, c)
+ case OSLICESTR:
+ v := s.expr(n.Left)
+ var i, j *ssa.Value
+ if n.Right.Left != nil {
+ i = s.extendIndex(s.expr(n.Right.Left))
+ }
+ if n.Right.Right != nil {
+ j = s.extendIndex(s.expr(n.Right.Right))
+ }
+ p, l, _ := s.slice(n.Left.Type, v, i, j, nil)
+ return s.newValue2(ssa.OpStringMake, n.Type, p, l)
+ case OSLICE3, OSLICE3ARR:
+ v := s.expr(n.Left)
+ var i *ssa.Value
+ if n.Right.Left != nil {
+ i = s.extendIndex(s.expr(n.Right.Left))
+ }
+ j := s.extendIndex(s.expr(n.Right.Right.Left))
+ k := s.extendIndex(s.expr(n.Right.Right.Right))
+ p, l, c := s.slice(n.Left.Type, v, i, j, k)
+ return s.newValue3(ssa.OpSliceMake, n.Type, p, l, c)
+
+ case OCALLFUNC, OCALLINTER, OCALLMETH:
+ a := s.call(n, callNormal)
+ return s.newValue2(ssa.OpLoad, n.Type, a, s.mem())
+
+ case OGETG:
+ return s.newValue1(ssa.OpGetG, n.Type, s.mem())
+
+ case OAPPEND:
+ // append(s, e1, e2, e3). Compile like:
+ // ptr,len,cap := s
+ // newlen := len + 3
+ // if newlen > s.cap {
+ // ptr,_,cap = growslice(s, newlen)
+ // }
+ // *(ptr+len) = e1
+ // *(ptr+len+1) = e2
+ // *(ptr+len+2) = e3
+ // makeslice(ptr,newlen,cap)
+
+ et := n.Type.Type
+ pt := Ptrto(et)
+
+ // Evaluate slice
+ slice := s.expr(n.List.N)
+
+ // Allocate new blocks
+ grow := s.f.NewBlock(ssa.BlockPlain)
+ assign := s.f.NewBlock(ssa.BlockPlain)
+
+ // Decide if we need to grow
+ nargs := int64(count(n.List) - 1)
+ p := s.newValue1(ssa.OpSlicePtr, pt, slice)
+ l := s.newValue1(ssa.OpSliceLen, Types[TINT], slice)
+ c := s.newValue1(ssa.OpSliceCap, Types[TINT], slice)
+ nl := s.newValue2(s.ssaOp(OADD, Types[TINT]), Types[TINT], l, s.constInt(Types[TINT], nargs))
+ cmp := s.newValue2(s.ssaOp(OGT, Types[TINT]), Types[TBOOL], nl, c)
+ s.vars[&ptrVar] = p
+ s.vars[&capVar] = c
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Likely = ssa.BranchUnlikely
+ b.Control = cmp
+ b.AddEdgeTo(grow)
+ b.AddEdgeTo(assign)
+
+ // Call growslice
+ s.startBlock(grow)
+ taddr := s.newValue1A(ssa.OpAddr, Types[TUINTPTR], &ssa.ExternSymbol{Types[TUINTPTR], typenamesym(n.Type)}, s.sb)
+
+ r := s.rtcall(growslice, true, []*Type{pt, Types[TINT], Types[TINT]}, taddr, p, l, c, nl)
+
+ s.vars[&ptrVar] = r[0]
+ // Note: we don't need to read r[1], the result's length. It will be nl.
+ // (or maybe we should, we just have to spill/restore nl otherwise?)
+ s.vars[&capVar] = r[2]
+ b = s.endBlock()
+ b.AddEdgeTo(assign)
+
+ // assign new elements to slots
+ s.startBlock(assign)
+
+ // Evaluate args
+ args := make([]*ssa.Value, 0, nargs)
+ store := make([]bool, 0, nargs)
+ for l := n.List.Next; l != nil; l = l.Next {
+ if canSSAType(l.N.Type) {
+ args = append(args, s.expr(l.N))
+ store = append(store, true)
+ } else {
+ args = append(args, s.addr(l.N, false))
+ store = append(store, false)
+ }
+ }
+
+ p = s.variable(&ptrVar, pt) // generates phi for ptr
+ c = s.variable(&capVar, Types[TINT]) // generates phi for cap
+ p2 := s.newValue2(ssa.OpPtrIndex, pt, p, l)
+ // TODO: just one write barrier call for all of these writes?
+ // TODO: maybe just one writeBarrier.enabled check?
+ for i, arg := range args {
+ addr := s.newValue2(ssa.OpPtrIndex, pt, p2, s.constInt(Types[TINT], int64(i)))
+ if store[i] {
+ if haspointers(et) {
+ s.insertWBstore(et, addr, arg, n.Lineno)
+ } else {
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, et.Size(), addr, arg, s.mem())
+ }
+ } else {
+ if haspointers(et) {
+ s.insertWBmove(et, addr, arg, n.Lineno)
+ } else {
+ s.vars[&memVar] = s.newValue3I(ssa.OpMove, ssa.TypeMem, et.Size(), addr, arg, s.mem())
+ }
+ }
+ }
+
+ // make result
+ delete(s.vars, &ptrVar)
+ delete(s.vars, &capVar)
+ return s.newValue3(ssa.OpSliceMake, n.Type, p, nl, c)
+
+ default:
+ s.Unimplementedf("unhandled expr %s", opnames[n.Op])
+ return nil
+ }
+}
+
+// condBranch evaluates the boolean expression cond and branches to yes
+// if cond is true and no if cond is false.
+// This function is intended to handle && and || better than just calling
+// s.expr(cond) and branching on the result.
+func (s *state) condBranch(cond *Node, yes, no *ssa.Block, likely int8) {
+ if cond.Op == OANDAND {
+ mid := s.f.NewBlock(ssa.BlockPlain)
+ s.stmtList(cond.Ninit)
+ s.condBranch(cond.Left, mid, no, max8(likely, 0))
+ s.startBlock(mid)
+ s.condBranch(cond.Right, yes, no, likely)
+ return
+ // Note: if likely==1, then both recursive calls pass 1.
+ // If likely==-1, then we don't have enough information to decide
+ // whether the first branch is likely or not. So we pass 0 for
+ // the likeliness of the first branch.
+ // TODO: have the frontend give us branch prediction hints for
+ // OANDAND and OOROR nodes (if it ever has such info).
+ }
+ if cond.Op == OOROR {
+ mid := s.f.NewBlock(ssa.BlockPlain)
+ s.stmtList(cond.Ninit)
+ s.condBranch(cond.Left, yes, mid, min8(likely, 0))
+ s.startBlock(mid)
+ s.condBranch(cond.Right, yes, no, likely)
+ return
+ // Note: if likely==-1, then both recursive calls pass -1.
+ // If likely==1, then we don't have enough info to decide
+ // the likelihood of the first branch.
+ }
+ if cond.Op == ONOT {
+ s.stmtList(cond.Ninit)
+ s.condBranch(cond.Left, no, yes, -likely)
+ return
+ }
+ c := s.expr(cond)
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Control = c
+ b.Likely = ssa.BranchPrediction(likely) // gc and ssa both use -1/0/+1 for likeliness
+ b.AddEdgeTo(yes)
+ b.AddEdgeTo(no)
+}
+
+// assign does left = right.
+// Right has already been evaluated to ssa, left has not.
+// If deref is true, then we do left = *right instead (and right has already been nil-checked).
+// If deref is true and right == nil, just do left = 0.
+// Include a write barrier if wb is true.
+func (s *state) assign(left *Node, right *ssa.Value, wb, deref bool, line int32) {
+ if left.Op == ONAME && isblank(left) {
+ return
+ }
+ t := left.Type
+ dowidth(t)
+ if s.canSSA(left) {
+ if deref {
+ s.Fatalf("can SSA LHS %s but not RHS %s", left, right)
+ }
+ if left.Op == ODOT {
+ // We're assigning to a field of an ssa-able value.
+ // We need to build a new structure with the new value for the
+ // field we're assigning and the old values for the other fields.
+ // For instance:
+ // type T struct {a, b, c int}
+			//   var x T
+ // x.b = 5
+ // For the x.b = 5 assignment we want to generate x = T{x.a, 5, x.c}
+
+ // Grab information about the structure type.
+ t := left.Left.Type
+ nf := t.NumFields()
+ idx := fieldIdx(left)
+
+ // Grab old value of structure.
+ old := s.expr(left.Left)
+
+ // Make new structure.
+ new := s.newValue0(ssa.StructMakeOp(t.NumFields()), t)
+
+ // Add fields as args.
+ for i := int64(0); i < nf; i++ {
+ if i == idx {
+ new.AddArg(right)
+ } else {
+ new.AddArg(s.newValue1I(ssa.OpStructSelect, t.FieldType(i), i, old))
+ }
+ }
+
+ // Recursively assign the new value we've made to the base of the dot op.
+ s.assign(left.Left, new, false, false, line)
+ // TODO: do we need to update named values here?
+ return
+ }
+ // Update variable assignment.
+ s.vars[left] = right
+ s.addNamedValue(left, right)
+ return
+ }
+ // Left is not ssa-able. Compute its address.
+ addr := s.addr(left, false)
+ if left.Op == ONAME {
+ s.vars[&memVar] = s.newValue1A(ssa.OpVarDef, ssa.TypeMem, left, s.mem())
+ }
+ if deref {
+ // Treat as a mem->mem move.
+ if right == nil {
+ s.vars[&memVar] = s.newValue2I(ssa.OpZero, ssa.TypeMem, t.Size(), addr, s.mem())
+ return
+ }
+ if wb {
+ s.insertWBmove(t, addr, right, line)
+ return
+ }
+ s.vars[&memVar] = s.newValue3I(ssa.OpMove, ssa.TypeMem, t.Size(), addr, right, s.mem())
+ return
+ }
+ // Treat as a store.
+ if wb {
+ s.insertWBstore(t, addr, right, line)
+ return
+ }
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, t.Size(), addr, right, s.mem())
+}
+
+// zeroVal returns the zero value for type t.
+func (s *state) zeroVal(t *Type) *ssa.Value {
+ switch {
+ case t.IsInteger():
+ switch t.Size() {
+ case 1:
+ return s.constInt8(t, 0)
+ case 2:
+ return s.constInt16(t, 0)
+ case 4:
+ return s.constInt32(t, 0)
+ case 8:
+ return s.constInt64(t, 0)
+ default:
+ s.Fatalf("bad sized integer type %s", t)
+ }
+ case t.IsFloat():
+ switch t.Size() {
+ case 4:
+ return s.constFloat32(t, 0)
+ case 8:
+ return s.constFloat64(t, 0)
+ default:
+ s.Fatalf("bad sized float type %s", t)
+ }
+ case t.IsComplex():
+ switch t.Size() {
+ case 8:
+ z := s.constFloat32(Types[TFLOAT32], 0)
+ return s.entryNewValue2(ssa.OpComplexMake, t, z, z)
+ case 16:
+ z := s.constFloat64(Types[TFLOAT64], 0)
+ return s.entryNewValue2(ssa.OpComplexMake, t, z, z)
+ default:
+ s.Fatalf("bad sized complex type %s", t)
+ }
+
+ case t.IsString():
+ return s.entryNewValue0A(ssa.OpConstString, t, "")
+ case t.IsPtr():
+ return s.entryNewValue0(ssa.OpConstNil, t)
+ case t.IsBoolean():
+ return s.constBool(false)
+ case t.IsInterface():
+ return s.entryNewValue0(ssa.OpConstInterface, t)
+ case t.IsSlice():
+ return s.entryNewValue0(ssa.OpConstSlice, t)
+ case t.IsStruct():
+ n := t.NumFields()
+ v := s.entryNewValue0(ssa.StructMakeOp(t.NumFields()), t)
+ for i := int64(0); i < n; i++ {
+ v.AddArg(s.zeroVal(t.FieldType(i).(*Type)))
+ }
+ return v
+ }
+ s.Unimplementedf("zero for type %v not implemented", t)
+ return nil
+}
+
+type callKind int8
+
+const (
+ callNormal callKind = iota
+ callDefer
+ callGo
+)
+
+// call emits code for the function call n using the specified call kind
+// and returns the address of the return value (or nil if none).
+func (s *state) call(n *Node, k callKind) *ssa.Value {
+ var sym *Sym // target symbol (if static)
+ var closure *ssa.Value // ptr to closure to run (if dynamic)
+ var codeptr *ssa.Value // ptr to target code (if dynamic)
+ var rcvr *ssa.Value // receiver to set
+ fn := n.Left
+ switch n.Op {
+ case OCALLFUNC:
+ if k == callNormal && fn.Op == ONAME && fn.Class == PFUNC {
+ sym = fn.Sym
+ break
+ }
+ closure = s.expr(fn)
+ case OCALLMETH:
+ if fn.Op != ODOTMETH {
+ Fatalf("OCALLMETH: n.Left not an ODOTMETH: %v", fn)
+ }
+ if fn.Right.Op != ONAME {
+ Fatalf("OCALLMETH: n.Left.Right not a ONAME: %v", fn.Right)
+ }
+ if k == callNormal {
+ sym = fn.Right.Sym
+ break
+ }
+ n2 := *fn.Right
+ n2.Class = PFUNC
+ closure = s.expr(&n2)
+ // Note: receiver is already assigned in n.List, so we don't
+ // want to set it here.
+ case OCALLINTER:
+ if fn.Op != ODOTINTER {
+ Fatalf("OCALLINTER: n.Left not an ODOTINTER: %v", Oconv(int(fn.Op), 0))
+ }
+ i := s.expr(fn.Left)
+ itab := s.newValue1(ssa.OpITab, Types[TUINTPTR], i)
+ itabidx := fn.Xoffset + 3*int64(Widthptr) + 8 // offset of fun field in runtime.itab
+ itab = s.newValue1I(ssa.OpOffPtr, Types[TUINTPTR], itabidx, itab)
+ if k == callNormal {
+ codeptr = s.newValue2(ssa.OpLoad, Types[TUINTPTR], itab, s.mem())
+ } else {
+ closure = itab
+ }
+ rcvr = s.newValue1(ssa.OpIData, Types[TUINTPTR], i)
+ }
+ dowidth(fn.Type)
+ stksize := fn.Type.Argwid // includes receiver
+
+ // Run all argument assignments. The arg slots have already
+ // been offset by the appropriate amount (+2*widthptr for go/defer,
+ // +widthptr for interface calls).
+ // For OCALLMETH, the receiver is set in these statements.
+ s.stmtList(n.List)
+
+ // Set receiver (for interface calls)
+ if rcvr != nil {
+ argStart := Ctxt.FixedFrameSize()
+ if k != callNormal {
+ argStart += int64(2 * Widthptr)
+ }
+ addr := s.entryNewValue1I(ssa.OpOffPtr, Types[TUINTPTR], argStart, s.sp)
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, int64(Widthptr), addr, rcvr, s.mem())
+ }
+
+ // Defer/go args
+ if k != callNormal {
+ // Write argsize and closure (args to Newproc/Deferproc).
+ argsize := s.constInt32(Types[TUINT32], int32(stksize))
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, 4, s.sp, argsize, s.mem())
+ addr := s.entryNewValue1I(ssa.OpOffPtr, Ptrto(Types[TUINTPTR]), int64(Widthptr), s.sp)
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, int64(Widthptr), addr, closure, s.mem())
+ stksize += 2 * int64(Widthptr)
+ }
+
+ // call target
+ bNext := s.f.NewBlock(ssa.BlockPlain)
+ var call *ssa.Value
+ switch {
+ case k == callDefer:
+ call = s.newValue1(ssa.OpDeferCall, ssa.TypeMem, s.mem())
+ case k == callGo:
+ call = s.newValue1(ssa.OpGoCall, ssa.TypeMem, s.mem())
+ case closure != nil:
+ codeptr = s.newValue2(ssa.OpLoad, Types[TUINTPTR], closure, s.mem())
+ call = s.newValue3(ssa.OpClosureCall, ssa.TypeMem, codeptr, closure, s.mem())
+ case codeptr != nil:
+ call = s.newValue2(ssa.OpInterCall, ssa.TypeMem, codeptr, s.mem())
+ case sym != nil:
+ call = s.newValue1A(ssa.OpStaticCall, ssa.TypeMem, sym, s.mem())
+ default:
+ Fatalf("bad call type %s %v", opnames[n.Op], n)
+ }
+ call.AuxInt = stksize // Call operations carry the argsize of the callee along with them
+
+ // Finish call block
+ s.vars[&memVar] = call
+ b := s.endBlock()
+ b.Kind = ssa.BlockCall
+ b.Control = call
+ b.AddEdgeTo(bNext)
+
+ // Start exit block, find address of result.
+ s.startBlock(bNext)
+ var titer Iter
+ fp := Structfirst(&titer, Getoutarg(n.Left.Type))
+ if fp == nil || k != callNormal {
+ // call has no return value. Continue with the next statement.
+ return nil
+ }
+ return s.entryNewValue1I(ssa.OpOffPtr, Ptrto(fp.Type), fp.Width, s.sp)
+}
+
+// etypesign returns the signedness of e, for integer/pointer etypes.
+// -1 means signed, +1 means unsigned, 0 means non-integer/non-pointer.
+func etypesign(e EType) int8 {
+ switch e {
+ case TINT8, TINT16, TINT32, TINT64, TINT:
+ return -1
+ case TUINT8, TUINT16, TUINT32, TUINT64, TUINT, TUINTPTR, TUNSAFEPTR:
+ return +1
+ }
+ return 0
+}
+
+// lookupSymbol returns the symbol (Extern, Arg or Auto) used for a particular node.
+// This improves the effectiveness of cse by using the same Aux values for the
+// same symbols.
+func (s *state) lookupSymbol(n *Node, sym interface{}) interface{} {
+ switch sym.(type) {
+ default:
+		s.Fatalf("sym %v is of unknown type %T", sym, sym)
+ case *ssa.ExternSymbol, *ssa.ArgSymbol, *ssa.AutoSymbol:
+ // these are the only valid types
+ }
+
+ if lsym, ok := s.varsyms[n]; ok {
+ return lsym
+ } else {
+ s.varsyms[n] = sym
+ return sym
+ }
+}
+
+// addr converts the address of the expression n to SSA, adds it to s and returns the SSA result.
+// The value that the returned Value represents is guaranteed to be non-nil.
+// If bounded is true then this address does not require a nil check for its operand
+// even if that would otherwise be implied.
+func (s *state) addr(n *Node, bounded bool) *ssa.Value {
+ t := Ptrto(n.Type)
+ switch n.Op {
+ case ONAME:
+ switch n.Class {
+ case PEXTERN:
+ // global variable
+ aux := s.lookupSymbol(n, &ssa.ExternSymbol{n.Type, n.Sym})
+ v := s.entryNewValue1A(ssa.OpAddr, t, aux, s.sb)
+ // TODO: Make OpAddr use AuxInt as well as Aux.
+ if n.Xoffset != 0 {
+ v = s.entryNewValue1I(ssa.OpOffPtr, v.Type, n.Xoffset, v)
+ }
+ return v
+ case PPARAM:
+ // parameter slot
+ v := s.decladdrs[n]
+ if v != nil {
+ return v
+ }
+ if n.String() == ".fp" {
+ // Special arg that points to the frame pointer.
+ // (Used by the race detector, others?)
+ aux := s.lookupSymbol(n, &ssa.ArgSymbol{Typ: n.Type, Node: n})
+ return s.entryNewValue1A(ssa.OpAddr, t, aux, s.sp)
+ }
+ s.Fatalf("addr of undeclared ONAME %v. declared: %v", n, s.decladdrs)
+ return nil
+ case PAUTO:
+ // We need to regenerate the address of autos
+ // at every use. This prevents LEA instructions
+ // from occurring before the corresponding VarDef
+ // op and confusing the liveness analysis into thinking
+ // the variable is live at function entry.
+ // TODO: I'm not sure if this really works or we're just
+ // getting lucky. We might need a real dependency edge
+ // between vardef and addr ops.
+ aux := &ssa.AutoSymbol{Typ: n.Type, Node: n}
+ return s.newValue1A(ssa.OpAddr, t, aux, s.sp)
+ case PPARAMOUT: // Same as PAUTO -- cannot generate LEA early.
+ // ensure that we reuse symbols for out parameters so
+ // that cse works on their addresses
+ aux := s.lookupSymbol(n, &ssa.ArgSymbol{Typ: n.Type, Node: n})
+ return s.newValue1A(ssa.OpAddr, t, aux, s.sp)
+ case PAUTO | PHEAP, PPARAM | PHEAP, PPARAMOUT | PHEAP, PPARAMREF:
+ return s.expr(n.Name.Heapaddr)
+ default:
+ s.Unimplementedf("variable address class %v not implemented", n.Class)
+ return nil
+ }
+ case OINDREG:
+ // indirect off a register
+ // used for storing/loading arguments/returns to/from callees
+ if int(n.Reg) != Thearch.REGSP {
+ s.Unimplementedf("OINDREG of non-SP register %s in addr: %v", obj.Rconv(int(n.Reg)), n)
+ return nil
+ }
+ return s.entryNewValue1I(ssa.OpOffPtr, t, n.Xoffset, s.sp)
+ case OINDEX:
+ if n.Left.Type.IsSlice() {
+ a := s.expr(n.Left)
+ i := s.expr(n.Right)
+ i = s.extendIndex(i)
+ len := s.newValue1(ssa.OpSliceLen, Types[TINT], a)
+ if !n.Bounded {
+ s.boundsCheck(i, len)
+ }
+ p := s.newValue1(ssa.OpSlicePtr, t, a)
+ return s.newValue2(ssa.OpPtrIndex, t, p, i)
+ } else { // array
+ a := s.addr(n.Left, bounded)
+ i := s.expr(n.Right)
+ i = s.extendIndex(i)
+ len := s.constInt(Types[TINT], n.Left.Type.Bound)
+ if !n.Bounded {
+ s.boundsCheck(i, len)
+ }
+ return s.newValue2(ssa.OpPtrIndex, Ptrto(n.Left.Type.Type), a, i)
+ }
+ case OIND:
+ p := s.expr(n.Left)
+ if !bounded {
+ s.nilCheck(p)
+ }
+ return p
+ case ODOT:
+ p := s.addr(n.Left, bounded)
+ return s.newValue2(ssa.OpAddPtr, t, p, s.constInt(Types[TINT], n.Xoffset))
+ case ODOTPTR:
+ p := s.expr(n.Left)
+ if !bounded {
+ s.nilCheck(p)
+ }
+ return s.newValue2(ssa.OpAddPtr, t, p, s.constInt(Types[TINT], n.Xoffset))
+ case OCLOSUREVAR:
+ return s.newValue2(ssa.OpAddPtr, t,
+ s.entryNewValue0(ssa.OpGetClosurePtr, Ptrto(Types[TUINT8])),
+ s.constInt(Types[TINT], n.Xoffset))
+ case OPARAM:
+ p := n.Left
+ if p.Op != ONAME || !(p.Class == PPARAM|PHEAP || p.Class == PPARAMOUT|PHEAP) {
+ s.Fatalf("OPARAM not of ONAME,{PPARAM,PPARAMOUT}|PHEAP, instead %s", nodedump(p, 0))
+ }
+
+ // Recover original offset to address passed-in param value.
+ original_p := *p
+ original_p.Xoffset = n.Xoffset
+ aux := &ssa.ArgSymbol{Typ: n.Type, Node: &original_p}
+ return s.entryNewValue1A(ssa.OpAddr, t, aux, s.sp)
+ case OCONVNOP:
+ addr := s.addr(n.Left, bounded)
+ return s.newValue1(ssa.OpCopy, t, addr) // ensure that addr has the right type
+ case OCALLFUNC, OCALLINTER, OCALLMETH:
+ return s.call(n, callNormal)
+
+ default:
+ s.Unimplementedf("unhandled addr %v", Oconv(int(n.Op), 0))
+ return nil
+ }
+}
+
+// canSSA reports whether n is SSA-able.
+// n must be an ONAME (or an ODOT sequence with an ONAME base).
+func (s *state) canSSA(n *Node) bool {
+ for n.Op == ODOT {
+ n = n.Left
+ }
+ if n.Op != ONAME {
+ return false
+ }
+ if n.Addrtaken {
+ return false
+ }
+ if n.Class&PHEAP != 0 {
+ return false
+ }
+ switch n.Class {
+ case PEXTERN, PPARAMREF:
+ // TODO: maybe treat PPARAMREF with an Arg-like op to read from closure?
+ return false
+ case PPARAMOUT:
+ if hasdefer {
+ // TODO: handle this case? Named return values must be
+ // in memory so that the deferred function can see them.
+ // Maybe do: if !strings.HasPrefix(n.String(), "~") { return false }
+ return false
+ }
+ if s.cgoUnsafeArgs {
+ // Cgo effectively takes the address of all result args,
+ // but the compiler can't see that.
+ return false
+ }
+ }
+ if n.Class == PPARAM && n.String() == ".this" {
+ // wrappers generated by genwrapper need to update
+ // the .this pointer in place.
+		// TODO: treat as a PPARAMOUT?
+ return false
+ }
+ return canSSAType(n.Type)
+ // TODO: try to make more variables SSAable?
+}
+
+// canSSAType reports whether variables of type t are SSA-able.
+func canSSAType(t *Type) bool {
+ dowidth(t)
+ if t.Width > int64(4*Widthptr) {
+ // 4*Widthptr is an arbitrary constant. We want it
+ // to be at least 3*Widthptr so slices can be registerized.
+ // Too big and we'll introduce too much register pressure.
+ return false
+ }
+ switch t.Etype {
+ case TARRAY:
+ if Isslice(t) {
+ return true
+ }
+ // We can't do arrays because dynamic indexing is
+ // not supported on SSA variables.
+ // TODO: maybe allow if length is <=1? All indexes
+ // are constant? Might be good for the arrays
+ // introduced by the compiler for variadic functions.
+ return false
+ case TSTRUCT:
+ if countfield(t) > ssa.MaxStruct {
+ return false
+ }
+ for t1 := t.Type; t1 != nil; t1 = t1.Down {
+ if !canSSAType(t1.Type) {
+ return false
+ }
+ }
+ return true
+ default:
+ return true
+ }
+}
+
+// nilCheck generates nil pointer checking code.
+// Starts a new block on return, unless nil checks are disabled.
+// Used only for automatically inserted nil checks,
+// not for user code like 'x != nil'.
+func (s *state) nilCheck(ptr *ssa.Value) {
+ if Disable_checknil != 0 {
+ return
+ }
+ chk := s.newValue2(ssa.OpNilCheck, ssa.TypeVoid, ptr, s.mem())
+ b := s.endBlock()
+ b.Kind = ssa.BlockCheck
+ b.Control = chk
+ bNext := s.f.NewBlock(ssa.BlockPlain)
+ b.AddEdgeTo(bNext)
+ s.startBlock(bNext)
+}
+
+// boundsCheck generates bounds checking code. Checks if 0 <= idx < len, branches to exit if not.
+// Starts a new block on return.
+func (s *state) boundsCheck(idx, len *ssa.Value) {
+ if Debug['B'] != 0 {
+ return
+ }
+ // TODO: convert index to full width?
+ // TODO: if index is 64-bit and we're compiling to 32-bit, check that high 32 bits are zero.
+
+ // bounds check
+ cmp := s.newValue2(ssa.OpIsInBounds, Types[TBOOL], idx, len)
+ s.check(cmp, Panicindex)
+}
+
+// sliceBoundsCheck generates slice bounds checking code. Checks if 0 <= idx <= len, branches to exit if not.
+// Starts a new block on return.
+func (s *state) sliceBoundsCheck(idx, len *ssa.Value) {
+ if Debug['B'] != 0 {
+ return
+ }
+ // TODO: convert index to full width?
+ // TODO: if index is 64-bit and we're compiling to 32-bit, check that high 32 bits are zero.
+
+ // bounds check
+ cmp := s.newValue2(ssa.OpIsSliceInBounds, Types[TBOOL], idx, len)
+ s.check(cmp, panicslice)
+}
+
+// If cmp (a bool) is false, panic by calling the given function.
+func (s *state) check(cmp *ssa.Value, fn *Node) {
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Control = cmp
+ b.Likely = ssa.BranchLikely
+ bNext := s.f.NewBlock(ssa.BlockPlain)
+ line := s.peekLine()
+ bPanic := s.panics[funcLine{fn, line}]
+ if bPanic == nil {
+ bPanic = s.f.NewBlock(ssa.BlockPlain)
+ s.panics[funcLine{fn, line}] = bPanic
+ s.startBlock(bPanic)
+ // The panic call takes/returns memory to ensure that the right
+ // memory state is observed if the panic happens.
+ s.rtcall(fn, false, nil)
+ }
+ b.AddEdgeTo(bNext)
+ b.AddEdgeTo(bPanic)
+ s.startBlock(bNext)
+}
+
+// rtcall issues a call to the given runtime function fn with the listed args.
+// Returns a slice of results of the given result types.
+// The call is added to the end of the current block.
+// If returns is false, the block is marked as an exit block.
+// If returns is true, the block is marked as a call block. A new block
+// is started to load the return values.
+func (s *state) rtcall(fn *Node, returns bool, results []*Type, args ...*ssa.Value) []*ssa.Value {
+ // Write args to the stack
+ var off int64 // TODO: arch-dependent starting offset?
+ for _, arg := range args {
+ t := arg.Type
+ off = Rnd(off, t.Alignment())
+ ptr := s.sp
+ if off != 0 {
+ ptr = s.newValue1I(ssa.OpOffPtr, Types[TUINTPTR], off, s.sp)
+ }
+ size := t.Size()
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, size, ptr, arg, s.mem())
+ off += size
+ }
+ off = Rnd(off, int64(Widthptr))
+
+ // Issue call
+ call := s.newValue1A(ssa.OpStaticCall, ssa.TypeMem, fn.Sym, s.mem())
+ s.vars[&memVar] = call
+
+ // Finish block
+ b := s.endBlock()
+ if !returns {
+ b.Kind = ssa.BlockExit
+ b.Control = call
+ call.AuxInt = off
+ if len(results) > 0 {
+ Fatalf("panic call can't have results")
+ }
+ return nil
+ }
+ b.Kind = ssa.BlockCall
+ b.Control = call
+ bNext := s.f.NewBlock(ssa.BlockPlain)
+ b.AddEdgeTo(bNext)
+ s.startBlock(bNext)
+
+ // Load results
+ res := make([]*ssa.Value, len(results))
+ for i, t := range results {
+ off = Rnd(off, t.Alignment())
+ ptr := s.sp
+ if off != 0 {
+ ptr = s.newValue1I(ssa.OpOffPtr, Types[TUINTPTR], off, s.sp)
+ }
+ res[i] = s.newValue2(ssa.OpLoad, t, ptr, s.mem())
+ off += t.Size()
+ }
+ off = Rnd(off, int64(Widthptr))
+
+ // Remember how much callee stack space we needed.
+ call.AuxInt = off
+
+ return res
+}
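The offset bookkeeping in rtcall can be sketched outside the compiler. Below, `rnd` is a stand-in for gc's `Rnd` helper (power-of-two alignment assumed), and each argument's alignment is assumed to equal its size, as it does for basic types; `packArgs` is a hypothetical name, not compiler code.

```go
package main

import "fmt"

// rnd rounds off up to a multiple of align (a power of two),
// the same computation the compiler's Rnd helper performs.
func rnd(off, align int64) int64 {
	return (off + align - 1) &^ (align - 1)
}

// packArgs lays out argument sizes at aligned stack offsets, the way
// rtcall walks its args slice, and returns the offsets plus the total
// size rounded up to 8-byte (pointer) alignment.
func packArgs(sizes []int64) (offsets []int64, total int64) {
	var off int64
	for _, size := range sizes {
		off = rnd(off, size)
		offsets = append(offsets, off)
		off += size
	}
	return offsets, rnd(off, 8)
}

func main() {
	// An int8, an int64, and an int32 argument.
	offs, total := packArgs([]int64{1, 8, 4})
	fmt.Println(offs, total) // [0 8 16] 24
}
```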
+
+// insertWBmove inserts the assignment *left = *right including a write barrier.
+// t is the type being assigned.
+func (s *state) insertWBmove(t *Type, left, right *ssa.Value, line int32) {
+ // if writeBarrier.enabled {
+ // typedmemmove(&t, left, right)
+ // } else {
+ // *left = *right
+ // }
+ bThen := s.f.NewBlock(ssa.BlockPlain)
+ bElse := s.f.NewBlock(ssa.BlockPlain)
+ bEnd := s.f.NewBlock(ssa.BlockPlain)
+
+ aux := &ssa.ExternSymbol{Types[TBOOL], syslook("writeBarrier", 0).Sym}
+ flagaddr := s.newValue1A(ssa.OpAddr, Ptrto(Types[TUINT32]), aux, s.sb)
+ // TODO: select the .enabled field. It is currently first, so not needed for now.
+ // Load word, test byte, avoiding partial register write from load byte.
+ flag := s.newValue2(ssa.OpLoad, Types[TUINT32], flagaddr, s.mem())
+ flag = s.newValue1(ssa.OpTrunc32to8, Types[TBOOL], flag)
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Likely = ssa.BranchUnlikely
+ b.Control = flag
+ b.AddEdgeTo(bThen)
+ b.AddEdgeTo(bElse)
+
+ s.startBlock(bThen)
+ taddr := s.newValue1A(ssa.OpAddr, Types[TUINTPTR], &ssa.ExternSymbol{Types[TUINTPTR], typenamesym(t)}, s.sb)
+ s.rtcall(typedmemmove, true, nil, taddr, left, right)
+ s.endBlock().AddEdgeTo(bEnd)
+
+ s.startBlock(bElse)
+ s.vars[&memVar] = s.newValue3I(ssa.OpMove, ssa.TypeMem, t.Size(), left, right, s.mem())
+ s.endBlock().AddEdgeTo(bEnd)
+
+ s.startBlock(bEnd)
+
+ if Debug_wb > 0 {
+ Warnl(int(line), "write barrier")
+ }
+}
+
+// insertWBstore inserts the assignment *left = right including a write barrier.
+// t is the type being assigned.
+func (s *state) insertWBstore(t *Type, left, right *ssa.Value, line int32) {
+ // store scalar fields
+ // if writeBarrier.enabled {
+ // writebarrierptr for pointer fields
+ // } else {
+ // store pointer fields
+ // }
+
+ s.storeTypeScalars(t, left, right)
+
+ bThen := s.f.NewBlock(ssa.BlockPlain)
+ bElse := s.f.NewBlock(ssa.BlockPlain)
+ bEnd := s.f.NewBlock(ssa.BlockPlain)
+
+ aux := &ssa.ExternSymbol{Types[TBOOL], syslook("writeBarrier", 0).Sym}
+ flagaddr := s.newValue1A(ssa.OpAddr, Ptrto(Types[TUINT32]), aux, s.sb)
+ // TODO: select the .enabled field. It is currently first, so not needed for now.
+ // Load word, test byte, avoiding partial register write from load byte.
+ flag := s.newValue2(ssa.OpLoad, Types[TUINT32], flagaddr, s.mem())
+ flag = s.newValue1(ssa.OpTrunc32to8, Types[TBOOL], flag)
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Likely = ssa.BranchUnlikely
+ b.Control = flag
+ b.AddEdgeTo(bThen)
+ b.AddEdgeTo(bElse)
+
+ // Issue write barriers for pointer writes.
+ s.startBlock(bThen)
+ s.storeTypePtrsWB(t, left, right)
+ s.endBlock().AddEdgeTo(bEnd)
+
+ // Issue regular stores for pointer writes.
+ s.startBlock(bElse)
+ s.storeTypePtrs(t, left, right)
+ s.endBlock().AddEdgeTo(bEnd)
+
+ s.startBlock(bEnd)
+
+ if Debug_wb > 0 {
+ Warnl(int(line), "write barrier")
+ }
+}
+
+// do *left = right for all scalar (non-pointer) parts of t.
+func (s *state) storeTypeScalars(t *Type, left, right *ssa.Value) {
+ switch {
+ case t.IsBoolean() || t.IsInteger() || t.IsFloat() || t.IsComplex():
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, t.Size(), left, right, s.mem())
+ case t.IsPtr() || t.IsMap() || t.IsChan():
+ // no scalar fields.
+ case t.IsString():
+ len := s.newValue1(ssa.OpStringLen, Types[TINT], right)
+ lenAddr := s.newValue1I(ssa.OpOffPtr, Ptrto(Types[TINT]), s.config.IntSize, left)
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, s.config.IntSize, lenAddr, len, s.mem())
+ case t.IsSlice():
+ len := s.newValue1(ssa.OpSliceLen, Types[TINT], right)
+ cap := s.newValue1(ssa.OpSliceCap, Types[TINT], right)
+ lenAddr := s.newValue1I(ssa.OpOffPtr, Ptrto(Types[TINT]), s.config.IntSize, left)
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, s.config.IntSize, lenAddr, len, s.mem())
+ capAddr := s.newValue1I(ssa.OpOffPtr, Ptrto(Types[TINT]), 2*s.config.IntSize, left)
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, s.config.IntSize, capAddr, cap, s.mem())
+ case t.IsInterface():
+ // itab field doesn't need a write barrier (even though it is a pointer).
+ itab := s.newValue1(ssa.OpITab, Ptrto(Types[TUINT8]), right)
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, s.config.PtrSize, left, itab, s.mem())
+ case t.IsStruct():
+ n := t.NumFields()
+ for i := int64(0); i < n; i++ {
+ ft := t.FieldType(i)
+ addr := s.newValue1I(ssa.OpOffPtr, ft.PtrTo(), t.FieldOff(i), left)
+ val := s.newValue1I(ssa.OpStructSelect, ft, i, right)
+ s.storeTypeScalars(ft.(*Type), addr, val)
+ }
+ default:
+ s.Fatalf("bad write barrier type %s", t)
+ }
+}
+
+// do *left = right for all pointer parts of t.
+func (s *state) storeTypePtrs(t *Type, left, right *ssa.Value) {
+ switch {
+ case t.IsPtr() || t.IsMap() || t.IsChan():
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, s.config.PtrSize, left, right, s.mem())
+ case t.IsString():
+ ptr := s.newValue1(ssa.OpStringPtr, Ptrto(Types[TUINT8]), right)
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, s.config.PtrSize, left, ptr, s.mem())
+ case t.IsSlice():
+ ptr := s.newValue1(ssa.OpSlicePtr, Ptrto(Types[TUINT8]), right)
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, s.config.PtrSize, left, ptr, s.mem())
+ case t.IsInterface():
+ // itab field is treated as a scalar.
+ idata := s.newValue1(ssa.OpIData, Ptrto(Types[TUINT8]), right)
+ idataAddr := s.newValue1I(ssa.OpOffPtr, Ptrto(Types[TUINT8]), s.config.PtrSize, left)
+ s.vars[&memVar] = s.newValue3I(ssa.OpStore, ssa.TypeMem, s.config.PtrSize, idataAddr, idata, s.mem())
+ case t.IsStruct():
+ n := t.NumFields()
+ for i := int64(0); i < n; i++ {
+ ft := t.FieldType(i)
+ if !haspointers(ft.(*Type)) {
+ continue
+ }
+ addr := s.newValue1I(ssa.OpOffPtr, ft.PtrTo(), t.FieldOff(i), left)
+ val := s.newValue1I(ssa.OpStructSelect, ft, i, right)
+ s.storeTypePtrs(ft.(*Type), addr, val)
+ }
+ default:
+ s.Fatalf("bad write barrier type %s", t)
+ }
+}
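The word-by-word stores above rely on the standard slice header layout: data pointer at offset 0, then len and cap, each one machine word. A small unsafe sketch (ordinary Go, not compiler code; `sliceWords` is a hypothetical helper) confirms those offsets:

```go
package main

import (
	"fmt"
	"unsafe"
)

// sliceWords reads the three words of a slice header directly, at the
// offsets the store helpers hard-code: pointer at 0, len at one word,
// cap at two words.
func sliceWords(s []byte) (ptr uintptr, length, capacity int) {
	p := unsafe.Pointer(&s)
	w := unsafe.Sizeof(uintptr(0))
	ptr = *(*uintptr)(p)
	length = *(*int)(unsafe.Pointer(uintptr(p) + w))
	capacity = *(*int)(unsafe.Pointer(uintptr(p) + 2*w))
	return
}

func main() {
	s := make([]byte, 3, 8)
	_, l, c := sliceWords(s)
	fmt.Println(l, c) // 3 8
}
```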
+
+// do *left = right with a write barrier for all pointer parts of t.
+func (s *state) storeTypePtrsWB(t *Type, left, right *ssa.Value) {
+ switch {
+ case t.IsPtr() || t.IsMap() || t.IsChan():
+ s.rtcall(writebarrierptr, true, nil, left, right)
+ case t.IsString():
+ ptr := s.newValue1(ssa.OpStringPtr, Ptrto(Types[TUINT8]), right)
+ s.rtcall(writebarrierptr, true, nil, left, ptr)
+ case t.IsSlice():
+ ptr := s.newValue1(ssa.OpSlicePtr, Ptrto(Types[TUINT8]), right)
+ s.rtcall(writebarrierptr, true, nil, left, ptr)
+ case t.IsInterface():
+ idata := s.newValue1(ssa.OpIData, Ptrto(Types[TUINT8]), right)
+ idataAddr := s.newValue1I(ssa.OpOffPtr, Ptrto(Types[TUINT8]), s.config.PtrSize, left)
+ s.rtcall(writebarrierptr, true, nil, idataAddr, idata)
+ case t.IsStruct():
+ n := t.NumFields()
+ for i := int64(0); i < n; i++ {
+ ft := t.FieldType(i)
+ if !haspointers(ft.(*Type)) {
+ continue
+ }
+ addr := s.newValue1I(ssa.OpOffPtr, ft.PtrTo(), t.FieldOff(i), left)
+ val := s.newValue1I(ssa.OpStructSelect, ft, i, right)
+ s.storeTypePtrsWB(ft.(*Type), addr, val)
+ }
+ default:
+ s.Fatalf("bad write barrier type %s", t)
+ }
+}
+
+// slice computes the slice v[i:j:k] and returns ptr, len, and cap of result.
+// i, j, k may be nil, in which case they are set to their default values.
+// t is a slice, ptr to array, or string type.
+func (s *state) slice(t *Type, v, i, j, k *ssa.Value) (p, l, c *ssa.Value) {
+ var elemtype *Type
+ var ptrtype *Type
+ var ptr *ssa.Value
+ var len *ssa.Value
+ var cap *ssa.Value
+ zero := s.constInt(Types[TINT], 0)
+ switch {
+ case t.IsSlice():
+ elemtype = t.Type
+ ptrtype = Ptrto(elemtype)
+ ptr = s.newValue1(ssa.OpSlicePtr, ptrtype, v)
+ len = s.newValue1(ssa.OpSliceLen, Types[TINT], v)
+ cap = s.newValue1(ssa.OpSliceCap, Types[TINT], v)
+ case t.IsString():
+ elemtype = Types[TUINT8]
+ ptrtype = Ptrto(elemtype)
+ ptr = s.newValue1(ssa.OpStringPtr, ptrtype, v)
+ len = s.newValue1(ssa.OpStringLen, Types[TINT], v)
+ cap = len
+ case t.IsPtr():
+ if !t.Type.IsArray() {
+ s.Fatalf("bad ptr to array in slice %v\n", t)
+ }
+ elemtype = t.Type.Type
+ ptrtype = Ptrto(elemtype)
+ s.nilCheck(v)
+ ptr = v
+ len = s.constInt(Types[TINT], t.Type.Bound)
+ cap = len
+ default:
+ s.Fatalf("bad type in slice %v\n", t)
+ }
+
+ // Set default values
+ if i == nil {
+ i = zero
+ }
+ if j == nil {
+ j = len
+ }
+ if k == nil {
+ k = cap
+ }
+
+ // Panic if slice indices are not in bounds.
+ s.sliceBoundsCheck(i, j)
+ if j != k {
+ s.sliceBoundsCheck(j, k)
+ }
+ if k != cap {
+ s.sliceBoundsCheck(k, cap)
+ }
+
+ // Generate the following code assuming that indexes are in bounds.
+ // The conditional is to make sure that we don't generate a slice
+ // that points to the next object in memory.
+ // rlen = (Sub64 j i)
+ // rcap = (Sub64 k i)
+ // p = ptr
+ // if rcap != 0 {
+ // p = (AddPtr ptr (Mul64 i (Const64 size)))
+ // }
+ // result = (SliceMake p rlen rcap)
+ subOp := s.ssaOp(OSUB, Types[TINT])
+ neqOp := s.ssaOp(ONE, Types[TINT])
+ mulOp := s.ssaOp(OMUL, Types[TINT])
+ rlen := s.newValue2(subOp, Types[TINT], j, i)
+ var rcap *ssa.Value
+ switch {
+ case t.IsString():
+ // Capacity of the result is unimportant. However, we use
+ // rcap to test if we've generated a zero-length slice.
+ // Use the string's length for that.
+ rcap = rlen
+ case j == k:
+ rcap = rlen
+ default:
+ rcap = s.newValue2(subOp, Types[TINT], k, i)
+ }
+
+ s.vars[&ptrVar] = ptr
+
+ // Generate code to test the resulting slice length.
+ cmp := s.newValue2(neqOp, Types[TBOOL], rcap, s.constInt(Types[TINT], 0))
+
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Likely = ssa.BranchLikely
+ b.Control = cmp
+
+ // Generate code for non-zero length slice case.
+ nz := s.f.NewBlock(ssa.BlockPlain)
+ b.AddEdgeTo(nz)
+ s.startBlock(nz)
+ var inc *ssa.Value
+ if elemtype.Width == 1 {
+ inc = i
+ } else {
+ inc = s.newValue2(mulOp, Types[TINT], i, s.constInt(Types[TINT], elemtype.Width))
+ }
+ s.vars[&ptrVar] = s.newValue2(ssa.OpAddPtr, ptrtype, ptr, inc)
+ s.endBlock()
+
+ // All done.
+ merge := s.f.NewBlock(ssa.BlockPlain)
+ b.AddEdgeTo(merge)
+ nz.AddEdgeTo(merge)
+ s.startBlock(merge)
+ rptr := s.variable(&ptrVar, ptrtype)
+ delete(s.vars, &ptrVar)
+ return rptr, rlen, rcap
+}
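The rlen/rcap arithmetic and the rcap != 0 guard generated above can be restated as a plain function. This is a sketch mirroring the generated code, not compiler source; `sliceParts` is a hypothetical name.

```go
package main

import "fmt"

// sliceParts mirrors what the SSA slice helper computes for v[i:j:k]:
// the result length, the result capacity, and the byte offset applied
// to the data pointer. The offset is forced to zero when the result
// capacity is zero, so a zero-length slice cannot end up pointing one
// past the end of its backing array.
func sliceParts(i, j, k, elemSize int64) (rlen, rcap, ptrOff int64) {
	rlen = j - i
	rcap = k - i
	if rcap != 0 {
		ptrOff = i * elemSize
	}
	return
}

func main() {
	// v[2:5:7] over 8-byte elements.
	fmt.Println(sliceParts(2, 5, 7, 8)) // 3 5 16
	// Zero-capacity result: the pointer is left unadjusted.
	fmt.Println(sliceParts(4, 4, 4, 8)) // 0 0 0
}
```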
+
+type u2fcvtTab struct {
+ geq, cvt2F, and, rsh, or, add ssa.Op
+ one func(*state, ssa.Type, int64) *ssa.Value
+}
+
+var u64_f64 u2fcvtTab = u2fcvtTab{
+ geq: ssa.OpGeq64,
+ cvt2F: ssa.OpCvt64to64F,
+ and: ssa.OpAnd64,
+ rsh: ssa.OpRsh64Ux64,
+ or: ssa.OpOr64,
+ add: ssa.OpAdd64F,
+ one: (*state).constInt64,
+}
+
+var u64_f32 u2fcvtTab = u2fcvtTab{
+ geq: ssa.OpGeq64,
+ cvt2F: ssa.OpCvt64to32F,
+ and: ssa.OpAnd64,
+ rsh: ssa.OpRsh64Ux64,
+ or: ssa.OpOr64,
+ add: ssa.OpAdd32F,
+ one: (*state).constInt64,
+}
+
+// Excess generality on a machine with 64-bit integer registers.
+// Not used on AMD64.
+var u32_f32 u2fcvtTab = u2fcvtTab{
+ geq: ssa.OpGeq32,
+ cvt2F: ssa.OpCvt32to32F,
+ and: ssa.OpAnd32,
+ rsh: ssa.OpRsh32Ux32,
+ or: ssa.OpOr32,
+ add: ssa.OpAdd32F,
+ one: func(s *state, t ssa.Type, x int64) *ssa.Value {
+ return s.constInt32(t, int32(x))
+ },
+}
+
+func (s *state) uint64Tofloat64(n *Node, x *ssa.Value, ft, tt *Type) *ssa.Value {
+ return s.uintTofloat(&u64_f64, n, x, ft, tt)
+}
+
+func (s *state) uint64Tofloat32(n *Node, x *ssa.Value, ft, tt *Type) *ssa.Value {
+ return s.uintTofloat(&u64_f32, n, x, ft, tt)
+}
+
+func (s *state) uintTofloat(cvttab *u2fcvtTab, n *Node, x *ssa.Value, ft, tt *Type) *ssa.Value {
+ // if x >= 0 {
+ // result = (floatY) x
+ // } else {
+ // y = uintX(x) ; y = x & 1
+ // z = uintX(x) ; z = z >> 1
+ // z = z | y
+ // result = floatY(z)
+ // result = result + result
+ // }
+ //
+ // Code borrowed from old code generator.
+ // What's going on: large 64-bit "unsigned" looks like
+ // negative number to hardware's integer-to-float
+ // conversion. However, because the mantissa is only
+ // 63 bits, we don't need the LSB, so instead we do an
+ // unsigned right shift (divide by two), convert, and
+ // double. However, before we do that, we need to be
+ // sure that we do not lose a "1" if that made the
+ // difference in the resulting rounding. Therefore, we
+ // preserve it, and OR (not ADD) it back in. The case
+ // that matters is when the eleven discarded bits are
+ // equal to 10000000001; that rounds up, and the 1 cannot
+ // be lost else it would round down if the LSB of the
+ // candidate mantissa is 0.
+ cmp := s.newValue2(cvttab.geq, Types[TBOOL], x, s.zeroVal(ft))
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Control = cmp
+ b.Likely = ssa.BranchLikely
+
+ bThen := s.f.NewBlock(ssa.BlockPlain)
+ bElse := s.f.NewBlock(ssa.BlockPlain)
+ bAfter := s.f.NewBlock(ssa.BlockPlain)
+
+ b.AddEdgeTo(bThen)
+ s.startBlock(bThen)
+ a0 := s.newValue1(cvttab.cvt2F, tt, x)
+ s.vars[n] = a0
+ s.endBlock()
+ bThen.AddEdgeTo(bAfter)
+
+ b.AddEdgeTo(bElse)
+ s.startBlock(bElse)
+ one := cvttab.one(s, ft, 1)
+ y := s.newValue2(cvttab.and, ft, x, one)
+ z := s.newValue2(cvttab.rsh, ft, x, one)
+ z = s.newValue2(cvttab.or, ft, z, y)
+ a := s.newValue1(cvttab.cvt2F, tt, z)
+ a1 := s.newValue2(cvttab.add, tt, a, a)
+ s.vars[n] = a1
+ s.endBlock()
+ bElse.AddEdgeTo(bAfter)
+
+ s.startBlock(bAfter)
+ return s.variable(n, n.Type)
+}
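The shift-OR-double trick described in the comment above can be exercised in ordinary Go. This is a standalone sketch of the generated branch structure, not compiler code; `cvtU64ToF64` is a hypothetical name.

```go
package main

import "fmt"

// cvtU64ToF64 converts a uint64 to float64 using only a signed
// hardware-style conversion, following the structure uintTofloat
// generates: halve with an unsigned shift, OR the discarded low bit
// back in so rounding is unaffected, convert, then double.
func cvtU64ToF64(x uint64) float64 {
	if int64(x) >= 0 {
		return float64(int64(x)) // fits in a signed conversion
	}
	y := x & 1  // preserve the bit the shift would discard
	z := x >> 1 // now z fits in 63 bits
	z |= y      // OR (not ADD) it back in for correct rounding
	return float64(int64(z)) * 2
}

func main() {
	for _, x := range []uint64{0, 1, 1<<63 + 1, 1<<64 - 1} {
		fmt.Println(cvtU64ToF64(x) == float64(x)) // true each time
	}
}
```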
+
+// referenceTypeBuiltin generates code for the len/cap builtins for maps and channels.
+func (s *state) referenceTypeBuiltin(n *Node, x *ssa.Value) *ssa.Value {
+ if !n.Left.Type.IsMap() && !n.Left.Type.IsChan() {
+ s.Fatalf("node must be a map or a channel")
+ }
+ // if n == nil {
+ // return 0
+ // } else {
+ // // len
+ // return *((*int)n)
+ // // cap
+ // return *(((*int)n)+1)
+ // }
+ lenType := n.Type
+ nilValue := s.newValue0(ssa.OpConstNil, Types[TUINTPTR])
+ cmp := s.newValue2(ssa.OpEqPtr, Types[TBOOL], x, nilValue)
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Control = cmp
+ b.Likely = ssa.BranchUnlikely
+
+ bThen := s.f.NewBlock(ssa.BlockPlain)
+ bElse := s.f.NewBlock(ssa.BlockPlain)
+ bAfter := s.f.NewBlock(ssa.BlockPlain)
+
+ // length/capacity of a nil map/chan is zero
+ b.AddEdgeTo(bThen)
+ s.startBlock(bThen)
+ s.vars[n] = s.zeroVal(lenType)
+ s.endBlock()
+ bThen.AddEdgeTo(bAfter)
+
+ b.AddEdgeTo(bElse)
+ s.startBlock(bElse)
+ if n.Op == OLEN {
+ // length is stored in the first word for map/chan
+ s.vars[n] = s.newValue2(ssa.OpLoad, lenType, x, s.mem())
+ } else if n.Op == OCAP {
+ // capacity is stored in the second word for chan
+ sw := s.newValue1I(ssa.OpOffPtr, lenType.PtrTo(), lenType.Width, x)
+ s.vars[n] = s.newValue2(ssa.OpLoad, lenType, sw, s.mem())
+ } else {
+ s.Fatalf("op must be OLEN or OCAP")
+ }
+ s.endBlock()
+ bElse.AddEdgeTo(bAfter)
+
+ s.startBlock(bAfter)
+ return s.variable(n, lenType)
+}
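The guarded load emitted above encodes the language rule that len of a nil map or channel, and cap of a nil channel, are zero; plain Go shows the observable behavior:

```go
package main

import "fmt"

func main() {
	// Nil map and channel: the nil branch returns zero.
	var m map[string]int
	var c chan int
	fmt.Println(len(m), len(c), cap(c)) // 0 0 0

	// Non-nil values: counts come from the header words.
	m = map[string]int{"a": 1}
	c = make(chan int, 4)
	fmt.Println(len(m), len(c), cap(c)) // 1 0 4
}
```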
+
+type f2uCvtTab struct {
+ ltf, cvt2U, subf ssa.Op
+ value func(*state, ssa.Type, float64) *ssa.Value
+}
+
+var f32_u64 f2uCvtTab = f2uCvtTab{
+ ltf: ssa.OpLess32F,
+ cvt2U: ssa.OpCvt32Fto64,
+ subf: ssa.OpSub32F,
+ value: (*state).constFloat32,
+}
+
+var f64_u64 f2uCvtTab = f2uCvtTab{
+ ltf: ssa.OpLess64F,
+ cvt2U: ssa.OpCvt64Fto64,
+ subf: ssa.OpSub64F,
+ value: (*state).constFloat64,
+}
+
+func (s *state) float32ToUint64(n *Node, x *ssa.Value, ft, tt *Type) *ssa.Value {
+ return s.floatToUint(&f32_u64, n, x, ft, tt)
+}
+func (s *state) float64ToUint64(n *Node, x *ssa.Value, ft, tt *Type) *ssa.Value {
+ return s.floatToUint(&f64_u64, n, x, ft, tt)
+}
+
+func (s *state) floatToUint(cvttab *f2uCvtTab, n *Node, x *ssa.Value, ft, tt *Type) *ssa.Value {
+ // if x < 9223372036854775808.0 {
+ // result = uintY(x)
+ // } else {
+ // y = x - 9223372036854775808.0
+ // z = uintY(y)
+ // result = z | -9223372036854775808
+ // }
+ twoToThe63 := cvttab.value(s, ft, 9223372036854775808.0)
+ cmp := s.newValue2(cvttab.ltf, Types[TBOOL], x, twoToThe63)
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Control = cmp
+ b.Likely = ssa.BranchLikely
+
+ bThen := s.f.NewBlock(ssa.BlockPlain)
+ bElse := s.f.NewBlock(ssa.BlockPlain)
+ bAfter := s.f.NewBlock(ssa.BlockPlain)
+
+ b.AddEdgeTo(bThen)
+ s.startBlock(bThen)
+ a0 := s.newValue1(cvttab.cvt2U, tt, x)
+ s.vars[n] = a0
+ s.endBlock()
+ bThen.AddEdgeTo(bAfter)
+
+ b.AddEdgeTo(bElse)
+ s.startBlock(bElse)
+ y := s.newValue2(cvttab.subf, ft, x, twoToThe63)
+ y = s.newValue1(cvttab.cvt2U, tt, y)
+ z := s.constInt64(tt, -9223372036854775808)
+ a1 := s.newValue2(ssa.OpOr64, tt, y, z)
+ s.vars[n] = a1
+ s.endBlock()
+ bElse.AddEdgeTo(bAfter)
+
+ s.startBlock(bAfter)
+ return s.variable(n, n.Type)
+}
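The rebase-by-2^63 trick in the comment above can likewise be written out in ordinary Go. A sketch of the generated branch structure (not compiler code; `cvtF64ToU64` is a hypothetical name), valid for inputs in [0, 2^64):

```go
package main

import "fmt"

// cvtF64ToU64 converts a float64 to uint64 using only a signed
// hardware-style conversion, following the structure floatToUint
// generates: values below 2^63 convert directly; larger values are
// rebased by 2^63 and the sign bit is ORed back in afterward.
func cvtF64ToU64(x float64) uint64 {
	const two63 = 9223372036854775808.0 // 2^63
	if x < two63 {
		return uint64(int64(x))
	}
	y := uint64(int64(x - two63))
	return y | (1 << 63)
}

func main() {
	fmt.Println(cvtF64ToU64(3.5))     // 3
	fmt.Println(cvtF64ToU64(1 << 63)) // 9223372036854775808
}
```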
+
+// ifaceType returns the value for the word containing the type.
+// n is the node for the interface expression.
+// v is the corresponding value.
+func (s *state) ifaceType(n *Node, v *ssa.Value) *ssa.Value {
+ byteptr := Ptrto(Types[TUINT8]) // type used in runtime prototypes for runtime type (*byte)
+
+ if isnilinter(n.Type) {
+ // Have *eface. The type is the first word in the struct.
+ return s.newValue1(ssa.OpITab, byteptr, v)
+ }
+
+ // Have *iface.
+ // The first word in the struct is the *itab.
+ // If the *itab is nil, return 0.
+ // Otherwise, the second word in the *itab is the type.
+
+ tab := s.newValue1(ssa.OpITab, byteptr, v)
+ s.vars[&typVar] = tab
+ isnonnil := s.newValue2(ssa.OpNeqPtr, Types[TBOOL], tab, s.entryNewValue0(ssa.OpConstNil, byteptr))
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Control = isnonnil
+ b.Likely = ssa.BranchLikely
+
+ bLoad := s.f.NewBlock(ssa.BlockPlain)
+ bEnd := s.f.NewBlock(ssa.BlockPlain)
+
+ b.AddEdgeTo(bLoad)
+ b.AddEdgeTo(bEnd)
+ bLoad.AddEdgeTo(bEnd)
+
+ s.startBlock(bLoad)
+ off := s.newValue1I(ssa.OpOffPtr, byteptr, int64(Widthptr), tab)
+ s.vars[&typVar] = s.newValue2(ssa.OpLoad, byteptr, off, s.mem())
+ s.endBlock()
+
+ s.startBlock(bEnd)
+ typ := s.variable(&typVar, byteptr)
+ delete(s.vars, &typVar)
+ return typ
+}
+
+// dottype generates SSA for a type assertion node.
+// commaok indicates whether to panic or return a bool.
+// If commaok is false, resok will be nil.
+func (s *state) dottype(n *Node, commaok bool) (res, resok *ssa.Value) {
+ iface := s.expr(n.Left)
+ typ := s.ifaceType(n.Left, iface) // actual concrete type
+ target := s.expr(typename(n.Type)) // target type
+ if !isdirectiface(n.Type) {
+ // walk rewrites ODOTTYPE/OAS2DOTTYPE into runtime calls except for this case.
+ Fatalf("dottype needs a direct iface type %s", n.Type)
+ }
+
+ if Debug_typeassert > 0 {
+ Warnl(int(n.Lineno), "type assertion inlined")
+ }
+
+ // TODO: If we have a nonempty interface and its itab field is nil,
+ // then this test is redundant and ifaceType should just branch directly to bFail.
+ cond := s.newValue2(ssa.OpEqPtr, Types[TBOOL], typ, target)
+ b := s.endBlock()
+ b.Kind = ssa.BlockIf
+ b.Control = cond
+ b.Likely = ssa.BranchLikely
+
+ byteptr := Ptrto(Types[TUINT8])
+
+ bOk := s.f.NewBlock(ssa.BlockPlain)
+ bFail := s.f.NewBlock(ssa.BlockPlain)
+ b.AddEdgeTo(bOk)
+ b.AddEdgeTo(bFail)
+
+ if !commaok {
+ // on failure, panic by calling panicdottype
+ s.startBlock(bFail)
+ taddr := s.newValue1A(ssa.OpAddr, byteptr, &ssa.ExternSymbol{byteptr, typenamesym(n.Left.Type)}, s.sb)
+ s.rtcall(panicdottype, false, nil, typ, target, taddr)
+
+ // on success, return idata field
+ s.startBlock(bOk)
+ return s.newValue1(ssa.OpIData, n.Type, iface), nil
+ }
+
+ // commaok is the more complicated case because we have
+ // a control flow merge point.
+ bEnd := s.f.NewBlock(ssa.BlockPlain)
+
+ // type assertion succeeded
+ s.startBlock(bOk)
+ s.vars[&idataVar] = s.newValue1(ssa.OpIData, n.Type, iface)
+ s.vars[&okVar] = s.constBool(true)
+ s.endBlock()
+ bOk.AddEdgeTo(bEnd)
+
+ // type assertion failed
+ s.startBlock(bFail)
+ s.vars[&idataVar] = s.entryNewValue0(ssa.OpConstNil, byteptr)
+ s.vars[&okVar] = s.constBool(false)
+ s.endBlock()
+ bFail.AddEdgeTo(bEnd)
+
+ // merge point
+ s.startBlock(bEnd)
+ res = s.variable(&idataVar, byteptr)
+ resok = s.variable(&okVar, Types[TBOOL])
+ delete(s.vars, &idataVar)
+ delete(s.vars, &okVar)
+ return res, resok
+}
+
+// checkgoto checks that a goto from the node from to the node to
+// does not jump into a block or jump over variable declarations.
+// It is a copy of checkgoto in the pre-SSA backend,
+// modified only for line number handling.
+// TODO: document how this works and why it is designed the way it is.
+func (s *state) checkgoto(from *Node, to *Node) {
+ if from.Sym == to.Sym {
+ return
+ }
+
+ nf := 0
+ for fs := from.Sym; fs != nil; fs = fs.Link {
+ nf++
+ }
+ nt := 0
+ for fs := to.Sym; fs != nil; fs = fs.Link {
+ nt++
+ }
+ fs := from.Sym
+ for ; nf > nt; nf-- {
+ fs = fs.Link
+ }
+ if fs != to.Sym {
+ // decide what to complain about.
+ // prefer to complain about 'into block' over declarations,
+ // so scan backward to find most recent block or else dcl.
+ var block *Sym
+
+ var dcl *Sym
+ ts := to.Sym
+ for ; nt > nf; nt-- {
+ if ts.Pkg == nil {
+ block = ts
+ } else {
+ dcl = ts
+ }
+ ts = ts.Link
+ }
+
+ for ts != fs {
+ if ts.Pkg == nil {
+ block = ts
+ } else {
+ dcl = ts
+ }
+ ts = ts.Link
+ fs = fs.Link
+ }
+
+ lno := int(from.Left.Lineno)
+ if block != nil {
+ yyerrorl(lno, "goto %v jumps into block starting at %v", from.Left.Sym, Ctxt.Line(int(block.Lastlineno)))
+ } else {
+ yyerrorl(lno, "goto %v jumps over declaration of %v at %v", from.Left.Sym, dcl, Ctxt.Line(int(dcl.Lastlineno)))
+ }
+ }
+}
+
+// variable returns the value of a variable at the current location.
+func (s *state) variable(name *Node, t ssa.Type) *ssa.Value {
+ v := s.vars[name]
+ if v == nil {
+ v = s.newValue0A(ssa.OpFwdRef, t, name)
+ s.fwdRefs = append(s.fwdRefs, v)
+ s.vars[name] = v
+ s.addNamedValue(name, v)
+ }
+ return v
+}
+
+func (s *state) mem() *ssa.Value {
+ return s.variable(&memVar, ssa.TypeMem)
+}
+
+func (s *state) linkForwardReferences() {
+ // Build SSA graph. Each variable on its first use in a basic block
+ // leaves a FwdRef in that block representing the incoming value
+ // of that variable. This function links that ref up with possible definitions,
+ // inserting Phi values as needed. This is essentially the algorithm
+ // described by Braun, Buchwald, Hack, Leißa, Mallon, and Zwinkau:
+ // http://pp.info.uni-karlsruhe.de/uploads/publikationen/braun13cc.pdf
+ // Differences:
+ // - We use FwdRef nodes to postpone phi building until the CFG is
+ // completely built. That way we can avoid the notion of "sealed"
+ // blocks.
+ // - Phi optimization is a separate pass (in ../ssa/phielim.go).
+ for len(s.fwdRefs) > 0 {
+ v := s.fwdRefs[len(s.fwdRefs)-1]
+ s.fwdRefs = s.fwdRefs[:len(s.fwdRefs)-1]
+ s.resolveFwdRef(v)
+ }
+}
+
+// resolveFwdRef modifies v to be the variable's value at the start of its block.
+// v must be a FwdRef op.
+func (s *state) resolveFwdRef(v *ssa.Value) {
+ b := v.Block
+ name := v.Aux.(*Node)
+ v.Aux = nil
+ if b == s.f.Entry {
+ // Live variable at start of function.
+ if s.canSSA(name) {
+ v.Op = ssa.OpArg
+ v.Aux = name
+ return
+ }
+ // Not SSAable. Load it.
+ addr := s.decladdrs[name]
+ if addr == nil {
+ // TODO: closure args reach here.
+ s.Unimplementedf("unhandled closure arg %s at entry to function %s", name, b.Func.Name)
+ }
+ if _, ok := addr.Aux.(*ssa.ArgSymbol); !ok {
+ s.Fatalf("variable live at start of function %s is not an argument %s", b.Func.Name, name)
+ }
+ v.Op = ssa.OpLoad
+ v.AddArgs(addr, s.startmem)
+ return
+ }
+ if len(b.Preds) == 0 {
+ // This block is dead; we have no predecessors and we're not the entry block.
+ // It doesn't matter what we use here as long as it is well-formed.
+ v.Op = ssa.OpUnknown
+ return
+ }
+ // Find variable value on each predecessor.
+ var argstore [4]*ssa.Value
+ args := argstore[:0]
+ for _, p := range b.Preds {
+ args = append(args, s.lookupVarOutgoing(p, v.Type, name, v.Line))
+ }
+
+ // Decide if we need a phi or not. We need a phi if there
+ // are two different args (which are both not v).
+ var w *ssa.Value
+ for _, a := range args {
+ if a == v {
+ continue // self-reference
+ }
+ if a == w {
+ continue // already have this witness
+ }
+ if w != nil {
+ // two witnesses, need a phi value
+ v.Op = ssa.OpPhi
+ v.AddArgs(args...)
+ return
+ }
+ w = a // save witness
+ }
+ if w == nil {
+ s.Fatalf("no witness for reachable phi %s", v)
+ }
+ // One witness. Make v a copy of w.
+ v.Op = ssa.OpCopy
+ v.AddArg(w)
+}
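The witness test in resolveFwdRef — skip self-references, and require a phi only when two distinct other definitions survive — can be isolated as a small function. Plain strings stand in for `*ssa.Value` here; `needPhi` is a hypothetical name, not compiler code.

```go
package main

import "fmt"

// needPhi applies resolveFwdRef's witness test to a list of incoming
// definitions for a forward reference named self. It reports whether
// a phi is required; if not, it returns the single witness the ref
// should become a copy of.
func needPhi(self string, preds []string) (phi bool, witness string) {
	var w string
	for _, a := range preds {
		if a == self || a == w {
			continue // self-reference, or repeat of the witness
		}
		if w != "" {
			return true, "" // two distinct witnesses: need a phi
		}
		w = a
	}
	return false, w
}

func main() {
	phi, w := needPhi("v", []string{"a", "v", "a"})
	fmt.Println(phi, w) // false a
	phi, _ = needPhi("v", []string{"a", "b"})
	fmt.Println(phi) // true
}
```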
+
+// lookupVarOutgoing finds the variable's value at the end of block b.
+func (s *state) lookupVarOutgoing(b *ssa.Block, t ssa.Type, name *Node, line int32) *ssa.Value {
+ m := s.defvars[b.ID]
+ if v, ok := m[name]; ok {
+ return v
+ }
+ // The variable is not defined by b and we haven't
+ // looked it up yet. Generate a FwdRef for the variable and return that.
+ v := b.NewValue0A(line, ssa.OpFwdRef, t, name)
+ s.fwdRefs = append(s.fwdRefs, v)
+ m[name] = v
+ s.addNamedValue(name, v)
+ return v
+}
+
+func (s *state) addNamedValue(n *Node, v *ssa.Value) {
+ if n.Class == Pxxx {
+ // Don't track our dummy nodes (&memVar etc.).
+ return
+ }
+ if strings.HasPrefix(n.Sym.Name, "autotmp_") {
+ // Don't track autotmp_ variables.
+ return
+ }
+ if n.Class == PAUTO && (v.Type.IsString() || v.Type.IsSlice() || v.Type.IsInterface()) {
+ // TODO: can't handle auto compound objects with pointers yet.
+ // The live variable analysis barfs because we don't put VARDEF
+ // pseudos in the right place when we spill to these nodes.
+ return
+ }
+ if n.Class == PAUTO && n.Xoffset != 0 {
+ s.Fatalf("AUTO var with offset %s %d", n, n.Xoffset)
+ }
+ loc := ssa.LocalSlot{N: n, Type: n.Type, Off: 0}
+ values, ok := s.f.NamedValues[loc]
+ if !ok {
+ s.f.Names = append(s.f.Names, loc)
+ }
+ s.f.NamedValues[loc] = append(values, v)
+}
+
+// an unresolved branch
+type branch struct {
+ p *obj.Prog // branch instruction
+ b *ssa.Block // target
+}
+
+type genState struct {
+ // branches remembers all the branch instructions we've seen
+ // and where they would like to go.
+ branches []branch
+
+ // bstart remembers where each block starts (indexed by block ID)
+ bstart []*obj.Prog
+
+ // deferBranches remembers all the defer branches we've seen.
+ deferBranches []*obj.Prog
+
+ // deferTarget remembers the (last) deferreturn call site.
+ deferTarget *obj.Prog
+}
+
+// genssa appends entries to ptxt for each instruction in f.
+// gcargs and gclocals are filled in with pointer maps for the frame.
+func genssa(f *ssa.Func, ptxt *obj.Prog, gcargs, gclocals *Sym) {
+ var s genState
+
+ e := f.Config.Frontend().(*ssaExport)
+ // We're about to emit a bunch of Progs.
+ // Since the only way to get here is to explicitly request it,
+ // just fail on unimplemented instead of trying to unwind our mess.
+ e.mustImplement = true
+
+ // Remember where each block starts.
+ s.bstart = make([]*obj.Prog, f.NumBlocks())
+
+ var valueProgs map[*obj.Prog]*ssa.Value
+ var blockProgs map[*obj.Prog]*ssa.Block
+ const logProgs = true
+ if logProgs {
+ valueProgs = make(map[*obj.Prog]*ssa.Value, f.NumValues())
+ blockProgs = make(map[*obj.Prog]*ssa.Block, f.NumBlocks())
+ f.Logf("genssa %s\n", f.Name)
+ blockProgs[Pc] = f.Blocks[0]
+ }
+
+ // Emit basic blocks
+ for i, b := range f.Blocks {
+ s.bstart[b.ID] = Pc
+ // Emit values in block
+ s.markMoves(b)
+ for _, v := range b.Values {
+ x := Pc
+ s.genValue(v)
+ if logProgs {
+ for ; x != Pc; x = x.Link {
+ valueProgs[x] = v
+ }
+ }
+ }
+ // Emit control flow instructions for block
+ var next *ssa.Block
+ if i < len(f.Blocks)-1 && (Debug['N'] == 0 || b.Kind == ssa.BlockCall) {
+ // If -N, leave next==nil so every block with successors
+ // ends in a JMP (except call blocks - plive doesn't like
+ // select{send,recv} followed by a JMP call). Helps keep
+ // line numbers for otherwise empty blocks.
+ next = f.Blocks[i+1]
+ }
+ x := Pc
+ s.genBlock(b, next)
+ if logProgs {
+ for ; x != Pc; x = x.Link {
+ blockProgs[x] = b
+ }
+ }
+ }
+
+ // Resolve branches
+ for _, br := range s.branches {
+ br.p.To.Val = s.bstart[br.b.ID]
+ }
+ if s.deferBranches != nil && s.deferTarget == nil {
+ // This can happen when the function has a defer but
+ // no return (because it has an infinite loop).
+ s.deferReturn()
+ Prog(obj.ARET)
+ }
+ for _, p := range s.deferBranches {
+ p.To.Val = s.deferTarget
+ }
+
+ if logProgs {
+ for p := ptxt; p != nil; p = p.Link {
+ var s string
+ if v, ok := valueProgs[p]; ok {
+ s = v.String()
+ } else if b, ok := blockProgs[p]; ok {
+ s = b.String()
+ } else {
+ s = " " // most value and branch strings are 2-3 characters long
+ }
+ f.Logf("%s\t%s\n", s, p)
+ }
+ if f.Config.HTML != nil {
+ saved := ptxt.Ctxt.LineHist.PrintFilenameOnly
+ ptxt.Ctxt.LineHist.PrintFilenameOnly = true
+ var buf bytes.Buffer
+ buf.WriteString("<code>")
+ buf.WriteString("<dl class=\"ssa-gen\">")
+ for p := ptxt; p != nil; p = p.Link {
+ buf.WriteString("<dt class=\"ssa-prog-src\">")
+ if v, ok := valueProgs[p]; ok {
+ buf.WriteString(v.HTML())
+ } else if b, ok := blockProgs[p]; ok {
+ buf.WriteString(b.HTML())
+ }
+ buf.WriteString("</dt>")
+ buf.WriteString("<dd class=\"ssa-prog\">")
+ buf.WriteString(html.EscapeString(p.String()))
+ buf.WriteString("</dd>")
+ }
+ buf.WriteString("</dl>")
+ buf.WriteString("</code>")
+ f.Config.HTML.WriteColumn("genssa", buf.String())
+ ptxt.Ctxt.LineHist.PrintFilenameOnly = saved
+ }
+ }
+
+ // Emit static data
+ if f.StaticData != nil {
+ for _, n := range f.StaticData.([]*Node) {
+ if !gen_as_init(n, false) {
+ Fatalf("non-static data marked as static: %v\n\n", n)
+ }
+ }
+ }
+
+ // Allocate stack frame
+ allocauto(ptxt)
+
+ // Generate gc bitmaps.
+ liveness(Curfn, ptxt, gcargs, gclocals)
+ gcsymdup(gcargs)
+ gcsymdup(gclocals)
+
+ // Add frame prologue. Zero ambiguously live variables.
+ Thearch.Defframe(ptxt)
+ if Debug['f'] != 0 {
+ frame(0)
+ }
+
+ // Remove leftover instrumentation from the instruction stream.
+ removevardef(ptxt)
+
+ f.Config.HTML.Close()
+}
+
+// opregreg emits instructions for
+// dest := dest(To) op src(From)
+// and also returns the created obj.Prog so it
+// may be further adjusted (offset, scale, etc).
+func opregreg(op int, dest, src int16) *obj.Prog {
+ p := Prog(op)
+ p.From.Type = obj.TYPE_REG
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = dest
+ p.From.Reg = src
+ return p
+}
+
+func (s *genState) genValue(v *ssa.Value) {
+ lineno = v.Line
+ switch v.Op {
+ case ssa.OpAMD64ADDQ, ssa.OpAMD64ADDL, ssa.OpAMD64ADDW:
+ r := regnum(v)
+ r1 := regnum(v.Args[0])
+ r2 := regnum(v.Args[1])
+ switch {
+ case r == r1:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = r2
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ case r == r2:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = r1
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ default:
+ var asm int
+ switch v.Op {
+ case ssa.OpAMD64ADDQ:
+ asm = x86.ALEAQ
+ case ssa.OpAMD64ADDL:
+ asm = x86.ALEAL
+ case ssa.OpAMD64ADDW:
+ asm = x86.ALEAL
+ }
+ p := Prog(asm)
+ p.From.Type = obj.TYPE_MEM
+ p.From.Reg = r1
+ p.From.Scale = 1
+ p.From.Index = r2
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ }
+ // 2-address opcode arithmetic, symmetric
+ case ssa.OpAMD64ADDB, ssa.OpAMD64ADDSS, ssa.OpAMD64ADDSD,
+ ssa.OpAMD64ANDQ, ssa.OpAMD64ANDL, ssa.OpAMD64ANDW, ssa.OpAMD64ANDB,
+ ssa.OpAMD64ORQ, ssa.OpAMD64ORL, ssa.OpAMD64ORW, ssa.OpAMD64ORB,
+ ssa.OpAMD64XORQ, ssa.OpAMD64XORL, ssa.OpAMD64XORW, ssa.OpAMD64XORB,
+ ssa.OpAMD64MULQ, ssa.OpAMD64MULL, ssa.OpAMD64MULW, ssa.OpAMD64MULB,
+ ssa.OpAMD64MULSS, ssa.OpAMD64MULSD, ssa.OpAMD64PXOR:
+ r := regnum(v)
+ x := regnum(v.Args[0])
+ y := regnum(v.Args[1])
+ if x != r && y != r {
+ opregreg(moveByType(v.Type), r, x)
+ x = r
+ }
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ if x == r {
+ p.From.Reg = y
+ } else {
+ p.From.Reg = x
+ }
+ // 2-address opcode arithmetic, not symmetric
+ case ssa.OpAMD64SUBQ, ssa.OpAMD64SUBL, ssa.OpAMD64SUBW, ssa.OpAMD64SUBB:
+ r := regnum(v)
+ x := regnum(v.Args[0])
+ y := regnum(v.Args[1])
+ var neg bool
+ if y == r {
+ // compute -(y-x) instead
+ x, y = y, x
+ neg = true
+ }
+ if x != r {
+ opregreg(moveByType(v.Type), r, x)
+ }
+ opregreg(v.Op.Asm(), r, y)
+
+ if neg {
+ if v.Op == ssa.OpAMD64SUBQ {
+ p := Prog(x86.ANEGQ)
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ } else { // ANEGL avoids a partial register write
+ p := Prog(x86.ANEGL)
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ }
+ }
+ case ssa.OpAMD64SUBSS, ssa.OpAMD64SUBSD, ssa.OpAMD64DIVSS, ssa.OpAMD64DIVSD:
+ r := regnum(v)
+ x := regnum(v.Args[0])
+ y := regnum(v.Args[1])
+ if y == r && x != r {
+ // r/y := x op r/y, need to preserve x and rewrite to
+ // r/y := r/y op x15
+ x15 := int16(x86.REG_X15)
+ // register move y to x15
+ // register move x to y
+ // rename y with x15
+ opregreg(moveByType(v.Type), x15, y)
+ opregreg(moveByType(v.Type), r, x)
+ y = x15
+ } else if x != r {
+ opregreg(moveByType(v.Type), r, x)
+ }
+ opregreg(v.Op.Asm(), r, y)
+
+ case ssa.OpAMD64DIVQ, ssa.OpAMD64DIVL, ssa.OpAMD64DIVW,
+ ssa.OpAMD64DIVQU, ssa.OpAMD64DIVLU, ssa.OpAMD64DIVWU,
+ ssa.OpAMD64MODQ, ssa.OpAMD64MODL, ssa.OpAMD64MODW,
+ ssa.OpAMD64MODQU, ssa.OpAMD64MODLU, ssa.OpAMD64MODWU:
+
+ // Arg[0] is already in AX as it's the only register we allow
+ // and AX is the only output
+ x := regnum(v.Args[1])
+
+ // The CPU faults upon signed overflow, which occurs when the most
+ // negative int is divided by -1.
+ var j *obj.Prog
+ if v.Op == ssa.OpAMD64DIVQ || v.Op == ssa.OpAMD64DIVL ||
+ v.Op == ssa.OpAMD64DIVW || v.Op == ssa.OpAMD64MODQ ||
+ v.Op == ssa.OpAMD64MODL || v.Op == ssa.OpAMD64MODW {
+
+ var c *obj.Prog
+ switch v.Op {
+ case ssa.OpAMD64DIVQ, ssa.OpAMD64MODQ:
+ c = Prog(x86.ACMPQ)
+ j = Prog(x86.AJEQ)
+ // go ahead and sign extend to save doing it later
+ Prog(x86.ACQO)
+
+ case ssa.OpAMD64DIVL, ssa.OpAMD64MODL:
+ c = Prog(x86.ACMPL)
+ j = Prog(x86.AJEQ)
+ Prog(x86.ACDQ)
+
+ case ssa.OpAMD64DIVW, ssa.OpAMD64MODW:
+ c = Prog(x86.ACMPW)
+ j = Prog(x86.AJEQ)
+ Prog(x86.ACWD)
+ }
+ c.From.Type = obj.TYPE_REG
+ c.From.Reg = x
+ c.To.Type = obj.TYPE_CONST
+ c.To.Offset = -1
+
+ j.To.Type = obj.TYPE_BRANCH
+
+ }
+
+ // For unsigned ints, we zero extend by setting DX = 0.
+ // Signed ints were sign extended above.
+ if v.Op == ssa.OpAMD64DIVQU || v.Op == ssa.OpAMD64MODQU ||
+ v.Op == ssa.OpAMD64DIVLU || v.Op == ssa.OpAMD64MODLU ||
+ v.Op == ssa.OpAMD64DIVWU || v.Op == ssa.OpAMD64MODWU {
+ c := Prog(x86.AXORQ)
+ c.From.Type = obj.TYPE_REG
+ c.From.Reg = x86.REG_DX
+ c.To.Type = obj.TYPE_REG
+ c.To.Reg = x86.REG_DX
+ }
+
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = x
+
+ // signed division, rest of the check for -1 case
+ if j != nil {
+ j2 := Prog(obj.AJMP)
+ j2.To.Type = obj.TYPE_BRANCH
+
+ var n *obj.Prog
+ if v.Op == ssa.OpAMD64DIVQ || v.Op == ssa.OpAMD64DIVL ||
+ v.Op == ssa.OpAMD64DIVW {
+ // n / -1 == -n
+ n = Prog(x86.ANEGQ)
+ n.To.Type = obj.TYPE_REG
+ n.To.Reg = x86.REG_AX
+ } else {
+ // n % -1 == 0
+ n = Prog(x86.AXORQ)
+ n.From.Type = obj.TYPE_REG
+ n.From.Reg = x86.REG_DX
+ n.To.Type = obj.TYPE_REG
+ n.To.Reg = x86.REG_DX
+ }
+
+ j.To.Val = n
+ j2.To.Val = Pc
+ }
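+
+ // A sketch of the code emitted above for signed DIVQ (hypothetical
+ // assembly, not literal source; x is the divisor register):
+ //
+ //	CMPQ  x, $-1
+ //	JEQ   fix
+ //	CQO
+ //	IDIVQ x
+ //	JMP   done
+ // fix:
+ //	NEGQ  AX  // quotient of n / -1 is -n
+ // done: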
+
+ case ssa.OpAMD64HMULQ, ssa.OpAMD64HMULL, ssa.OpAMD64HMULW, ssa.OpAMD64HMULB,
+ ssa.OpAMD64HMULQU, ssa.OpAMD64HMULLU, ssa.OpAMD64HMULWU, ssa.OpAMD64HMULBU:
+ // The frontend rewrites constant division by 8/16/32-bit integers into
+ // HMUL by a constant.
+ // SSA rewrites generate the 64-bit versions.
+
+ // Arg[0] is already in AX as it's the only register we allow
+ // and DX is the only output we care about (the high bits)
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = regnum(v.Args[1])
+
+ // IMULB puts the high portion in AH instead of DL,
+ // so move it to DL for consistency
+ if v.Type.Size() == 1 {
+ m := Prog(x86.AMOVB)
+ m.From.Type = obj.TYPE_REG
+ m.From.Reg = x86.REG_AH
+ m.To.Type = obj.TYPE_REG
+ m.To.Reg = x86.REG_DX
+ }
+
+ case ssa.OpAMD64AVGQU:
+ // compute (x+y)/2 unsigned.
+ // Do a 64-bit add, the overflow goes into the carry.
+ // Shift right once and pull the carry back into the 63rd bit.
+ r := regnum(v)
+ x := regnum(v.Args[0])
+ y := regnum(v.Args[1])
+ if x != r && y != r {
+ opregreg(moveByType(v.Type), r, x)
+ x = r
+ }
+ p := Prog(x86.AADDQ)
+ p.From.Type = obj.TYPE_REG
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ if x == r {
+ p.From.Reg = y
+ } else {
+ p.From.Reg = x
+ }
+ p = Prog(x86.ARCRQ)
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = 1
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+
+ case ssa.OpAMD64SHLQ, ssa.OpAMD64SHLL, ssa.OpAMD64SHLW, ssa.OpAMD64SHLB,
+ ssa.OpAMD64SHRQ, ssa.OpAMD64SHRL, ssa.OpAMD64SHRW, ssa.OpAMD64SHRB,
+ ssa.OpAMD64SARQ, ssa.OpAMD64SARL, ssa.OpAMD64SARW, ssa.OpAMD64SARB:
+ x := regnum(v.Args[0])
+ r := regnum(v)
+ if x != r {
+ if r == x86.REG_CX {
+ v.Fatalf("can't implement %s, target and shift both in CX", v.LongString())
+ }
+ p := Prog(moveByType(v.Type))
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = x
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ }
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = regnum(v.Args[1]) // should be CX
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ case ssa.OpAMD64ADDQconst, ssa.OpAMD64ADDLconst, ssa.OpAMD64ADDWconst:
+ r := regnum(v)
+ a := regnum(v.Args[0])
+ if r == a {
+ if v.AuxInt2Int64() == 1 {
+ var asm int
+ switch v.Op {
+ // The software optimization manual recommends add $1,reg,
+ // but inc/dec is 1 byte smaller. ICC always uses inc;
+ // Clang/GCC choose depending on flags, but prefer add.
+ // Experiments show that inc/dec is both a little faster
+ // and makes a binary a little smaller.
+ case ssa.OpAMD64ADDQconst:
+ asm = x86.AINCQ
+ case ssa.OpAMD64ADDLconst:
+ asm = x86.AINCL
+ case ssa.OpAMD64ADDWconst:
+ asm = x86.AINCL
+ }
+ p := Prog(asm)
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ return
+ } else if v.AuxInt2Int64() == -1 {
+ var asm int
+ switch v.Op {
+ case ssa.OpAMD64ADDQconst:
+ asm = x86.ADECQ
+ case ssa.OpAMD64ADDLconst:
+ asm = x86.ADECL
+ case ssa.OpAMD64ADDWconst:
+ asm = x86.ADECL
+ }
+ p := Prog(asm)
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ return
+ } else {
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = v.AuxInt2Int64()
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ return
+ }
+ }
+ var asm int
+ switch v.Op {
+ case ssa.OpAMD64ADDQconst:
+ asm = x86.ALEAQ
+ case ssa.OpAMD64ADDLconst:
+ asm = x86.ALEAL
+ case ssa.OpAMD64ADDWconst:
+ asm = x86.ALEAL
+ }
+ p := Prog(asm)
+ p.From.Type = obj.TYPE_MEM
+ p.From.Reg = a
+ p.From.Offset = v.AuxInt2Int64()
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ case ssa.OpAMD64MULQconst, ssa.OpAMD64MULLconst, ssa.OpAMD64MULWconst, ssa.OpAMD64MULBconst:
+ r := regnum(v)
+ x := regnum(v.Args[0])
+ if r != x {
+ p := Prog(moveByType(v.Type))
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = x
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ }
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = v.AuxInt2Int64()
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ // TODO: Teach doasm to compile the three-address multiply imul $c, r1, r2
+ // instead of using the MOVQ above.
+ //p.From3 = new(obj.Addr)
+ //p.From3.Type = obj.TYPE_REG
+ //p.From3.Reg = regnum(v.Args[0])
+ case ssa.OpAMD64SUBQconst, ssa.OpAMD64SUBLconst, ssa.OpAMD64SUBWconst:
+ x := regnum(v.Args[0])
+ r := regnum(v)
+ // We have a 3-operand add (LEA), so transforming a = b - const
+ // into a = b + (-const) saves us one instruction. We can't fit
+ // -(-1 << 31) into the 4-byte offset field of LEA.
+ // The 2-address form below handles that case just fine.
+ if v.AuxInt2Int64() == -1<<31 || x == r {
+ if x != r {
+ // This code compensates for the fact that the register allocator
+ // doesn't understand 2-address instructions yet. TODO: fix that.
+ p := Prog(moveByType(v.Type))
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = x
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ }
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = v.AuxInt2Int64()
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ } else if x == r && v.AuxInt2Int64() == -1 {
+ var asm int
+ // x = x - (-1) is the same as x++
+ // See OpAMD64ADDQconst comments about inc vs add $1,reg
+ switch v.Op {
+ case ssa.OpAMD64SUBQconst:
+ asm = x86.AINCQ
+ case ssa.OpAMD64SUBLconst:
+ asm = x86.AINCL
+ case ssa.OpAMD64SUBWconst:
+ asm = x86.AINCL
+ }
+ p := Prog(asm)
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ } else if x == r && v.AuxInt2Int64() == 1 {
+ var asm int
+ switch v.Op {
+ case ssa.OpAMD64SUBQconst:
+ asm = x86.ADECQ
+ case ssa.OpAMD64SUBLconst:
+ asm = x86.ADECL
+ case ssa.OpAMD64SUBWconst:
+ asm = x86.ADECL
+ }
+ p := Prog(asm)
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ } else {
+ var asm int
+ switch v.Op {
+ case ssa.OpAMD64SUBQconst:
+ asm = x86.ALEAQ
+ case ssa.OpAMD64SUBLconst:
+ asm = x86.ALEAL
+ case ssa.OpAMD64SUBWconst:
+ asm = x86.ALEAL
+ }
+ p := Prog(asm)
+ p.From.Type = obj.TYPE_MEM
+ p.From.Reg = x
+ p.From.Offset = -v.AuxInt2Int64()
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ }
+
+ case ssa.OpAMD64ADDBconst,
+ ssa.OpAMD64ANDQconst, ssa.OpAMD64ANDLconst, ssa.OpAMD64ANDWconst, ssa.OpAMD64ANDBconst,
+ ssa.OpAMD64ORQconst, ssa.OpAMD64ORLconst, ssa.OpAMD64ORWconst, ssa.OpAMD64ORBconst,
+ ssa.OpAMD64XORQconst, ssa.OpAMD64XORLconst, ssa.OpAMD64XORWconst, ssa.OpAMD64XORBconst,
+ ssa.OpAMD64SUBBconst, ssa.OpAMD64SHLQconst, ssa.OpAMD64SHLLconst, ssa.OpAMD64SHLWconst,
+ ssa.OpAMD64SHLBconst, ssa.OpAMD64SHRQconst, ssa.OpAMD64SHRLconst, ssa.OpAMD64SHRWconst,
+ ssa.OpAMD64SHRBconst, ssa.OpAMD64SARQconst, ssa.OpAMD64SARLconst, ssa.OpAMD64SARWconst,
+ ssa.OpAMD64SARBconst, ssa.OpAMD64ROLQconst, ssa.OpAMD64ROLLconst, ssa.OpAMD64ROLWconst,
+ ssa.OpAMD64ROLBconst:
+ // This code compensates for the fact that the register allocator
+ // doesn't understand 2-address instructions yet. TODO: fix that.
+ x := regnum(v.Args[0])
+ r := regnum(v)
+ if x != r {
+ p := Prog(moveByType(v.Type))
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = x
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ }
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = v.AuxInt2Int64()
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ case ssa.OpAMD64SBBQcarrymask, ssa.OpAMD64SBBLcarrymask:
+ r := regnum(v)
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = r
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ case ssa.OpAMD64LEAQ1, ssa.OpAMD64LEAQ2, ssa.OpAMD64LEAQ4, ssa.OpAMD64LEAQ8:
+ p := Prog(x86.ALEAQ)
+ p.From.Type = obj.TYPE_MEM
+ p.From.Reg = regnum(v.Args[0])
+ switch v.Op {
+ case ssa.OpAMD64LEAQ1:
+ p.From.Scale = 1
+ case ssa.OpAMD64LEAQ2:
+ p.From.Scale = 2
+ case ssa.OpAMD64LEAQ4:
+ p.From.Scale = 4
+ case ssa.OpAMD64LEAQ8:
+ p.From.Scale = 8
+ }
+ p.From.Index = regnum(v.Args[1])
+ addAux(&p.From, v)
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+ case ssa.OpAMD64LEAQ:
+ p := Prog(x86.ALEAQ)
+ p.From.Type = obj.TYPE_MEM
+ p.From.Reg = regnum(v.Args[0])
+ addAux(&p.From, v)
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+ case ssa.OpAMD64CMPQ, ssa.OpAMD64CMPL, ssa.OpAMD64CMPW, ssa.OpAMD64CMPB,
+ ssa.OpAMD64TESTQ, ssa.OpAMD64TESTL, ssa.OpAMD64TESTW, ssa.OpAMD64TESTB:
+ opregreg(v.Op.Asm(), regnum(v.Args[1]), regnum(v.Args[0]))
+ case ssa.OpAMD64UCOMISS, ssa.OpAMD64UCOMISD:
+ // The Go assembler has swapped operands for UCOMISx relative to CMP;
+ // we must account for that right here.
+ opregreg(v.Op.Asm(), regnum(v.Args[0]), regnum(v.Args[1]))
+ case ssa.OpAMD64CMPQconst, ssa.OpAMD64CMPLconst, ssa.OpAMD64CMPWconst, ssa.OpAMD64CMPBconst:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = regnum(v.Args[0])
+ p.To.Type = obj.TYPE_CONST
+ p.To.Offset = v.AuxInt2Int64()
+ case ssa.OpAMD64TESTQconst, ssa.OpAMD64TESTLconst, ssa.OpAMD64TESTWconst, ssa.OpAMD64TESTBconst:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = v.AuxInt2Int64()
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v.Args[0])
+ case ssa.OpAMD64MOVBconst, ssa.OpAMD64MOVWconst, ssa.OpAMD64MOVLconst, ssa.OpAMD64MOVQconst:
+ x := regnum(v)
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = v.AuxInt2Int64()
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = x
+ // If flags are live at this instruction, suppress the
+ // MOV $0,AX -> XOR AX,AX optimization.
+ if v.Aux != nil {
+ p.Mark |= x86.PRESERVEFLAGS
+ }
+ case ssa.OpAMD64MOVSSconst, ssa.OpAMD64MOVSDconst:
+ x := regnum(v)
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_FCONST
+ p.From.Val = math.Float64frombits(uint64(v.AuxInt))
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = x
+ case ssa.OpAMD64MOVQload, ssa.OpAMD64MOVSSload, ssa.OpAMD64MOVSDload, ssa.OpAMD64MOVLload, ssa.OpAMD64MOVWload, ssa.OpAMD64MOVBload, ssa.OpAMD64MOVBQSXload, ssa.OpAMD64MOVBQZXload, ssa.OpAMD64MOVWQSXload, ssa.OpAMD64MOVWQZXload, ssa.OpAMD64MOVLQSXload, ssa.OpAMD64MOVLQZXload, ssa.OpAMD64MOVOload:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_MEM
+ p.From.Reg = regnum(v.Args[0])
+ addAux(&p.From, v)
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+ case ssa.OpAMD64MOVQloadidx8, ssa.OpAMD64MOVSDloadidx8:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_MEM
+ p.From.Reg = regnum(v.Args[0])
+ addAux(&p.From, v)
+ p.From.Scale = 8
+ p.From.Index = regnum(v.Args[1])
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+ case ssa.OpAMD64MOVLloadidx4, ssa.OpAMD64MOVSSloadidx4:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_MEM
+ p.From.Reg = regnum(v.Args[0])
+ addAux(&p.From, v)
+ p.From.Scale = 4
+ p.From.Index = regnum(v.Args[1])
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+ case ssa.OpAMD64MOVWloadidx2:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_MEM
+ p.From.Reg = regnum(v.Args[0])
+ addAux(&p.From, v)
+ p.From.Scale = 2
+ p.From.Index = regnum(v.Args[1])
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+ case ssa.OpAMD64MOVBloadidx1:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_MEM
+ p.From.Reg = regnum(v.Args[0])
+ addAux(&p.From, v)
+ p.From.Scale = 1
+ p.From.Index = regnum(v.Args[1])
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+ case ssa.OpAMD64MOVQstore, ssa.OpAMD64MOVSSstore, ssa.OpAMD64MOVSDstore, ssa.OpAMD64MOVLstore, ssa.OpAMD64MOVWstore, ssa.OpAMD64MOVBstore, ssa.OpAMD64MOVOstore:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = regnum(v.Args[1])
+ p.To.Type = obj.TYPE_MEM
+ p.To.Reg = regnum(v.Args[0])
+ addAux(&p.To, v)
+ case ssa.OpAMD64MOVQstoreidx8, ssa.OpAMD64MOVSDstoreidx8:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = regnum(v.Args[2])
+ p.To.Type = obj.TYPE_MEM
+ p.To.Reg = regnum(v.Args[0])
+ p.To.Scale = 8
+ p.To.Index = regnum(v.Args[1])
+ addAux(&p.To, v)
+ case ssa.OpAMD64MOVSSstoreidx4, ssa.OpAMD64MOVLstoreidx4:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = regnum(v.Args[2])
+ p.To.Type = obj.TYPE_MEM
+ p.To.Reg = regnum(v.Args[0])
+ p.To.Scale = 4
+ p.To.Index = regnum(v.Args[1])
+ addAux(&p.To, v)
+ case ssa.OpAMD64MOVWstoreidx2:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = regnum(v.Args[2])
+ p.To.Type = obj.TYPE_MEM
+ p.To.Reg = regnum(v.Args[0])
+ p.To.Scale = 2
+ p.To.Index = regnum(v.Args[1])
+ addAux(&p.To, v)
+ case ssa.OpAMD64MOVBstoreidx1:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = regnum(v.Args[2])
+ p.To.Type = obj.TYPE_MEM
+ p.To.Reg = regnum(v.Args[0])
+ p.To.Scale = 1
+ p.To.Index = regnum(v.Args[1])
+ addAux(&p.To, v)
+ case ssa.OpAMD64MOVQstoreconst, ssa.OpAMD64MOVLstoreconst, ssa.OpAMD64MOVWstoreconst, ssa.OpAMD64MOVBstoreconst:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_CONST
+ sc := v.AuxValAndOff()
+ i := sc.Val()
+ switch v.Op {
+ case ssa.OpAMD64MOVBstoreconst:
+ i = int64(int8(i))
+ case ssa.OpAMD64MOVWstoreconst:
+ i = int64(int16(i))
+ case ssa.OpAMD64MOVLstoreconst:
+ i = int64(int32(i))
+ case ssa.OpAMD64MOVQstoreconst:
+ }
+ p.From.Offset = i
+ p.To.Type = obj.TYPE_MEM
+ p.To.Reg = regnum(v.Args[0])
+ addAux2(&p.To, v, sc.Off())
+ case ssa.OpAMD64MOVQstoreconstidx8, ssa.OpAMD64MOVLstoreconstidx4, ssa.OpAMD64MOVWstoreconstidx2, ssa.OpAMD64MOVBstoreconstidx1:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_CONST
+ sc := v.AuxValAndOff()
+ switch v.Op {
+ case ssa.OpAMD64MOVBstoreconstidx1:
+ p.From.Offset = int64(int8(sc.Val()))
+ p.To.Scale = 1
+ case ssa.OpAMD64MOVWstoreconstidx2:
+ p.From.Offset = int64(int16(sc.Val()))
+ p.To.Scale = 2
+ case ssa.OpAMD64MOVLstoreconstidx4:
+ p.From.Offset = int64(int32(sc.Val()))
+ p.To.Scale = 4
+ case ssa.OpAMD64MOVQstoreconstidx8:
+ p.From.Offset = sc.Val()
+ p.To.Scale = 8
+ }
+ p.To.Type = obj.TYPE_MEM
+ p.To.Reg = regnum(v.Args[0])
+ p.To.Index = regnum(v.Args[1])
+ addAux2(&p.To, v, sc.Off())
+ case ssa.OpAMD64MOVLQSX, ssa.OpAMD64MOVWQSX, ssa.OpAMD64MOVBQSX, ssa.OpAMD64MOVLQZX, ssa.OpAMD64MOVWQZX, ssa.OpAMD64MOVBQZX,
+ ssa.OpAMD64CVTSL2SS, ssa.OpAMD64CVTSL2SD, ssa.OpAMD64CVTSQ2SS, ssa.OpAMD64CVTSQ2SD,
+ ssa.OpAMD64CVTTSS2SL, ssa.OpAMD64CVTTSD2SL, ssa.OpAMD64CVTTSS2SQ, ssa.OpAMD64CVTTSD2SQ,
+ ssa.OpAMD64CVTSS2SD, ssa.OpAMD64CVTSD2SS:
+ opregreg(v.Op.Asm(), regnum(v), regnum(v.Args[0]))
+ case ssa.OpAMD64DUFFZERO:
+ p := Prog(obj.ADUFFZERO)
+ p.To.Type = obj.TYPE_ADDR
+ p.To.Sym = Linksym(Pkglookup("duffzero", Runtimepkg))
+ p.To.Offset = v.AuxInt
+ case ssa.OpAMD64MOVOconst:
+ if v.AuxInt != 0 {
+ v.Unimplementedf("MOVOconst can only do constant=0")
+ }
+ r := regnum(v)
+ opregreg(x86.AXORPS, r, r)
+ case ssa.OpAMD64DUFFCOPY:
+ p := Prog(obj.ADUFFCOPY)
+ p.To.Type = obj.TYPE_ADDR
+ p.To.Sym = Linksym(Pkglookup("duffcopy", Runtimepkg))
+ p.To.Offset = v.AuxInt
+
+ case ssa.OpCopy, ssa.OpAMD64MOVQconvert: // TODO: use MOVQreg for reg->reg copies instead of OpCopy?
+ if v.Type.IsMemory() {
+ return
+ }
+ x := regnum(v.Args[0])
+ y := regnum(v)
+ if x != y {
+ opregreg(moveByType(v.Type), y, x)
+ }
+ case ssa.OpLoadReg:
+ if v.Type.IsFlags() {
+ v.Unimplementedf("load flags not implemented: %v", v.LongString())
+ return
+ }
+ p := Prog(loadByType(v.Type))
+ n, off := autoVar(v.Args[0])
+ p.From.Type = obj.TYPE_MEM
+ p.From.Node = n
+ p.From.Sym = Linksym(n.Sym)
+ p.From.Offset = off
+ if n.Class == PPARAM || n.Class == PPARAMOUT {
+ p.From.Name = obj.NAME_PARAM
+ p.From.Offset += n.Xoffset
+ } else {
+ p.From.Name = obj.NAME_AUTO
+ }
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+
+ case ssa.OpStoreReg:
+ if v.Type.IsFlags() {
+ v.Unimplementedf("store flags not implemented: %v", v.LongString())
+ return
+ }
+ p := Prog(storeByType(v.Type))
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = regnum(v.Args[0])
+ n, off := autoVar(v)
+ p.To.Type = obj.TYPE_MEM
+ p.To.Node = n
+ p.To.Sym = Linksym(n.Sym)
+ p.To.Offset = off
+ if n.Class == PPARAM || n.Class == PPARAMOUT {
+ p.To.Name = obj.NAME_PARAM
+ p.To.Offset += n.Xoffset
+ } else {
+ p.To.Name = obj.NAME_AUTO
+ }
+ case ssa.OpPhi:
+ // just check to make sure regalloc and stackalloc did it right
+ if v.Type.IsMemory() {
+ return
+ }
+ f := v.Block.Func
+ loc := f.RegAlloc[v.ID]
+ for _, a := range v.Args {
+ if aloc := f.RegAlloc[a.ID]; aloc != loc { // TODO: .Equal() instead?
+ v.Fatalf("phi arg at different location than phi: %v @ %v, but arg %v @ %v\n%s\n", v, loc, a, aloc, v.Block.Func)
+ }
+ }
+ case ssa.OpInitMem:
+ // memory arg needs no code
+ case ssa.OpArg:
+ // input args need no code
+ case ssa.OpAMD64LoweredGetClosurePtr:
+ // Output is hardwired to DX only,
+ // and DX contains the closure pointer on
+ // closure entry, and this "instruction"
+ // is scheduled to the very beginning
+ // of the entry block.
+ case ssa.OpAMD64LoweredGetG:
+ r := regnum(v)
+ // See the comments in cmd/internal/obj/x86/obj6.go
+ // near CanUse1InsnTLS for a detailed explanation of these instructions.
+ if x86.CanUse1InsnTLS(Ctxt) {
+ // MOVQ (TLS), r
+ p := Prog(x86.AMOVQ)
+ p.From.Type = obj.TYPE_MEM
+ p.From.Reg = x86.REG_TLS
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ } else {
+ // MOVQ TLS, r
+ // MOVQ (r)(TLS*1), r
+ p := Prog(x86.AMOVQ)
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = x86.REG_TLS
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ q := Prog(x86.AMOVQ)
+ q.From.Type = obj.TYPE_MEM
+ q.From.Reg = r
+ q.From.Index = x86.REG_TLS
+ q.From.Scale = 1
+ q.To.Type = obj.TYPE_REG
+ q.To.Reg = r
+ }
+ case ssa.OpAMD64CALLstatic:
+ p := Prog(obj.ACALL)
+ p.To.Type = obj.TYPE_MEM
+ p.To.Name = obj.NAME_EXTERN
+ p.To.Sym = Linksym(v.Aux.(*Sym))
+ if Maxarg < v.AuxInt {
+ Maxarg = v.AuxInt
+ }
+ case ssa.OpAMD64CALLclosure:
+ p := Prog(obj.ACALL)
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v.Args[0])
+ if Maxarg < v.AuxInt {
+ Maxarg = v.AuxInt
+ }
+ case ssa.OpAMD64CALLdefer:
+ p := Prog(obj.ACALL)
+ p.To.Type = obj.TYPE_MEM
+ p.To.Name = obj.NAME_EXTERN
+ p.To.Sym = Linksym(Deferproc.Sym)
+ if Maxarg < v.AuxInt {
+ Maxarg = v.AuxInt
+ }
+ // deferproc returns in AX:
+ // 0 if we should continue executing
+ // 1 if we should jump to deferreturn call
+ p = Prog(x86.ATESTL)
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = x86.REG_AX
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = x86.REG_AX
+ p = Prog(x86.AJNE)
+ p.To.Type = obj.TYPE_BRANCH
+ s.deferBranches = append(s.deferBranches, p)
+ case ssa.OpAMD64CALLgo:
+ p := Prog(obj.ACALL)
+ p.To.Type = obj.TYPE_MEM
+ p.To.Name = obj.NAME_EXTERN
+ p.To.Sym = Linksym(Newproc.Sym)
+ if Maxarg < v.AuxInt {
+ Maxarg = v.AuxInt
+ }
+ case ssa.OpAMD64CALLinter:
+ p := Prog(obj.ACALL)
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v.Args[0])
+ if Maxarg < v.AuxInt {
+ Maxarg = v.AuxInt
+ }
+ case ssa.OpAMD64NEGQ, ssa.OpAMD64NEGL, ssa.OpAMD64NEGW, ssa.OpAMD64NEGB,
+ ssa.OpAMD64NOTQ, ssa.OpAMD64NOTL, ssa.OpAMD64NOTW, ssa.OpAMD64NOTB:
+ x := regnum(v.Args[0])
+ r := regnum(v)
+ if x != r {
+ p := Prog(moveByType(v.Type))
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = x
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ }
+ p := Prog(v.Op.Asm())
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = r
+ case ssa.OpAMD64SQRTSD:
+ p := Prog(v.Op.Asm())
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = regnum(v.Args[0])
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+ case ssa.OpSP, ssa.OpSB:
+ // nothing to do
+ case ssa.OpAMD64SETEQ, ssa.OpAMD64SETNE,
+ ssa.OpAMD64SETL, ssa.OpAMD64SETLE,
+ ssa.OpAMD64SETG, ssa.OpAMD64SETGE,
+ ssa.OpAMD64SETGF, ssa.OpAMD64SETGEF,
+ ssa.OpAMD64SETB, ssa.OpAMD64SETBE,
+ ssa.OpAMD64SETORD, ssa.OpAMD64SETNAN,
+ ssa.OpAMD64SETA, ssa.OpAMD64SETAE:
+ p := Prog(v.Op.Asm())
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+
+ case ssa.OpAMD64SETNEF:
+ p := Prog(v.Op.Asm())
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+ q := Prog(x86.ASETPS)
+ q.To.Type = obj.TYPE_REG
+ q.To.Reg = x86.REG_AX
+ // ORL avoids a partial register write and is smaller than ORQ, which the old compiler used.
+ opregreg(x86.AORL, regnum(v), x86.REG_AX)
+
+ case ssa.OpAMD64SETEQF:
+ p := Prog(v.Op.Asm())
+ p.To.Type = obj.TYPE_REG
+ p.To.Reg = regnum(v)
+ q := Prog(x86.ASETPC)
+ q.To.Type = obj.TYPE_REG
+ q.To.Reg = x86.REG_AX
+ // ANDL avoids a partial register write and is smaller than ANDQ, which the old compiler used.
+ opregreg(x86.AANDL, regnum(v), x86.REG_AX)
+
+ case ssa.OpAMD64InvertFlags:
+ v.Fatalf("InvertFlags should never make it to codegen %v", v)
+ case ssa.OpAMD64FlagEQ, ssa.OpAMD64FlagLT_ULT, ssa.OpAMD64FlagLT_UGT, ssa.OpAMD64FlagGT_ULT, ssa.OpAMD64FlagGT_UGT:
+ v.Fatalf("Flag* ops should never make it to codegen %v", v)
+ case ssa.OpAMD64REPSTOSQ:
+ Prog(x86.AREP)
+ Prog(x86.ASTOSQ)
+ case ssa.OpAMD64REPMOVSQ:
+ Prog(x86.AREP)
+ Prog(x86.AMOVSQ)
+ case ssa.OpVarDef:
+ Gvardef(v.Aux.(*Node))
+ case ssa.OpVarKill:
+ gvarkill(v.Aux.(*Node))
+ case ssa.OpVarLive:
+ gvarlive(v.Aux.(*Node))
+ case ssa.OpAMD64LoweredNilCheck:
+ // Optimization - if the subsequent block has a load or store
+ // at the same address, we don't need to issue this instruction.
+ mem := v.Args[1]
+ for _, w := range v.Block.Succs[0].Values {
+ if w.Op == ssa.OpPhi {
+ if w.Type.IsMemory() {
+ mem = w
+ }
+ continue
+ }
+ if len(w.Args) == 0 || !w.Args[len(w.Args)-1].Type.IsMemory() {
+ // w doesn't use a store - can't be a memory op.
+ continue
+ }
+ if w.Args[len(w.Args)-1] != mem {
+ v.Fatalf("wrong store after nilcheck v=%s w=%s", v, w)
+ }
+ switch w.Op {
+ case ssa.OpAMD64MOVQload, ssa.OpAMD64MOVLload, ssa.OpAMD64MOVWload, ssa.OpAMD64MOVBload,
+ ssa.OpAMD64MOVQstore, ssa.OpAMD64MOVLstore, ssa.OpAMD64MOVWstore, ssa.OpAMD64MOVBstore,
+ ssa.OpAMD64MOVBQSXload, ssa.OpAMD64MOVBQZXload, ssa.OpAMD64MOVWQSXload,
+ ssa.OpAMD64MOVWQZXload, ssa.OpAMD64MOVLQSXload, ssa.OpAMD64MOVLQZXload,
+ ssa.OpAMD64MOVSSload, ssa.OpAMD64MOVSDload, ssa.OpAMD64MOVOload,
+ ssa.OpAMD64MOVSSstore, ssa.OpAMD64MOVSDstore, ssa.OpAMD64MOVOstore:
+ if w.Args[0] == v.Args[0] && w.Aux == nil && w.AuxInt >= 0 && w.AuxInt < minZeroPage {
+ if Debug_checknil != 0 && int(v.Line) > 1 {
+ Warnl(int(v.Line), "removed nil check")
+ }
+ return
+ }
+ case ssa.OpAMD64MOVQstoreconst, ssa.OpAMD64MOVLstoreconst, ssa.OpAMD64MOVWstoreconst, ssa.OpAMD64MOVBstoreconst:
+ off := ssa.ValAndOff(v.AuxInt).Off()
+ if w.Args[0] == v.Args[0] && w.Aux == nil && off >= 0 && off < minZeroPage {
+ if Debug_checknil != 0 && int(v.Line) > 1 {
+ Warnl(int(v.Line), "removed nil check")
+ }
+ return
+ }
+ }
+ if w.Type.IsMemory() {
+ if w.Op == ssa.OpVarDef || w.Op == ssa.OpVarKill || w.Op == ssa.OpVarLive {
+ // these ops are OK
+ mem = w
+ continue
+ }
+ // We can't delay the nil check past the next store.
+ break
+ }
+ }
+ // Issue a load which will fault if the input is nil.
+ // TODO: We currently use the 2-byte instruction TESTB AX, (reg).
+ // Should we use the 3-byte TESTB $0, (reg) instead? It is larger
+ // but it doesn't have a false dependency on AX.
+ // Or maybe allocate an output register and use MOVL (reg),reg2 ?
+ // That trades clobbering flags for clobbering a register.
+ p := Prog(x86.ATESTB)
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = x86.REG_AX
+ p.To.Type = obj.TYPE_MEM
+ p.To.Reg = regnum(v.Args[0])
+ addAux(&p.To, v)
+ if Debug_checknil != 0 && v.Line > 1 { // v.Line==1 in generated wrappers
+ Warnl(int(v.Line), "generated nil check")
+ }
+ default:
+ v.Unimplementedf("genValue not implemented: %s", v.LongString())
+ }
+}
+
+// markMoves marks any MOVXconst ops that need to avoid clobbering flags.
+func (s *genState) markMoves(b *ssa.Block) {
+ flive := b.FlagsLiveAtEnd
+ if b.Control != nil && b.Control.Type.IsFlags() {
+ flive = true
+ }
+ for i := len(b.Values) - 1; i >= 0; i-- {
+ v := b.Values[i]
+ if flive && (v.Op == ssa.OpAMD64MOVBconst || v.Op == ssa.OpAMD64MOVWconst || v.Op == ssa.OpAMD64MOVLconst || v.Op == ssa.OpAMD64MOVQconst) {
+ // The "mark" is any non-nil Aux value.
+ v.Aux = v
+ }
+ if v.Type.IsFlags() {
+ flive = false
+ }
+ for _, a := range v.Args {
+ if a.Type.IsFlags() {
+ flive = true
+ }
+ }
+ }
+}
+
+// movZero generates a register-indirect move with a 0 immediate and keeps
+// track of the bytes left and the next offset to be zeroed.
+func movZero(as int, width int64, nbytes int64, offset int64, regnum int16) (nleft int64, noff int64) {
+ p := Prog(as)
+ // TODO: use zero register on archs that support it.
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = 0
+ p.To.Type = obj.TYPE_MEM
+ p.To.Reg = regnum
+ p.To.Offset = offset
+ offset += width
+ nleft = nbytes - width
+ return nleft, offset
+}
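+
+// For example, a hypothetical call zeroing an 8-byte word at 0(DI) with
+// 16 bytes remaining:
+//
+//	nleft, off := movZero(x86.AMOVQ, 8, 16, 0, x86.REG_DI)
+//	// now nleft == 8 and off == 8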
+
+var blockJump = [...]struct {
+ asm, invasm int
+}{
+ ssa.BlockAMD64EQ: {x86.AJEQ, x86.AJNE},
+ ssa.BlockAMD64NE: {x86.AJNE, x86.AJEQ},
+ ssa.BlockAMD64LT: {x86.AJLT, x86.AJGE},
+ ssa.BlockAMD64GE: {x86.AJGE, x86.AJLT},
+ ssa.BlockAMD64LE: {x86.AJLE, x86.AJGT},
+ ssa.BlockAMD64GT: {x86.AJGT, x86.AJLE},
+ ssa.BlockAMD64ULT: {x86.AJCS, x86.AJCC},
+ ssa.BlockAMD64UGE: {x86.AJCC, x86.AJCS},
+ ssa.BlockAMD64UGT: {x86.AJHI, x86.AJLS},
+ ssa.BlockAMD64ULE: {x86.AJLS, x86.AJHI},
+ ssa.BlockAMD64ORD: {x86.AJPC, x86.AJPS},
+ ssa.BlockAMD64NAN: {x86.AJPS, x86.AJPC},
+}
+
+type floatingEQNEJump struct {
+ jump, index int
+}
+
+var eqfJumps = [2][2]floatingEQNEJump{
+ {{x86.AJNE, 1}, {x86.AJPS, 1}}, // next == b.Succs[0]
+ {{x86.AJNE, 1}, {x86.AJPC, 0}}, // next == b.Succs[1]
+}
+var nefJumps = [2][2]floatingEQNEJump{
+ {{x86.AJNE, 0}, {x86.AJPC, 1}}, // next == b.Succs[0]
+ {{x86.AJNE, 0}, {x86.AJPS, 0}}, // next == b.Succs[1]
+}
+
+func oneFPJump(b *ssa.Block, jumps *floatingEQNEJump, likely ssa.BranchPrediction, branches []branch) []branch {
+ p := Prog(jumps.jump)
+ p.To.Type = obj.TYPE_BRANCH
+ to := jumps.index
+ branches = append(branches, branch{p, b.Succs[to]})
+ if to == 1 {
+ likely = -likely
+ }
+ // liblink reorders the instruction stream as it sees fit.
+ // Pass along what we know so liblink can make use of it.
+ // TODO: Once we've fully switched to SSA,
+ // make liblink leave our output alone.
+ switch likely {
+ case ssa.BranchUnlikely:
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = 0
+ case ssa.BranchLikely:
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = 1
+ }
+ return branches
+}
+
+func genFPJump(s *genState, b, next *ssa.Block, jumps *[2][2]floatingEQNEJump) {
+ likely := b.Likely
+ switch next {
+ case b.Succs[0]:
+ s.branches = oneFPJump(b, &jumps[0][0], likely, s.branches)
+ s.branches = oneFPJump(b, &jumps[0][1], likely, s.branches)
+ case b.Succs[1]:
+ s.branches = oneFPJump(b, &jumps[1][0], likely, s.branches)
+ s.branches = oneFPJump(b, &jumps[1][1], likely, s.branches)
+ default:
+ s.branches = oneFPJump(b, &jumps[1][0], likely, s.branches)
+ s.branches = oneFPJump(b, &jumps[1][1], likely, s.branches)
+ q := Prog(obj.AJMP)
+ q.To.Type = obj.TYPE_BRANCH
+ s.branches = append(s.branches, branch{q, b.Succs[1]})
+ }
+}
+
+func (s *genState) genBlock(b, next *ssa.Block) {
+ lineno = b.Line
+
+ switch b.Kind {
+ case ssa.BlockPlain, ssa.BlockCall, ssa.BlockCheck:
+ if b.Succs[0] != next {
+ p := Prog(obj.AJMP)
+ p.To.Type = obj.TYPE_BRANCH
+ s.branches = append(s.branches, branch{p, b.Succs[0]})
+ }
+ case ssa.BlockExit:
+ Prog(obj.AUNDEF) // tell plive.go that we never reach here
+ case ssa.BlockRet:
+ if hasdefer {
+ s.deferReturn()
+ }
+ Prog(obj.ARET)
+ case ssa.BlockRetJmp:
+ p := Prog(obj.AJMP)
+ p.To.Type = obj.TYPE_MEM
+ p.To.Name = obj.NAME_EXTERN
+ p.To.Sym = Linksym(b.Aux.(*Sym))
+
+ case ssa.BlockAMD64EQF:
+ genFPJump(s, b, next, &eqfJumps)
+
+ case ssa.BlockAMD64NEF:
+ genFPJump(s, b, next, &nefJumps)
+
+ case ssa.BlockAMD64EQ, ssa.BlockAMD64NE,
+ ssa.BlockAMD64LT, ssa.BlockAMD64GE,
+ ssa.BlockAMD64LE, ssa.BlockAMD64GT,
+ ssa.BlockAMD64ULT, ssa.BlockAMD64UGT,
+ ssa.BlockAMD64ULE, ssa.BlockAMD64UGE:
+ jmp := blockJump[b.Kind]
+ likely := b.Likely
+ var p *obj.Prog
+ switch next {
+ case b.Succs[0]:
+ p = Prog(jmp.invasm)
+ likely *= -1
+ p.To.Type = obj.TYPE_BRANCH
+ s.branches = append(s.branches, branch{p, b.Succs[1]})
+ case b.Succs[1]:
+ p = Prog(jmp.asm)
+ p.To.Type = obj.TYPE_BRANCH
+ s.branches = append(s.branches, branch{p, b.Succs[0]})
+ default:
+ p = Prog(jmp.asm)
+ p.To.Type = obj.TYPE_BRANCH
+ s.branches = append(s.branches, branch{p, b.Succs[0]})
+ q := Prog(obj.AJMP)
+ q.To.Type = obj.TYPE_BRANCH
+ s.branches = append(s.branches, branch{q, b.Succs[1]})
+ }
+
+ // liblink reorders the instruction stream as it sees fit.
+ // Pass along what we know so liblink can make use of it.
+ // TODO: Once we've fully switched to SSA,
+ // make liblink leave our output alone.
+ switch likely {
+ case ssa.BranchUnlikely:
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = 0
+ case ssa.BranchLikely:
+ p.From.Type = obj.TYPE_CONST
+ p.From.Offset = 1
+ }
+
+ default:
+ b.Unimplementedf("branch not implemented: %s. Control: %s", b.LongString(), b.Control.LongString())
+ }
+}
+
+func (s *genState) deferReturn() {
+ // Deferred calls will appear to be returning to
+ // the CALL deferreturn(SB) that we are about to emit.
+ // However, the stack trace code will show the line
+ // of the instruction byte before the return PC.
+ // To avoid that being an unrelated instruction,
+ // insert an actual hardware NOP that will have the right line number.
+ // This is different from obj.ANOP, which is a virtual no-op
+ // that doesn't make it into the instruction stream.
+ s.deferTarget = Pc
+ Thearch.Ginsnop()
+ p := Prog(obj.ACALL)
+ p.To.Type = obj.TYPE_MEM
+ p.To.Name = obj.NAME_EXTERN
+ p.To.Sym = Linksym(Deferreturn.Sym)
+}
+
+// addAux adds the offset in the aux fields (AuxInt and Aux) of v to a.
+func addAux(a *obj.Addr, v *ssa.Value) {
+ addAux2(a, v, v.AuxInt)
+}
+func addAux2(a *obj.Addr, v *ssa.Value, offset int64) {
+ if a.Type != obj.TYPE_MEM {
+ v.Fatalf("bad addAux addr %s", a)
+ }
+ // add integer offset
+ a.Offset += offset
+
+ // If no additional symbol offset, we're done.
+ if v.Aux == nil {
+ return
+ }
+ // Add symbol's offset from its base register.
+ switch sym := v.Aux.(type) {
+ case *ssa.ExternSymbol:
+ a.Name = obj.NAME_EXTERN
+ a.Sym = Linksym(sym.Sym.(*Sym))
+ case *ssa.ArgSymbol:
+ n := sym.Node.(*Node)
+ a.Name = obj.NAME_PARAM
+ a.Node = n
+ a.Sym = Linksym(n.Orig.Sym)
+ a.Offset += n.Xoffset // TODO: why do I have to add this here? I don't for auto variables.
+ case *ssa.AutoSymbol:
+ n := sym.Node.(*Node)
+ a.Name = obj.NAME_AUTO
+ a.Node = n
+ a.Sym = Linksym(n.Sym)
+ default:
+ v.Fatalf("aux in %s not implemented %#v", v, v.Aux)
+ }
+}
+
+// extendIndex extends v to a full int width.
+func (s *state) extendIndex(v *ssa.Value) *ssa.Value {
+ size := v.Type.Size()
+ if size == s.config.IntSize {
+ return v
+ }
+ if size > s.config.IntSize {
+ // TODO: truncate 64-bit indexes on 32-bit pointer archs. We'd need to test
+ // the high word and branch to out-of-bounds failure if it is not 0.
+ s.Unimplementedf("64->32 index truncation not implemented")
+ return v
+ }
+
+ // Extend value to the required size
+ var op ssa.Op
+ if v.Type.IsSigned() {
+ switch 10*size + s.config.IntSize {
+ case 14:
+ op = ssa.OpSignExt8to32
+ case 18:
+ op = ssa.OpSignExt8to64
+ case 24:
+ op = ssa.OpSignExt16to32
+ case 28:
+ op = ssa.OpSignExt16to64
+ case 48:
+ op = ssa.OpSignExt32to64
+ default:
+ s.Fatalf("bad signed index extension %s", v.Type)
+ }
+ } else {
+ switch 10*size + s.config.IntSize {
+ case 14:
+ op = ssa.OpZeroExt8to32
+ case 18:
+ op = ssa.OpZeroExt8to64
+ case 24:
+ op = ssa.OpZeroExt16to32
+ case 28:
+ op = ssa.OpZeroExt16to64
+ case 48:
+ op = ssa.OpZeroExt32to64
+ default:
+ s.Fatalf("bad unsigned index extension %s", v.Type)
+ }
+ }
+ return s.newValue1(op, Types[TINT], v)
+}
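The `10*size + s.config.IntSize` expression above packs the source width and the target int width (both in bytes) into a single switch key, so one switch covers every widening combination (e.g. an 8-bit index on a 64-bit target yields 10*1+8 = 18). A standalone sketch of the same dispatch, with illustrative op names standing in for the SSA opcodes:

```go
package main

import "fmt"

// extOp picks a widening operation by packing the source size and the
// target int size into one switch key, mirroring the 10*size + IntSize
// dispatch in extendIndex. The returned op names are illustrative.
func extOp(size, intSize int64, signed bool) string {
	key := 10*size + intSize
	if signed {
		switch key {
		case 14:
			return "SignExt8to32"
		case 18:
			return "SignExt8to64"
		case 24:
			return "SignExt16to32"
		case 28:
			return "SignExt16to64"
		case 48:
			return "SignExt32to64"
		}
	} else {
		switch key {
		case 14:
			return "ZeroExt8to32"
		case 18:
			return "ZeroExt8to64"
		case 24:
			return "ZeroExt16to32"
		case 28:
			return "ZeroExt16to64"
		case 48:
			return "ZeroExt32to64"
		}
	}
	return "bad extension"
}

func main() {
	fmt.Println(extOp(1, 8, true))  // int8 index on a 64-bit target
	fmt.Println(extOp(2, 4, false)) // uint16 index on a 32-bit target
}
```

The packing works because sizes are at most 8 bytes, so `10*size` leaves room for any int size in the ones digit and every (source, target) pair maps to a distinct key.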
+
+// ssaRegToReg maps ssa register numbers to obj register numbers.
+var ssaRegToReg = [...]int16{
+ x86.REG_AX,
+ x86.REG_CX,
+ x86.REG_DX,
+ x86.REG_BX,
+ x86.REG_SP,
+ x86.REG_BP,
+ x86.REG_SI,
+ x86.REG_DI,
+ x86.REG_R8,
+ x86.REG_R9,
+ x86.REG_R10,
+ x86.REG_R11,
+ x86.REG_R12,
+ x86.REG_R13,
+ x86.REG_R14,
+ x86.REG_R15,
+ x86.REG_X0,
+ x86.REG_X1,
+ x86.REG_X2,
+ x86.REG_X3,
+ x86.REG_X4,
+ x86.REG_X5,
+ x86.REG_X6,
+ x86.REG_X7,
+ x86.REG_X8,
+ x86.REG_X9,
+ x86.REG_X10,
+ x86.REG_X11,
+ x86.REG_X12,
+ x86.REG_X13,
+ x86.REG_X14,
+ x86.REG_X15,
+ 0, // SB isn't a real register. We fill an Addr.Reg field with 0 in this case.
+ // TODO: arch-dependent
+}
+
+// loadByType returns the load instruction of the given type.
+func loadByType(t ssa.Type) int {
+ // Avoid partial register write
+ if !t.IsFloat() && t.Size() <= 2 {
+ if t.Size() == 1 {
+ return x86.AMOVBLZX
+ } else {
+ return x86.AMOVWLZX
+ }
+ }
+ // Otherwise, there's no difference between load and store opcodes.
+ return storeByType(t)
+}
+
+// storeByType returns the store instruction of the given type.
+func storeByType(t ssa.Type) int {
+ width := t.Size()
+ if t.IsFloat() {
+ switch width {
+ case 4:
+ return x86.AMOVSS
+ case 8:
+ return x86.AMOVSD
+ }
+ } else {
+ switch width {
+ case 1:
+ return x86.AMOVB
+ case 2:
+ return x86.AMOVW
+ case 4:
+ return x86.AMOVL
+ case 8:
+ return x86.AMOVQ
+ }
+ }
+ panic("bad store type")
+}
+
+// moveByType returns the reg->reg move instruction of the given type.
+func moveByType(t ssa.Type) int {
+ if t.IsFloat() {
+ // Moving the whole sse2 register is faster
+ // than moving just the correct low portion of it.
+ // There is no xmm->xmm move with 1 byte opcode,
+ // so use movups, which has 2 byte opcode.
+ return x86.AMOVUPS
+ } else {
+ switch t.Size() {
+ case 1:
+ // Avoids partial register write
+ return x86.AMOVL
+ case 2:
+ return x86.AMOVL
+ case 4:
+ return x86.AMOVL
+ case 8:
+ return x86.AMOVQ
+ default:
+ panic("bad int register width")
+ }
+ }
+ panic("bad register type")
+}
+
+// regnum returns the register (in cmd/internal/obj numbering) to
+// which v has been allocated. Panics if v is not assigned to a
+// register.
+// TODO: Make this panic again once it stops happening routinely.
+func regnum(v *ssa.Value) int16 {
+ reg := v.Block.Func.RegAlloc[v.ID]
+ if reg == nil {
+ v.Unimplementedf("nil regnum for value: %s\n%s\n", v.LongString(), v.Block.Func)
+ return 0
+ }
+ return ssaRegToReg[reg.(*ssa.Register).Num]
+}
+
+// autoVar returns a *Node and int64 representing the auto variable and offset within it
+// where v should be spilled.
+func autoVar(v *ssa.Value) (*Node, int64) {
+ loc := v.Block.Func.RegAlloc[v.ID].(ssa.LocalSlot)
+ if v.Type.Size() > loc.Type.Size() {
+ v.Fatalf("spill/restore type %s doesn't fit in slot type %s", v.Type, loc.Type)
+ }
+ return loc.N.(*Node), loc.Off
+}
+
+// fieldIdx finds the index of the field referred to by the ODOT node n.
+func fieldIdx(n *Node) int64 {
+ t := n.Left.Type
+ f := n.Right
+ if t.Etype != TSTRUCT {
+ panic("ODOT's LHS is not a struct")
+ }
+
+ var i int64
+ for t1 := t.Type; t1 != nil; t1 = t1.Down {
+ if t1.Etype != TFIELD {
+ panic("non-TFIELD in TSTRUCT")
+ }
+ if t1.Sym != f.Sym {
+ i++
+ continue
+ }
+ if t1.Width != n.Xoffset {
+ panic("field offset doesn't match")
+ }
+ return i
+ }
+ panic(fmt.Sprintf("can't find field in expr %s\n", n))
+
+	// TODO: keep the result of this function somewhere in the ODOT Node
+ // so we don't have to recompute it each time we need it.
+}
+
+// ssaExport exports a bunch of compiler services for the ssa backend.
+type ssaExport struct {
+ log bool
+ unimplemented bool
+ mustImplement bool
+}
+
+func (s *ssaExport) TypeBool() ssa.Type { return Types[TBOOL] }
+func (s *ssaExport) TypeInt8() ssa.Type { return Types[TINT8] }
+func (s *ssaExport) TypeInt16() ssa.Type { return Types[TINT16] }
+func (s *ssaExport) TypeInt32() ssa.Type { return Types[TINT32] }
+func (s *ssaExport) TypeInt64() ssa.Type { return Types[TINT64] }
+func (s *ssaExport) TypeUInt8() ssa.Type { return Types[TUINT8] }
+func (s *ssaExport) TypeUInt16() ssa.Type { return Types[TUINT16] }
+func (s *ssaExport) TypeUInt32() ssa.Type { return Types[TUINT32] }
+func (s *ssaExport) TypeUInt64() ssa.Type { return Types[TUINT64] }
+func (s *ssaExport) TypeFloat32() ssa.Type { return Types[TFLOAT32] }
+func (s *ssaExport) TypeFloat64() ssa.Type { return Types[TFLOAT64] }
+func (s *ssaExport) TypeInt() ssa.Type { return Types[TINT] }
+func (s *ssaExport) TypeUintptr() ssa.Type { return Types[TUINTPTR] }
+func (s *ssaExport) TypeString() ssa.Type { return Types[TSTRING] }
+func (s *ssaExport) TypeBytePtr() ssa.Type { return Ptrto(Types[TUINT8]) }
+
+// StringData returns a symbol (a *Sym wrapped in an interface) which
+// is the data component of a global string constant containing s.
+func (*ssaExport) StringData(s string) interface{} {
+ // TODO: is idealstring correct? It might not matter...
+ _, data := stringsym(s)
+ return &ssa.ExternSymbol{Typ: idealstring, Sym: data}
+}
+
+func (e *ssaExport) Auto(t ssa.Type) ssa.GCNode {
+ n := temp(t.(*Type)) // Note: adds new auto to Curfn.Func.Dcl list
+ e.mustImplement = true // This modifies the input to SSA, so we want to make sure we succeed from here!
+ return n
+}
+
+func (e *ssaExport) CanSSA(t ssa.Type) bool {
+ return canSSAType(t.(*Type))
+}
+
+func (e *ssaExport) Line(line int32) string {
+ return Ctxt.Line(int(line))
+}
+
+// Logf logs a message from the compiler.
+func (e *ssaExport) Logf(msg string, args ...interface{}) {
+ // If e was marked as unimplemented, anything could happen. Ignore.
+ if e.log && !e.unimplemented {
+ fmt.Printf(msg, args...)
+ }
+}
+
+func (e *ssaExport) Log() bool {
+ return e.log
+}
+
+// Fatalf reports a compiler error and exits.
+func (e *ssaExport) Fatalf(line int32, msg string, args ...interface{}) {
+ // If e was marked as unimplemented, anything could happen. Ignore.
+ if !e.unimplemented {
+ lineno = line
+ Fatalf(msg, args...)
+ }
+}
+
+// Unimplementedf reports that the function cannot be compiled.
+// It will be removed once SSA work is complete.
+func (e *ssaExport) Unimplementedf(line int32, msg string, args ...interface{}) {
+ if e.mustImplement {
+ lineno = line
+ Fatalf(msg, args...)
+ }
+ const alwaysLog = false // enable to calculate top unimplemented features
+ if !e.unimplemented && (e.log || alwaysLog) {
+ // first implementation failure, print explanation
+ fmt.Printf("SSA unimplemented: "+msg+"\n", args...)
+ }
+ e.unimplemented = true
+}
+
+// Warnl reports a "warning", which is usually flag-triggered
+// logging output for the benefit of tests.
+func (e *ssaExport) Warnl(line int, fmt_ string, args ...interface{}) {
+ Warnl(line, fmt_, args...)
+}
+
+func (e *ssaExport) Debug_checknil() bool {
+ return Debug_checknil != 0
+}
+
+func (n *Node) Typ() ssa.Type {
+ return n.Type
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package gc
+
+import (
+ "bytes"
+ "internal/testenv"
+ "os/exec"
+ "path/filepath"
+ "runtime"
+ "strings"
+ "testing"
+)
+
+// TODO: move all these tests elsewhere?
+// Perhaps teach test/run.go how to run them with a new action verb.
+func runTest(t *testing.T, filename string) {
+ doTest(t, filename, "run")
+}
+func buildTest(t *testing.T, filename string) {
+ doTest(t, filename, "build")
+}
+func doTest(t *testing.T, filename string, kind string) {
+ if runtime.GOARCH != "amd64" {
+ t.Skipf("skipping SSA tests on %s for now", runtime.GOARCH)
+ }
+ testenv.MustHaveGoBuild(t)
+ var stdout, stderr bytes.Buffer
+ cmd := exec.Command("go", kind, filepath.Join("testdata", filename))
+ cmd.Stdout = &stdout
+ cmd.Stderr = &stderr
+ if err := cmd.Run(); err != nil {
+ t.Fatalf("Failed: %v:\nOut: %s\nStderr: %s\n", err, &stdout, &stderr)
+ }
+ if s := stdout.String(); s != "" {
+ t.Errorf("Stdout = %s\nWant empty", s)
+ }
+ if s := stderr.String(); strings.Contains(s, "SSA unimplemented") {
+ t.Errorf("Unimplemented message found in stderr:\n%s", s)
+ }
+}
+
+// TestShortCircuit tests OANDAND and OOROR expressions and short circuiting.
+func TestShortCircuit(t *testing.T) { runTest(t, "short_ssa.go") }
+
+// TestBreakContinue tests that continue and break statements do what they say.
+func TestBreakContinue(t *testing.T) { runTest(t, "break_ssa.go") }
+
+// TestTypeAssertion tests type assertions.
+func TestTypeAssertion(t *testing.T) { runTest(t, "assert_ssa.go") }
+
+// TestArithmetic tests that both backends have the same result for arithmetic expressions.
+func TestArithmetic(t *testing.T) { runTest(t, "arith_ssa.go") }
+
+// TestFP tests that both backends have the same result for floating point expressions.
+func TestFP(t *testing.T) { runTest(t, "fp_ssa.go") }
+
+// TestArithmeticBoundary tests boundary results for arithmetic operations.
+func TestArithmeticBoundary(t *testing.T) { runTest(t, "arithBoundary_ssa.go") }
+
+// TestArithmeticConst tests results for arithmetic operations against constants.
+func TestArithmeticConst(t *testing.T) { runTest(t, "arithConst_ssa.go") }
+
+func TestChan(t *testing.T) { runTest(t, "chan_ssa.go") }
+
+func TestCompound(t *testing.T) { runTest(t, "compound_ssa.go") }
+
+func TestCtl(t *testing.T) { runTest(t, "ctl_ssa.go") }
+
+func TestLoadStore(t *testing.T) { runTest(t, "loadstore_ssa.go") }
+
+func TestMap(t *testing.T) { runTest(t, "map_ssa.go") }
+
+func TestRegalloc(t *testing.T) { runTest(t, "regalloc_ssa.go") }
+
+func TestString(t *testing.T) { runTest(t, "string_ssa.go") }
+
+func TestDeferNoReturn(t *testing.T) { buildTest(t, "deferNoReturn_ssa.go") }
+
+// TestClosure tests closure related behavior.
+func TestClosure(t *testing.T) { runTest(t, "closure_ssa.go") }
+
+func TestArray(t *testing.T) { runTest(t, "array_ssa.go") }
+
+func TestAppend(t *testing.T) { runTest(t, "append_ssa.go") }
+
+func TestZero(t *testing.T) { runTest(t, "zero_ssa.go") }
+
+func TestAddressed(t *testing.T) { runTest(t, "addressed_ssa.go") }
+
+func TestCopy(t *testing.T) { runTest(t, "copy_ssa.go") }
+
+func TestUnsafe(t *testing.T) { runTest(t, "unsafe_ssa.go") }
+
+func TestPhi(t *testing.T) { runTest(t, "phi_ssa.go") }
type Error struct {
lineno int
- seq int
msg string
}
func adderr(line int, format string, args ...interface{}) {
errors = append(errors, Error{
- seq: len(errors),
lineno: line,
msg: fmt.Sprintf("%v: %s\n", Ctxt.Line(line), fmt.Sprintf(format, args...)),
})
}
-// errcmp sorts errors by line, then seq, then message.
-type errcmp []Error
+// byLineno sorts errors by lineno.
+type byLineno []Error
-func (x errcmp) Len() int { return len(x) }
-func (x errcmp) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
-func (x errcmp) Less(i, j int) bool {
- a := &x[i]
- b := &x[j]
- if a.lineno != b.lineno {
- return a.lineno < b.lineno
- }
- if a.seq != b.seq {
- return a.seq < b.seq
- }
- return a.msg < b.msg
-}
+func (x byLineno) Len() int { return len(x) }
+func (x byLineno) Less(i, j int) bool { return x[i].lineno < x[j].lineno }
+func (x byLineno) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func Flusherrors() {
bstdout.Flush()
if len(errors) == 0 {
return
}
- sort.Sort(errcmp(errors))
+ sort.Stable(byLineno(errors))
for i := 0; i < len(errors); i++ {
if i == 0 || errors[i].msg != errors[i-1].msg {
fmt.Printf("%s", errors[i].msg)
}
}
-var yyerror_lastsyntax int
+var yyerror_lastsyntax int32
func Yyerror(format string, args ...interface{}) {
msg := fmt.Sprintf(format, args...)
if strings.HasPrefix(msg, "syntax error") {
nsyntaxerrors++
- // An unexpected EOF caused a syntax error. Use the previous
- // line number since getc generated a fake newline character.
- if curio.eofnl {
- lexlineno = prevlineno
- }
-
// only one syntax error per line
- if int32(yyerror_lastsyntax) == lexlineno {
- return
- }
- yyerror_lastsyntax = int(lexlineno)
-
- // plain "syntax error" gets "near foo" added
- if msg == "syntax error" {
- yyerrorl(int(lexlineno), "syntax error near %s", lexbuf.String())
+ if yyerror_lastsyntax == lineno {
return
}
+ yyerror_lastsyntax = lineno
- yyerrorl(int(lexlineno), "%s", msg)
+ yyerrorl(int(lineno), "%s", msg)
return
}
}
s := &Sym{
- Name: name,
- Pkg: pkg,
- Lexical: LNAME,
+ Name: name,
+ Pkg: pkg,
}
if name == "init" {
initSyms = append(initSyms, s)
n.Orig = norig
}
-// ispaddedfield reports whether the given field
-// is followed by padding. For the case where t is
-// the last field, total gives the size of the enclosing struct.
-func ispaddedfield(t *Type, total int64) bool {
- if t.Etype != TFIELD {
- Fatalf("ispaddedfield called non-field %v", t)
- }
- if t.Down == nil {
- return t.Width+t.Type.Width != total
- }
- return t.Width+t.Type.Width != t.Down.Width
-}
-
-func algtype1(t *Type, bad **Type) int {
- if bad != nil {
- *bad = nil
- }
- if t.Broke {
- return AMEM
- }
- if t.Noalg {
- return ANOEQ
- }
-
- switch t.Etype {
- // will be defined later.
- case TANY, TFORW:
- *bad = t
-
- return -1
-
- case TINT8,
- TUINT8,
- TINT16,
- TUINT16,
- TINT32,
- TUINT32,
- TINT64,
- TUINT64,
- TINT,
- TUINT,
- TUINTPTR,
- TBOOL,
- TPTR32,
- TPTR64,
- TCHAN,
- TUNSAFEPTR:
- return AMEM
-
- case TFUNC, TMAP:
- if bad != nil {
- *bad = t
- }
- return ANOEQ
-
- case TFLOAT32:
- return AFLOAT32
-
- case TFLOAT64:
- return AFLOAT64
-
- case TCOMPLEX64:
- return ACPLX64
-
- case TCOMPLEX128:
- return ACPLX128
-
- case TSTRING:
- return ASTRING
-
- case TINTER:
- if isnilinter(t) {
- return ANILINTER
- }
- return AINTER
-
- case TARRAY:
- if Isslice(t) {
- if bad != nil {
- *bad = t
- }
- return ANOEQ
- }
-
- a := algtype1(t.Type, bad)
- if a == ANOEQ || a == AMEM {
- if a == ANOEQ && bad != nil {
- *bad = t
- }
- return a
- }
-
- return -1 // needs special compare
-
- case TSTRUCT:
- if t.Type != nil && t.Type.Down == nil && !isblanksym(t.Type.Sym) {
- // One-field struct is same as that one field alone.
- return algtype1(t.Type.Type, bad)
- }
-
- ret := AMEM
- var a int
- for t1 := t.Type; t1 != nil; t1 = t1.Down {
- // All fields must be comparable.
- a = algtype1(t1.Type, bad)
-
- if a == ANOEQ {
- return ANOEQ
- }
-
- // Blank fields, padded fields, fields with non-memory
- // equality need special compare.
- if a != AMEM || isblanksym(t1.Sym) || ispaddedfield(t1, t.Width) {
- ret = -1
- continue
- }
- }
-
- return ret
- }
-
- Fatalf("algtype1: unexpected type %v", t)
- return 0
-}
-
-func algtype(t *Type) int {
- a := algtype1(t, nil)
- if a == AMEM || a == ANOEQ {
- if Isslice(t) {
- return ASLICE
- }
- switch t.Width {
- case 0:
- return a + AMEM0 - AMEM
-
- case 1:
- return a + AMEM8 - AMEM
-
- case 2:
- return a + AMEM16 - AMEM
-
- case 4:
- return a + AMEM32 - AMEM
-
- case 8:
- return a + AMEM64 - AMEM
-
- case 16:
- return a + AMEM128 - AMEM
- }
- }
-
- return a
-}
-
func maptype(key *Type, val *Type) *Type {
if key != nil {
var bad *Type
}
if t1.Sym != nil || t2.Sym != nil {
// Special case: we keep byte and uint8 separate
- // for error messages. Treat them as equal.
+ // for error messages. Treat them as equal.
switch t1.Etype {
case TUINT8:
if (t1 == Types[TUINT8] || t1 == bytetype) && (t2 == Types[TUINT8] || t2 == bytetype) {
}
// The rules for interfaces are no different in conversions
- // than assignments. If interfaces are involved, stop now
+ // than assignments. If interfaces are involved, stop now
// with the good message from assignop.
// Otherwise clear the error.
if src.Etype == TINTER || dst.Etype == TINTER {
if Curfn != nil {
fmt.Printf("--- %v frame ---\n", Curfn.Func.Nname.Sym)
- for l := Curfn.Func.Dcl; l != nil; l = l.Next {
- printframenode(l.N)
+ for _, ln := range Curfn.Func.Dcl {
+ printframenode(ln)
}
}
}
l = list(l, nodlit(v)) // method name
call := Nod(OCALL, syslook("panicwrap", 0), nil)
call.List = l
- n.Nbody = list1(call)
- fn.Nbody = list(fn.Nbody, n)
+ n.Nbody.Set([]*Node{call})
+ fn.Nbody.Append(n)
}
dot := adddot(Nod(OXDOT, this.Left, newname(method.Sym)))
}
as := Nod(OAS, this.Left, Nod(OCONVNOP, dot, nil))
as.Right.Type = rcvr
- fn.Nbody = list(fn.Nbody, as)
+ fn.Nbody.Append(as)
n := Nod(ORETJMP, nil, nil)
n.Left = newname(methodsym(method.Sym, methodrcvr, 0))
- fn.Nbody = list(fn.Nbody, n)
+ fn.Nbody.Append(n)
} else {
fn.Func.Wrapper = true // ignore frame for panic+recover matching
call := Nod(OCALL, dot, nil)
call = n
}
- fn.Nbody = list(fn.Nbody, call)
+ fn.Nbody.Append(call)
}
if false && Debug['r'] != 0 {
- dumplist("genwrapper body", fn.Nbody)
+ dumpslice("genwrapper body", fn.Nbody.Slice())
}
funcbody(fn)
fn.Func.Dupok = true
}
typecheck(&fn, Etop)
- typechecklist(fn.Nbody, Etop)
+ typecheckslice(fn.Nbody.Slice(), Etop)
inlcalls(fn)
escAnalyze([]*Node{fn}, false)
return n
}
-func hashfor(t *Type) *Node {
- var sym *Sym
-
- a := algtype1(t, nil)
- switch a {
- case AMEM:
- Fatalf("hashfor with AMEM type")
-
- case AINTER:
- sym = Pkglookup("interhash", Runtimepkg)
-
- case ANILINTER:
- sym = Pkglookup("nilinterhash", Runtimepkg)
-
- case ASTRING:
- sym = Pkglookup("strhash", Runtimepkg)
-
- case AFLOAT32:
- sym = Pkglookup("f32hash", Runtimepkg)
-
- case AFLOAT64:
- sym = Pkglookup("f64hash", Runtimepkg)
-
- case ACPLX64:
- sym = Pkglookup("c64hash", Runtimepkg)
-
- case ACPLX128:
- sym = Pkglookup("c128hash", Runtimepkg)
-
- default:
- sym = typesymprefix(".hash", t)
- }
-
- n := newname(sym)
- n.Class = PFUNC
- tfn := Nod(OTFUNC, nil, nil)
- tfn.List = list(tfn.List, Nod(ODCLFIELD, nil, typenod(Ptrto(t))))
- tfn.List = list(tfn.List, Nod(ODCLFIELD, nil, typenod(Types[TUINTPTR])))
- tfn.Rlist = list(tfn.Rlist, Nod(ODCLFIELD, nil, typenod(Types[TUINTPTR])))
- typecheck(&tfn, Etype)
- n.Type = tfn.Type
- return n
-}
-
-// Generate a helper function to compute the hash of a value of type t.
-func genhash(sym *Sym, t *Type) {
- if Debug['r'] != 0 {
- fmt.Printf("genhash %v %v\n", sym, t)
- }
-
- lineno = 1 // less confusing than end of input
- dclcontext = PEXTERN
- markdcl()
-
- // func sym(p *T, h uintptr) uintptr
- fn := Nod(ODCLFUNC, nil, nil)
-
- fn.Func.Nname = newname(sym)
- fn.Func.Nname.Class = PFUNC
- tfn := Nod(OTFUNC, nil, nil)
- fn.Func.Nname.Name.Param.Ntype = tfn
-
- n := Nod(ODCLFIELD, newname(Lookup("p")), typenod(Ptrto(t)))
- tfn.List = list(tfn.List, n)
- np := n.Left
- n = Nod(ODCLFIELD, newname(Lookup("h")), typenod(Types[TUINTPTR]))
- tfn.List = list(tfn.List, n)
- nh := n.Left
- n = Nod(ODCLFIELD, nil, typenod(Types[TUINTPTR])) // return value
- tfn.Rlist = list(tfn.Rlist, n)
-
- funchdr(fn)
- typecheck(&fn.Func.Nname.Name.Param.Ntype, Etype)
-
- // genhash is only called for types that have equality but
- // cannot be handled by the standard algorithms,
- // so t must be either an array or a struct.
- switch t.Etype {
- default:
- Fatalf("genhash %v", t)
-
- case TARRAY:
- if Isslice(t) {
- Fatalf("genhash %v", t)
- }
-
- // An array of pure memory would be handled by the
- // standard algorithm, so the element type must not be
- // pure memory.
- hashel := hashfor(t.Type)
-
- n := Nod(ORANGE, nil, Nod(OIND, np, nil))
- ni := newname(Lookup("i"))
- ni.Type = Types[TINT]
- n.List = list1(ni)
- n.Colas = true
- colasdefn(n.List, n)
- ni = n.List.N
-
- // h = hashel(&p[i], h)
- call := Nod(OCALL, hashel, nil)
-
- nx := Nod(OINDEX, np, ni)
- nx.Bounded = true
- na := Nod(OADDR, nx, nil)
- na.Etype = 1 // no escape to heap
- call.List = list(call.List, na)
- call.List = list(call.List, nh)
- n.Nbody = list(n.Nbody, Nod(OAS, nh, call))
-
- fn.Nbody = list(fn.Nbody, n)
-
- // Walk the struct using memhash for runs of AMEM
- // and calling specific hash functions for the others.
- case TSTRUCT:
- var first *Type
-
- offend := int64(0)
- var size int64
- var call *Node
- var nx *Node
- var na *Node
- var hashel *Node
- for t1 := t.Type; ; t1 = t1.Down {
- if t1 != nil && algtype1(t1.Type, nil) == AMEM && !isblanksym(t1.Sym) {
- offend = t1.Width + t1.Type.Width
- if first == nil {
- first = t1
- }
-
- // If it's a memory field but it's padded, stop here.
- if ispaddedfield(t1, t.Width) {
- t1 = t1.Down
- } else {
- continue
- }
- }
-
- // Run memhash for fields up to this one.
- if first != nil {
- size = offend - first.Width // first->width is offset
- hashel = hashmem(first.Type)
-
- // h = hashel(&p.first, size, h)
- call = Nod(OCALL, hashel, nil)
-
- nx = Nod(OXDOT, np, newname(first.Sym)) // TODO: fields from other packages?
- na = Nod(OADDR, nx, nil)
- na.Etype = 1 // no escape to heap
- call.List = list(call.List, na)
- call.List = list(call.List, nh)
- call.List = list(call.List, Nodintconst(size))
- fn.Nbody = list(fn.Nbody, Nod(OAS, nh, call))
-
- first = nil
- }
-
- if t1 == nil {
- break
- }
- if isblanksym(t1.Sym) {
- continue
- }
-
- // Run hash for this field.
- if algtype1(t1.Type, nil) == AMEM {
- hashel = hashmem(t1.Type)
-
- // h = memhash(&p.t1, h, size)
- call = Nod(OCALL, hashel, nil)
-
- nx = Nod(OXDOT, np, newname(t1.Sym)) // TODO: fields from other packages?
- na = Nod(OADDR, nx, nil)
- na.Etype = 1 // no escape to heap
- call.List = list(call.List, na)
- call.List = list(call.List, nh)
- call.List = list(call.List, Nodintconst(t1.Type.Width))
- fn.Nbody = list(fn.Nbody, Nod(OAS, nh, call))
- } else {
- hashel = hashfor(t1.Type)
-
- // h = hashel(&p.t1, h)
- call = Nod(OCALL, hashel, nil)
-
- nx = Nod(OXDOT, np, newname(t1.Sym)) // TODO: fields from other packages?
- na = Nod(OADDR, nx, nil)
- na.Etype = 1 // no escape to heap
- call.List = list(call.List, na)
- call.List = list(call.List, nh)
- fn.Nbody = list(fn.Nbody, Nod(OAS, nh, call))
- }
- }
- }
-
- r := Nod(ORETURN, nil, nil)
- r.List = list(r.List, nh)
- fn.Nbody = list(fn.Nbody, r)
-
- if Debug['r'] != 0 {
- dumplist("genhash body", fn.Nbody)
- }
-
- funcbody(fn)
- Curfn = fn
- fn.Func.Dupok = true
- typecheck(&fn, Etop)
- typechecklist(fn.Nbody, Etop)
- Curfn = nil
-
- // Disable safemode while compiling this code: the code we
- // generate internally can refer to unsafe.Pointer.
- // In this case it can happen if we need to generate an ==
- // for a struct containing a reflect.Value, which itself has
- // an unexported field of type unsafe.Pointer.
- old_safemode := safemode
-
- safemode = 0
- funccompile(fn)
- safemode = old_safemode
-}
-
-// Return node for
-// if p.field != q.field { return false }
-func eqfield(p *Node, q *Node, field *Node) *Node {
- nx := Nod(OXDOT, p, field)
- ny := Nod(OXDOT, q, field)
- nif := Nod(OIF, nil, nil)
- nif.Left = Nod(ONE, nx, ny)
- r := Nod(ORETURN, nil, nil)
- r.List = list(r.List, Nodbool(false))
- nif.Nbody = list(nif.Nbody, r)
- return nif
-}
-
-func eqmemfunc(size int64, type_ *Type, needsize *int) *Node {
- var fn *Node
-
- switch size {
- default:
- fn = syslook("memequal", 1)
- *needsize = 1
-
- case 1, 2, 4, 8, 16:
- buf := fmt.Sprintf("memequal%d", int(size)*8)
- fn = syslook(buf, 1)
- *needsize = 0
- }
-
- substArgTypes(fn, type_, type_)
- return fn
-}
-
-// Return node for
-// if !memequal(&p.field, &q.field [, size]) { return false }
-func eqmem(p *Node, q *Node, field *Node, size int64) *Node {
- var needsize int
-
- nx := Nod(OADDR, Nod(OXDOT, p, field), nil)
- nx.Etype = 1 // does not escape
- ny := Nod(OADDR, Nod(OXDOT, q, field), nil)
- ny.Etype = 1 // does not escape
- typecheck(&nx, Erv)
- typecheck(&ny, Erv)
-
- call := Nod(OCALL, eqmemfunc(size, nx.Type.Type, &needsize), nil)
- call.List = list(call.List, nx)
- call.List = list(call.List, ny)
- if needsize != 0 {
- call.List = list(call.List, Nodintconst(size))
- }
-
- nif := Nod(OIF, nil, nil)
- nif.Left = Nod(ONOT, call, nil)
- r := Nod(ORETURN, nil, nil)
- r.List = list(r.List, Nodbool(false))
- nif.Nbody = list(nif.Nbody, r)
- return nif
-}
-
-// Generate a helper function to check equality of two values of type t.
-func geneq(sym *Sym, t *Type) {
- if Debug['r'] != 0 {
- fmt.Printf("geneq %v %v\n", sym, t)
- }
-
- lineno = 1 // less confusing than end of input
- dclcontext = PEXTERN
- markdcl()
-
- // func sym(p, q *T) bool
- fn := Nod(ODCLFUNC, nil, nil)
-
- fn.Func.Nname = newname(sym)
- fn.Func.Nname.Class = PFUNC
- tfn := Nod(OTFUNC, nil, nil)
- fn.Func.Nname.Name.Param.Ntype = tfn
-
- n := Nod(ODCLFIELD, newname(Lookup("p")), typenod(Ptrto(t)))
- tfn.List = list(tfn.List, n)
- np := n.Left
- n = Nod(ODCLFIELD, newname(Lookup("q")), typenod(Ptrto(t)))
- tfn.List = list(tfn.List, n)
- nq := n.Left
- n = Nod(ODCLFIELD, nil, typenod(Types[TBOOL]))
- tfn.Rlist = list(tfn.Rlist, n)
-
- funchdr(fn)
-
- // geneq is only called for types that have equality but
- // cannot be handled by the standard algorithms,
- // so t must be either an array or a struct.
- switch t.Etype {
- default:
- Fatalf("geneq %v", t)
-
- case TARRAY:
- if Isslice(t) {
- Fatalf("geneq %v", t)
- }
-
- // An array of pure memory would be handled by the
- // standard memequal, so the element type must not be
- // pure memory. Even if we unrolled the range loop,
- // each iteration would be a function call, so don't bother
- // unrolling.
- nrange := Nod(ORANGE, nil, Nod(OIND, np, nil))
-
- ni := newname(Lookup("i"))
- ni.Type = Types[TINT]
- nrange.List = list1(ni)
- nrange.Colas = true
- colasdefn(nrange.List, nrange)
- ni = nrange.List.N
-
- // if p[i] != q[i] { return false }
- nx := Nod(OINDEX, np, ni)
-
- nx.Bounded = true
- ny := Nod(OINDEX, nq, ni)
- ny.Bounded = true
-
- nif := Nod(OIF, nil, nil)
- nif.Left = Nod(ONE, nx, ny)
- r := Nod(ORETURN, nil, nil)
- r.List = list(r.List, Nodbool(false))
- nif.Nbody = list(nif.Nbody, r)
- nrange.Nbody = list(nrange.Nbody, nif)
- fn.Nbody = list(fn.Nbody, nrange)
-
- // Walk the struct using memequal for runs of AMEM
- // and calling specific equality tests for the others.
- // Skip blank-named fields.
- case TSTRUCT:
- var first *Type
-
- offend := int64(0)
- var size int64
- for t1 := t.Type; ; t1 = t1.Down {
- if t1 != nil && algtype1(t1.Type, nil) == AMEM && !isblanksym(t1.Sym) {
- offend = t1.Width + t1.Type.Width
- if first == nil {
- first = t1
- }
-
- // If it's a memory field but it's padded, stop here.
- if ispaddedfield(t1, t.Width) {
- t1 = t1.Down
- } else {
- continue
- }
- }
-
- // Run memequal for fields up to this one.
- // TODO(rsc): All the calls to newname are wrong for
- // cross-package unexported fields.
- if first != nil {
- if first.Down == t1 {
- fn.Nbody = list(fn.Nbody, eqfield(np, nq, newname(first.Sym)))
- } else if first.Down.Down == t1 {
- fn.Nbody = list(fn.Nbody, eqfield(np, nq, newname(first.Sym)))
- first = first.Down
- if !isblanksym(first.Sym) {
- fn.Nbody = list(fn.Nbody, eqfield(np, nq, newname(first.Sym)))
- }
- } else {
- // More than two fields: use memequal.
- size = offend - first.Width // first->width is offset
- fn.Nbody = list(fn.Nbody, eqmem(np, nq, newname(first.Sym), size))
- }
-
- first = nil
- }
-
- if t1 == nil {
- break
- }
- if isblanksym(t1.Sym) {
- continue
- }
-
- // Check this field, which is not just memory.
- fn.Nbody = list(fn.Nbody, eqfield(np, nq, newname(t1.Sym)))
- }
- }
-
- // return true
- r := Nod(ORETURN, nil, nil)
-
- r.List = list(r.List, Nodbool(true))
- fn.Nbody = list(fn.Nbody, r)
-
- if Debug['r'] != 0 {
- dumplist("geneq body", fn.Nbody)
- }
-
- funcbody(fn)
- Curfn = fn
- fn.Func.Dupok = true
- typecheck(&fn, Etop)
- typechecklist(fn.Nbody, Etop)
- Curfn = nil
-
- // Disable safemode while compiling this code: the code we
- // generate internally can refer to unsafe.Pointer.
- // In this case it can happen if we need to generate an ==
- // for a struct containing a reflect.Value, which itself has
- // an unexported field of type unsafe.Pointer.
- old_safemode := safemode
-
- safemode = 0
- funccompile(fn)
- safemode = old_safemode
-}
-
func ifacelookdot(s *Sym, t *Type, followptr *bool, ignorecase int) *Type {
*followptr = false
return n
}
+func liststmtslice(l []*Node) *Node {
+ var ll *NodeList
+ for _, n := range l {
+ ll = list(ll, n)
+ }
+ return liststmt(ll)
+}
+
// return nelem of list
func structcount(t *Type) int {
var s Iter
}
// Convert raw string to the prefix that will be used in the symbol
-// table. All control characters, space, '%' and '"', as well as
-// non-7-bit clean bytes turn into %xx. The period needs escaping
+// table. All control characters, space, '%' and '"', as well as
+// non-7-bit clean bytes turn into %xx. The period needs escaping
// only in the last segment of the path, and it makes for happier
// users if we escape that as little as possible.
//
n.Ullman = UINF
}
+func addinitslice(np **Node, init []*Node) {
+ var l *NodeList
+ for _, n := range init {
+ l = list(l, n)
+ }
+ addinit(np, l)
+}
+
var reservedimports = []string{
"go",
"type",
}
}
- typechecklist(ncase.Nbody, Etop)
+ typecheckslice(ncase.Nbody.Slice(), Etop)
}
lineno = int32(lno)
}
// convert the switch into OIF statements
- var cas *NodeList
+ var cas []*Node
if s.kind == switchKindTrue || s.kind == switchKindFalse {
s.exprname = Nodbool(s.kind == switchKindTrue)
} else if consttype(cond) >= 0 {
s.exprname = cond
} else {
s.exprname = temp(cond.Type)
- cas = list1(Nod(OAS, s.exprname, cond))
- typechecklist(cas, Etop)
+ cas = []*Node{Nod(OAS, s.exprname, cond)}
+ typecheckslice(cas, Etop)
}
// enumerate the cases, and lop off the default case
// deal with expressions one at a time
if !okforcmp[t.Etype] || cc[0].typ != caseKindExprConst {
a := s.walkCases(cc[:1])
- cas = list(cas, a)
+ cas = append(cas, a)
cc = cc[1:]
continue
}
// sort and compile constants
sort.Sort(caseClauseByExpr(cc[:run]))
a := s.walkCases(cc[:run])
- cas = list(cas, a)
+ cas = append(cas, a)
cc = cc[run:]
}
// handle default case
if nerrors == 0 {
- cas = list(cas, def)
- sw.Nbody = concat(cas, sw.Nbody)
- walkstmtlist(sw.Nbody)
+ cas = append(cas, def)
+ sw.Nbody.Set(append(cas, sw.Nbody.Slice()...))
+ walkstmtslice(sw.Nbody.Slice())
}
}
a.Left = Nod(ONOT, n.Left, nil) // if !val
typecheck(&a.Left, Erv)
}
- a.Nbody = list1(n.Right) // goto l
+ a.Nbody.Set([]*Node{n.Right}) // goto l
cas = list(cas, a)
lineno = int32(lno)
a.Left = le
}
typecheck(&a.Left, Erv)
- a.Nbody = list1(s.walkCases(cc[:half]))
+ a.Nbody.Set([]*Node{s.walkCases(cc[:half])})
a.Rlist = list1(s.walkCases(cc[half:]))
return a
}
lno := setlineno(sw)
- var cas *NodeList // cases
- var stat *NodeList // statements
- var def *Node // defaults
+ var cas *NodeList // cases
+ var stat []*Node // statements
+ var def *Node // defaults
br := Nod(OBREAK, nil, nil)
for l := sw.List; l != nil; l = l.Next {
}
}
- stat = list(stat, Nod(OLABEL, jmp.Left, nil))
+ stat = append(stat, Nod(OLABEL, jmp.Left, nil))
if typeswvar != nil && needvar && n.Rlist != nil {
- l := list1(Nod(ODCL, n.Rlist.N, nil))
- l = list(l, Nod(OAS, n.Rlist.N, typeswvar))
- typechecklist(l, Etop)
- stat = concat(stat, l)
+ l := []*Node{
+ Nod(ODCL, n.Rlist.N, nil),
+ Nod(OAS, n.Rlist.N, typeswvar),
+ }
+ typecheckslice(l, Etop)
+ stat = append(stat, l...)
}
- stat = concat(stat, n.Nbody)
+ stat = append(stat, n.Nbody.Slice()...)
// botch - shouldn't fall thru declaration
- last := stat.End.N
+ last := stat[len(stat)-1]
if last.Xoffset == n.Xoffset && last.Op == OXFALL {
if typeswvar != nil {
setlineno(last)
last.Op = OFALL
} else {
- stat = list(stat, br)
+ stat = append(stat, br)
}
}
- stat = list(stat, br)
+ stat = append(stat, br)
if def != nil {
cas = list(cas, def)
}
sw.List = cas
- sw.Nbody = stat
+ sw.Nbody.Set(stat)
lineno = lno
}
return
}
- var cas *NodeList
+ var cas []*Node
// predeclare temporary variables and the boolean var
s.facename = temp(cond.Right.Type)
a := Nod(OAS, s.facename, cond.Right)
typecheck(&a, Etop)
- cas = list(cas, a)
+ cas = append(cas, a)
s.okname = temp(Types[TBOOL])
typecheck(&s.okname, Erv)
// set up labels and jumps
casebody(sw, s.facename)
- // calculate type hash
- t := cond.Right.Type
- if isnilinter(t) {
- a = syslook("efacethash", 1)
- } else {
- a = syslook("ifacethash", 1)
- }
- substArgTypes(a, t)
- a = Nod(OCALL, a, nil)
- a.List = list1(s.facename)
- a = Nod(OAS, s.hashname, a)
- typecheck(&a, Etop)
- cas = list(cas, a)
-
cc := caseClauses(sw, switchKindType)
sw.List = nil
var def *Node
} else {
def = Nod(OBREAK, nil, nil)
}
+ var typenil *Node
+ if len(cc) > 0 && cc[0].typ == caseKindTypeNil {
+ typenil = cc[0].node.Right
+ cc = cc[1:]
+ }
+
+ // For empty interfaces, do:
+ // if e._type == nil {
+ // do nil case if it exists, otherwise default
+ // }
+ // h := e._type.hash
+ // Use a similar strategy for non-empty interfaces.
+
+ // Get interface descriptor word.
+ typ := Nod(OITAB, s.facename, nil)
+
+ // Check for nil first.
+ i := Nod(OIF, nil, nil)
+ i.Left = Nod(OEQ, typ, nodnil())
+ if typenil != nil {
+ // Do explicit nil case right here.
+ i.Nbody.Set([]*Node{typenil})
+ } else {
+ // Jump to default case.
+ lbl := newCaseLabel()
+ i.Nbody.Set([]*Node{Nod(OGOTO, lbl, nil)})
+ // Wrap default case with label.
+ blk := Nod(OBLOCK, nil, nil)
+ blk.List = list(list1(Nod(OLABEL, lbl, nil)), def)
+ def = blk
+ }
+ typecheck(&i.Left, Erv)
+ cas = append(cas, i)
+
+ if !isnilinter(cond.Right.Type) {
+ // Load type from itab.
+ typ = Nod(ODOTPTR, typ, nil)
+ typ.Type = Ptrto(Types[TUINT8])
+ typ.Typecheck = 1
+ typ.Xoffset = int64(Widthptr) // offset of _type in runtime.itab
+ typ.Bounded = true // guaranteed not to fault
+ }
+ // Load hash from type.
+ h := Nod(ODOTPTR, typ, nil)
+ h.Type = Types[TUINT32]
+ h.Typecheck = 1
+ h.Xoffset = int64(2 * Widthptr) // offset of hash in runtime._type
+ h.Bounded = true // guaranteed not to fault
+ a = Nod(OAS, s.hashname, h)
+ typecheck(&a, Etop)
+ cas = append(cas, a)
// insert type equality check into each case block
for _, c := range cc {
n := c.node
switch c.typ {
- case caseKindTypeNil:
- var v Val
- v.U = new(NilVal)
- a = Nod(OIF, nil, nil)
- a.Left = Nod(OEQ, s.facename, nodlit(v))
- typecheck(&a.Left, Erv)
- a.Nbody = list1(n.Right) // if i==nil { goto l }
- n.Right = a
-
case caseKindTypeVar, caseKindTypeConst:
n.Right = s.typeone(n)
+ default:
+ Fatalf("typeSwitch with bad kind: %d", c.typ)
}
}
for len(cc) > 0 {
if cc[0].typ != caseKindTypeConst {
n := cc[0].node
- cas = list(cas, n.Right)
+ cas = append(cas, n.Right)
cc = cc[1:]
continue
}
if false {
for i := 0; i < run; i++ {
n := cc[i].node
- cas = list(cas, n.Right)
+ cas = append(cas, n.Right)
}
continue
}
}
// binary search among cases to narrow by hash
- cas = list(cas, s.walkCases(cc[:ncase]))
+ cas = append(cas, s.walkCases(cc[:ncase]))
cc = cc[ncase:]
}
// handle default case
if nerrors == 0 {
- cas = list(cas, def)
- sw.Nbody = concat(cas, sw.Nbody)
+ cas = append(cas, def)
+ sw.Nbody.Set(append(cas, sw.Nbody.Slice()...))
sw.List = nil
- walkstmtlist(sw.Nbody)
+ walkstmtslice(sw.Nbody.Slice())
}
}
c := Nod(OIF, nil, nil)
c.Left = s.okname
- c.Nbody = list1(t.Right) // if ok { goto l }
+ c.Nbody.Set([]*Node{t.Right}) // if ok { goto l }
return liststmt(list(init, c))
}
a := Nod(OIF, nil, nil)
a.Left = Nod(OEQ, s.hashname, Nodintconst(int64(c.hash)))
typecheck(&a.Left, Erv)
- a.Nbody = list1(n.Right)
+ a.Nbody.Set([]*Node{n.Right})
cas = list(cas, a)
}
return liststmt(cas)
a := Nod(OIF, nil, nil)
a.Left = Nod(OLE, s.hashname, Nodintconst(int64(cc[half-1].hash)))
typecheck(&a.Left, Erv)
- a.Nbody = list1(s.walkCases(cc[:half]))
+ a.Nbody.Set([]*Node{s.walkCases(cc[:half])})
a.Rlist = list1(s.walkCases(cc[half:]))
return a
}
Left *Node
Right *Node
Ninit *NodeList
- Nbody *NodeList
+ Nbody Nodes
List *NodeList
Rlist *NodeList
// Func holds Node fields used only with function-like nodes.
type Func struct {
Shortname *Node
- Enter *NodeList
- Exit *NodeList
- Cvars *NodeList // closure params
- Dcl *NodeList // autodcl for this func/closure
- Inldcl *NodeList // copy of dcl for use in inlining
+ Enter Nodes // for example, allocate and initialize memory for escaping parameters
+ Exit Nodes
+ Cvars Nodes // closure params
+ Dcl []*Node // autodcl for this func/closure
+ Inldcl *[]*Node // copy of dcl for use in inlining
Closgen int
Outerfunc *Node
Fieldtrack []*Type
FCurfn *Node
Nname *Node
- Inl *NodeList // copy of the body for use in inlining
+ Inl Nodes // copy of the body for use in inlining
InlCost int32
Depth int32
Endlineno int32
+ WBLineno int32 // line number of first write barrier
- Norace bool // func must not have race detector annotations
- Nosplit bool // func should not execute on separate stack
- Noinline bool // func should not be inlined
- Nowritebarrier bool // emit compiler error instead of write barrier
- Nowritebarrierrec bool // error on write barrier in this or recursive callees
- Dupok bool // duplicate definitions ok
- Wrapper bool // is method wrapper
- Needctxt bool // function uses context register (has closure variables)
- Systemstack bool // must run on system stack
-
- WBLineno int32 // line number of first write barrier
+ Pragma Pragma // go:xxx function annotations
+ Dupok bool // duplicate definitions ok
+ Wrapper bool // is method wrapper
+ Needctxt bool // function uses context register (has closure variables)
}
type Op uint8
}
return int(n)
}
+
+// Nodes holds a pointer to a slice of *Node.
+// For fields that are not used in most nodes, this is used instead of
+// a slice to save space.
+type Nodes struct{ slice *[]*Node }
+
+// Slice returns the entries in Nodes as a slice.
+// Changes to the slice entries (as in s[i] = n) will be reflected in
+// the Nodes.
+func (n *Nodes) Slice() []*Node {
+ if n.slice == nil {
+ return nil
+ }
+ return *n.slice
+}
+
+// NodeList returns the entries in Nodes as a NodeList.
+// Changes to the NodeList entries (as in l.N = n) will *not* be
+// reflected in the Nodes.
+// This wastes memory and should be used as little as possible.
+func (n *Nodes) NodeList() *NodeList {
+ if n.slice == nil {
+ return nil
+ }
+ var ret *NodeList
+ for _, n := range *n.slice {
+ ret = list(ret, n)
+ }
+ return ret
+}
+
+// Set sets Nodes to a slice.
+// This takes ownership of the slice.
+func (n *Nodes) Set(s []*Node) {
+ if len(s) == 0 {
+ n.slice = nil
+ } else {
+ n.slice = &s
+ }
+}
+
+// Append appends entries to Nodes.
+// If a slice is passed in, this will take ownership of it.
+func (n *Nodes) Append(a ...*Node) {
+ if n.slice == nil {
+ if len(a) > 0 {
+ n.slice = &a
+ }
+ } else {
+ *n.slice = append(*n.slice, a...)
+ }
+}
+
+// SetToNodeList sets Nodes to the contents of a NodeList.
+func (n *Nodes) SetToNodeList(l *NodeList) {
+ s := make([]*Node, 0, count(l))
+ for ; l != nil; l = l.Next {
+ s = append(s, l.N)
+ }
+ n.Set(s)
+}
+
+// AppendNodeList appends the contents of a NodeList.
+func (n *Nodes) AppendNodeList(l *NodeList) {
+ if n.slice == nil {
+ n.SetToNodeList(l)
+ } else {
+ for ; l != nil; l = l.Next {
+ *n.slice = append(*n.slice, l.N)
+ }
+ }
+}
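The Nodes additions above are easier to follow outside the diff. Below is a minimal standalone sketch of the ownership and aliasing semantics, with a stand-in `Node` type rather than the compiler's actual `gc.Node`; it mirrors `Slice`, `Set`, and `Append` as added in this patch.

```go
package main

import "fmt"

// Node stands in for the compiler's *gc.Node; any payload works here.
type Node struct{ Op string }

// Nodes mirrors the new gc type: a struct holding a pointer to a slice,
// so an empty Nodes costs one word inside a Node.
type Nodes struct{ slice *[]*Node }

// Slice returns the entries; writes through the returned slice
// (s[i] = n) are visible in the Nodes.
func (n *Nodes) Slice() []*Node {
	if n.slice == nil {
		return nil
	}
	return *n.slice
}

// Set takes ownership of s; callers must not reuse it afterwards.
func (n *Nodes) Set(s []*Node) {
	if len(s) == 0 {
		n.slice = nil
	} else {
		n.slice = &s
	}
}

// Append appends entries; a variadic slice passed in may be retained.
func (n *Nodes) Append(a ...*Node) {
	if n.slice == nil {
		if len(a) > 0 {
			n.slice = &a
		}
	} else {
		*n.slice = append(*n.slice, a...)
	}
}

func main() {
	var body Nodes
	body.Set([]*Node{{Op: "OAS"}})
	body.Append(&Node{Op: "OBREAK"})
	// Assignment through Slice() is reflected in the Nodes itself.
	body.Slice()[0] = &Node{Op: "OIF"}
	for _, n := range body.Slice() {
		fmt.Println(n.Op)
	}
}
```

This matches the pattern used throughout the hunks above, e.g. `sw.Nbody.Set(append(cas, sw.Nbody.Slice()...))`.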
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+import "fmt"
+
+var output string
+
+func mypanic(s string) {
+	fmt.Print(output)
+ panic(s)
+}
+
+func assertEqual(x, y int) {
+ if x != y {
+ mypanic("assertEqual failed")
+ }
+}
+
+func main() {
+ x := f1_ssa(2, 3)
+ output += fmt.Sprintln("*x is", *x)
+ output += fmt.Sprintln("Gratuitously use some stack")
+ output += fmt.Sprintln("*x is", *x)
+ assertEqual(*x, 9)
+
+ w := f3a_ssa(6)
+ output += fmt.Sprintln("*w is", *w)
+ output += fmt.Sprintln("Gratuitously use some stack")
+ output += fmt.Sprintln("*w is", *w)
+ assertEqual(*w, 6)
+
+ y := f3b_ssa(12)
+ output += fmt.Sprintln("*y.(*int) is", *y.(*int))
+ output += fmt.Sprintln("Gratuitously use some stack")
+ output += fmt.Sprintln("*y.(*int) is", *y.(*int))
+ assertEqual(*y.(*int), 12)
+
+ z := f3c_ssa(8)
+ output += fmt.Sprintln("*z.(*int) is", *z.(*int))
+ output += fmt.Sprintln("Gratuitously use some stack")
+ output += fmt.Sprintln("*z.(*int) is", *z.(*int))
+ assertEqual(*z.(*int), 8)
+
+ args()
+ test_autos()
+}
+
+func f1_ssa(x, y int) *int {
+ switch {
+ } //go:noinline
+ x = x*y + y
+ return &x
+}
+
+func f3a_ssa(x int) *int {
+ switch {
+ } //go:noinline
+ return &x
+}
+
+func f3b_ssa(x int) interface{} { // ./foo.go:15: internal error: f3b_ssa ~r1 (type interface {}) recorded as live on entry
+ switch {
+ } //go:noinline
+ return &x
+}
+
+func f3c_ssa(y int) interface{} {
+ switch {
+ } //go:noinline
+ x := y
+ return &x
+}
+
+type V struct {
+ p *V
+ w, x int64
+}
+
+func args() {
+ v := V{p: nil, w: 1, x: 1}
+ a := V{p: &v, w: 2, x: 2}
+ b := V{p: &v, w: 0, x: 0}
+ i := v.args_ssa(a, b)
+ output += fmt.Sprintln("i=", i)
+ assertEqual(int(i), 2)
+}
+
+func (v V) args_ssa(a, b V) int64 {
+ switch {
+ } //go:noinline
+ if v.w == 0 {
+ return v.x
+ }
+ if v.w == 1 {
+ return a.x
+ }
+ if v.w == 2 {
+ return b.x
+ }
+ b.p.p = &a // v.p in caller = &a
+
+ return -1
+}
+
+func test_autos() {
+ test(11)
+ test(12)
+ test(13)
+ test(21)
+ test(22)
+ test(23)
+ test(31)
+ test(32)
+}
+
+func test(which int64) {
+ output += fmt.Sprintln("test", which)
+ v1 := V{w: 30, x: 3, p: nil}
+ v2, v3 := v1.autos_ssa(which, 10, 1, 20, 2)
+ if which != v2.val() {
+ output += fmt.Sprintln("Expected which=", which, "got v2.val()=", v2.val())
+ mypanic("Failure of expected V value")
+ }
+ if v2.p.val() != v3.val() {
+ output += fmt.Sprintln("Expected v2.p.val()=", v2.p.val(), "got v3.val()=", v3.val())
+ mypanic("Failure of expected V.p value")
+ }
+ if which != v3.p.p.p.p.p.p.p.val() {
+ output += fmt.Sprintln("Expected which=", which, "got v3.p.p.p.p.p.p.p.val()=", v3.p.p.p.p.p.p.p.val())
+ mypanic("Failure of expected V.p value")
+ }
+}
+
+func (v V) val() int64 {
+ return v.w + v.x
+}
+
+// autos_ssa uses contents of v and parameters w1, w2, x1, x2
+// to initialize a bunch of locals, all of which have their
+// address taken to force heap allocation, and then based on
+// the value of which a pair of those locals are copied in
+// various ways to the two results y and z, which are also
+// addressed. The parameter which is expected to be one of 11-13, 21-23, 31, 32,
+// and y.val() should be equal to which and y.p.val() should
+// be equal to z.val(). Also, x(.p)**8 == x; that is, the
+// autos are all linked into a ring.
+func (v V) autos_ssa(which, w1, x1, w2, x2 int64) (y, z V) {
+ switch {
+ } //go:noinline
+ fill_ssa(v.w, v.x, &v, v.p) // gratuitous no-op to force addressing
+ var a, b, c, d, e, f, g, h V
+ fill_ssa(w1, x1, &a, &b)
+ fill_ssa(w1, x2, &b, &c)
+ fill_ssa(w1, v.x, &c, &d)
+ fill_ssa(w2, x1, &d, &e)
+ fill_ssa(w2, x2, &e, &f)
+ fill_ssa(w2, v.x, &f, &g)
+ fill_ssa(v.w, x1, &g, &h)
+ fill_ssa(v.w, x2, &h, &a)
+ switch which {
+ case 11:
+ y = a
+ z.getsI(&b)
+ case 12:
+ y.gets(&b)
+ z = c
+ case 13:
+ y.gets(&c)
+ z = d
+ case 21:
+ y.getsI(&d)
+ z.gets(&e)
+ case 22:
+ y = e
+ z = f
+ case 23:
+ y.gets(&f)
+ z.getsI(&g)
+ case 31:
+ y = g
+ z.gets(&h)
+ case 32:
+ y.getsI(&h)
+ z = a
+ default:
+ panic("")
+ }
+ return
+}
+
+// gets is an address-mentioning way of implementing
+// structure assignment.
+func (to *V) gets(from *V) {
+ switch {
+ } //go:noinline
+ *to = *from
+}
+
+// getsI is an address-and-interface-mentioning way of
+// implementing structure assignment.
+func (to *V) getsI(from interface{}) {
+ switch {
+ } //go:noinline
+ *to = *from.(*V)
+}
+
+// fill_ssa initializes r with V{w:w, x:x, p:p}
+func fill_ssa(w, x int64, r, p *V) {
+ switch {
+ } //go:noinline
+ *r = V{w: w, x: x, p: p}
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// append_ssa.go tests append operations.
+package main
+
+import "fmt"
+
+var failed = false
+
+//go:noinline
+func appendOne_ssa(a []int, x int) []int {
+ return append(a, x)
+}
+
+//go:noinline
+func appendThree_ssa(a []int, x, y, z int) []int {
+ return append(a, x, y, z)
+}
+
+func eq(a, b []int) bool {
+ if len(a) != len(b) {
+ return false
+ }
+ for i := range a {
+ if a[i] != b[i] {
+ return false
+ }
+ }
+ return true
+}
+
+func expect(got, want []int) {
+ if eq(got, want) {
+ return
+ }
+ fmt.Printf("expected %v, got %v\n", want, got)
+ failed = true
+}
+
+func testAppend() {
+ var store [7]int
+ a := store[:0]
+
+ a = appendOne_ssa(a, 1)
+ expect(a, []int{1})
+ a = appendThree_ssa(a, 2, 3, 4)
+ expect(a, []int{1, 2, 3, 4})
+ a = appendThree_ssa(a, 5, 6, 7)
+ expect(a, []int{1, 2, 3, 4, 5, 6, 7})
+ if &a[0] != &store[0] {
+ fmt.Println("unnecessary grow")
+ failed = true
+ }
+ a = appendOne_ssa(a, 8)
+ expect(a, []int{1, 2, 3, 4, 5, 6, 7, 8})
+ if &a[0] == &store[0] {
+ fmt.Println("didn't grow")
+ failed = true
+ }
+}
+
+func main() {
+ testAppend()
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+package main
+
+import "fmt"
+
+type utd64 struct {
+ a, b uint64
+ add, sub, mul, div, mod uint64
+}
+type itd64 struct {
+ a, b int64
+ add, sub, mul, div, mod int64
+}
+type utd32 struct {
+ a, b uint32
+ add, sub, mul, div, mod uint32
+}
+type itd32 struct {
+ a, b int32
+ add, sub, mul, div, mod int32
+}
+type utd16 struct {
+ a, b uint16
+ add, sub, mul, div, mod uint16
+}
+type itd16 struct {
+ a, b int16
+ add, sub, mul, div, mod int16
+}
+type utd8 struct {
+ a, b uint8
+ add, sub, mul, div, mod uint8
+}
+type itd8 struct {
+ a, b int8
+ add, sub, mul, div, mod int8
+}
+
+//go:noinline
+func add_uint64_ssa(a, b uint64) uint64 {
+ return a + b
+}
+
+//go:noinline
+func sub_uint64_ssa(a, b uint64) uint64 {
+ return a - b
+}
+
+//go:noinline
+func div_uint64_ssa(a, b uint64) uint64 {
+ return a / b
+}
+
+//go:noinline
+func mod_uint64_ssa(a, b uint64) uint64 {
+ return a % b
+}
+
+//go:noinline
+func mul_uint64_ssa(a, b uint64) uint64 {
+ return a * b
+}
+
+//go:noinline
+func add_int64_ssa(a, b int64) int64 {
+ return a + b
+}
+
+//go:noinline
+func sub_int64_ssa(a, b int64) int64 {
+ return a - b
+}
+
+//go:noinline
+func div_int64_ssa(a, b int64) int64 {
+ return a / b
+}
+
+//go:noinline
+func mod_int64_ssa(a, b int64) int64 {
+ return a % b
+}
+
+//go:noinline
+func mul_int64_ssa(a, b int64) int64 {
+ return a * b
+}
+
+//go:noinline
+func add_uint32_ssa(a, b uint32) uint32 {
+ return a + b
+}
+
+//go:noinline
+func sub_uint32_ssa(a, b uint32) uint32 {
+ return a - b
+}
+
+//go:noinline
+func div_uint32_ssa(a, b uint32) uint32 {
+ return a / b
+}
+
+//go:noinline
+func mod_uint32_ssa(a, b uint32) uint32 {
+ return a % b
+}
+
+//go:noinline
+func mul_uint32_ssa(a, b uint32) uint32 {
+ return a * b
+}
+
+//go:noinline
+func add_int32_ssa(a, b int32) int32 {
+ return a + b
+}
+
+//go:noinline
+func sub_int32_ssa(a, b int32) int32 {
+ return a - b
+}
+
+//go:noinline
+func div_int32_ssa(a, b int32) int32 {
+ return a / b
+}
+
+//go:noinline
+func mod_int32_ssa(a, b int32) int32 {
+ return a % b
+}
+
+//go:noinline
+func mul_int32_ssa(a, b int32) int32 {
+ return a * b
+}
+
+//go:noinline
+func add_uint16_ssa(a, b uint16) uint16 {
+ return a + b
+}
+
+//go:noinline
+func sub_uint16_ssa(a, b uint16) uint16 {
+ return a - b
+}
+
+//go:noinline
+func div_uint16_ssa(a, b uint16) uint16 {
+ return a / b
+}
+
+//go:noinline
+func mod_uint16_ssa(a, b uint16) uint16 {
+ return a % b
+}
+
+//go:noinline
+func mul_uint16_ssa(a, b uint16) uint16 {
+ return a * b
+}
+
+//go:noinline
+func add_int16_ssa(a, b int16) int16 {
+ return a + b
+}
+
+//go:noinline
+func sub_int16_ssa(a, b int16) int16 {
+ return a - b
+}
+
+//go:noinline
+func div_int16_ssa(a, b int16) int16 {
+ return a / b
+}
+
+//go:noinline
+func mod_int16_ssa(a, b int16) int16 {
+ return a % b
+}
+
+//go:noinline
+func mul_int16_ssa(a, b int16) int16 {
+ return a * b
+}
+
+//go:noinline
+func add_uint8_ssa(a, b uint8) uint8 {
+ return a + b
+}
+
+//go:noinline
+func sub_uint8_ssa(a, b uint8) uint8 {
+ return a - b
+}
+
+//go:noinline
+func div_uint8_ssa(a, b uint8) uint8 {
+ return a / b
+}
+
+//go:noinline
+func mod_uint8_ssa(a, b uint8) uint8 {
+ return a % b
+}
+
+//go:noinline
+func mul_uint8_ssa(a, b uint8) uint8 {
+ return a * b
+}
+
+//go:noinline
+func add_int8_ssa(a, b int8) int8 {
+ return a + b
+}
+
+//go:noinline
+func sub_int8_ssa(a, b int8) int8 {
+ return a - b
+}
+
+//go:noinline
+func div_int8_ssa(a, b int8) int8 {
+ return a / b
+}
+
+//go:noinline
+func mod_int8_ssa(a, b int8) int8 {
+ return a % b
+}
+
+//go:noinline
+func mul_int8_ssa(a, b int8) int8 {
+ return a * b
+}
+
+var uint64_data []utd64 = []utd64{utd64{a: 0, b: 0, add: 0, sub: 0, mul: 0},
+ utd64{a: 0, b: 1, add: 1, sub: 18446744073709551615, mul: 0, div: 0, mod: 0},
+ utd64{a: 0, b: 4294967296, add: 4294967296, sub: 18446744069414584320, mul: 0, div: 0, mod: 0},
+ utd64{a: 0, b: 18446744073709551615, add: 18446744073709551615, sub: 1, mul: 0, div: 0, mod: 0},
+ utd64{a: 1, b: 0, add: 1, sub: 1, mul: 0},
+ utd64{a: 1, b: 1, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ utd64{a: 1, b: 4294967296, add: 4294967297, sub: 18446744069414584321, mul: 4294967296, div: 0, mod: 1},
+ utd64{a: 1, b: 18446744073709551615, add: 0, sub: 2, mul: 18446744073709551615, div: 0, mod: 1},
+ utd64{a: 4294967296, b: 0, add: 4294967296, sub: 4294967296, mul: 0},
+ utd64{a: 4294967296, b: 1, add: 4294967297, sub: 4294967295, mul: 4294967296, div: 4294967296, mod: 0},
+ utd64{a: 4294967296, b: 4294967296, add: 8589934592, sub: 0, mul: 0, div: 1, mod: 0},
+ utd64{a: 4294967296, b: 18446744073709551615, add: 4294967295, sub: 4294967297, mul: 18446744069414584320, div: 0, mod: 4294967296},
+ utd64{a: 18446744073709551615, b: 0, add: 18446744073709551615, sub: 18446744073709551615, mul: 0},
+ utd64{a: 18446744073709551615, b: 1, add: 0, sub: 18446744073709551614, mul: 18446744073709551615, div: 18446744073709551615, mod: 0},
+ utd64{a: 18446744073709551615, b: 4294967296, add: 4294967295, sub: 18446744069414584319, mul: 18446744069414584320, div: 4294967295, mod: 4294967295},
+ utd64{a: 18446744073709551615, b: 18446744073709551615, add: 18446744073709551614, sub: 0, mul: 1, div: 1, mod: 0},
+}
+var int64_data []itd64 = []itd64{itd64{a: -9223372036854775808, b: -9223372036854775808, add: 0, sub: 0, mul: 0, div: 1, mod: 0},
+ itd64{a: -9223372036854775808, b: -9223372036854775807, add: 1, sub: -1, mul: -9223372036854775808, div: 1, mod: -1},
+ itd64{a: -9223372036854775808, b: -4294967296, add: 9223372032559808512, sub: -9223372032559808512, mul: 0, div: 2147483648, mod: 0},
+ itd64{a: -9223372036854775808, b: -1, add: 9223372036854775807, sub: -9223372036854775807, mul: -9223372036854775808, div: -9223372036854775808, mod: 0},
+ itd64{a: -9223372036854775808, b: 0, add: -9223372036854775808, sub: -9223372036854775808, mul: 0},
+ itd64{a: -9223372036854775808, b: 1, add: -9223372036854775807, sub: 9223372036854775807, mul: -9223372036854775808, div: -9223372036854775808, mod: 0},
+ itd64{a: -9223372036854775808, b: 4294967296, add: -9223372032559808512, sub: 9223372032559808512, mul: 0, div: -2147483648, mod: 0},
+ itd64{a: -9223372036854775808, b: 9223372036854775806, add: -2, sub: 2, mul: 0, div: -1, mod: -2},
+ itd64{a: -9223372036854775808, b: 9223372036854775807, add: -1, sub: 1, mul: -9223372036854775808, div: -1, mod: -1},
+ itd64{a: -9223372036854775807, b: -9223372036854775808, add: 1, sub: 1, mul: -9223372036854775808, div: 0, mod: -9223372036854775807},
+ itd64{a: -9223372036854775807, b: -9223372036854775807, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd64{a: -9223372036854775807, b: -4294967296, add: 9223372032559808513, sub: -9223372032559808511, mul: -4294967296, div: 2147483647, mod: -4294967295},
+ itd64{a: -9223372036854775807, b: -1, add: -9223372036854775808, sub: -9223372036854775806, mul: 9223372036854775807, div: 9223372036854775807, mod: 0},
+ itd64{a: -9223372036854775807, b: 0, add: -9223372036854775807, sub: -9223372036854775807, mul: 0},
+ itd64{a: -9223372036854775807, b: 1, add: -9223372036854775806, sub: -9223372036854775808, mul: -9223372036854775807, div: -9223372036854775807, mod: 0},
+ itd64{a: -9223372036854775807, b: 4294967296, add: -9223372032559808511, sub: 9223372032559808513, mul: 4294967296, div: -2147483647, mod: -4294967295},
+ itd64{a: -9223372036854775807, b: 9223372036854775806, add: -1, sub: 3, mul: 9223372036854775806, div: -1, mod: -1},
+ itd64{a: -9223372036854775807, b: 9223372036854775807, add: 0, sub: 2, mul: -1, div: -1, mod: 0},
+ itd64{a: -4294967296, b: -9223372036854775808, add: 9223372032559808512, sub: 9223372032559808512, mul: 0, div: 0, mod: -4294967296},
+ itd64{a: -4294967296, b: -9223372036854775807, add: 9223372032559808513, sub: 9223372032559808511, mul: -4294967296, div: 0, mod: -4294967296},
+ itd64{a: -4294967296, b: -4294967296, add: -8589934592, sub: 0, mul: 0, div: 1, mod: 0},
+ itd64{a: -4294967296, b: -1, add: -4294967297, sub: -4294967295, mul: 4294967296, div: 4294967296, mod: 0},
+ itd64{a: -4294967296, b: 0, add: -4294967296, sub: -4294967296, mul: 0},
+ itd64{a: -4294967296, b: 1, add: -4294967295, sub: -4294967297, mul: -4294967296, div: -4294967296, mod: 0},
+ itd64{a: -4294967296, b: 4294967296, add: 0, sub: -8589934592, mul: 0, div: -1, mod: 0},
+ itd64{a: -4294967296, b: 9223372036854775806, add: 9223372032559808510, sub: 9223372032559808514, mul: 8589934592, div: 0, mod: -4294967296},
+ itd64{a: -4294967296, b: 9223372036854775807, add: 9223372032559808511, sub: 9223372032559808513, mul: 4294967296, div: 0, mod: -4294967296},
+ itd64{a: -1, b: -9223372036854775808, add: 9223372036854775807, sub: 9223372036854775807, mul: -9223372036854775808, div: 0, mod: -1},
+ itd64{a: -1, b: -9223372036854775807, add: -9223372036854775808, sub: 9223372036854775806, mul: 9223372036854775807, div: 0, mod: -1},
+ itd64{a: -1, b: -4294967296, add: -4294967297, sub: 4294967295, mul: 4294967296, div: 0, mod: -1},
+ itd64{a: -1, b: -1, add: -2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd64{a: -1, b: 0, add: -1, sub: -1, mul: 0},
+ itd64{a: -1, b: 1, add: 0, sub: -2, mul: -1, div: -1, mod: 0},
+ itd64{a: -1, b: 4294967296, add: 4294967295, sub: -4294967297, mul: -4294967296, div: 0, mod: -1},
+ itd64{a: -1, b: 9223372036854775806, add: 9223372036854775805, sub: -9223372036854775807, mul: -9223372036854775806, div: 0, mod: -1},
+ itd64{a: -1, b: 9223372036854775807, add: 9223372036854775806, sub: -9223372036854775808, mul: -9223372036854775807, div: 0, mod: -1},
+ itd64{a: 0, b: -9223372036854775808, add: -9223372036854775808, sub: -9223372036854775808, mul: 0, div: 0, mod: 0},
+ itd64{a: 0, b: -9223372036854775807, add: -9223372036854775807, sub: 9223372036854775807, mul: 0, div: 0, mod: 0},
+ itd64{a: 0, b: -4294967296, add: -4294967296, sub: 4294967296, mul: 0, div: 0, mod: 0},
+ itd64{a: 0, b: -1, add: -1, sub: 1, mul: 0, div: 0, mod: 0},
+ itd64{a: 0, b: 0, add: 0, sub: 0, mul: 0},
+ itd64{a: 0, b: 1, add: 1, sub: -1, mul: 0, div: 0, mod: 0},
+ itd64{a: 0, b: 4294967296, add: 4294967296, sub: -4294967296, mul: 0, div: 0, mod: 0},
+ itd64{a: 0, b: 9223372036854775806, add: 9223372036854775806, sub: -9223372036854775806, mul: 0, div: 0, mod: 0},
+ itd64{a: 0, b: 9223372036854775807, add: 9223372036854775807, sub: -9223372036854775807, mul: 0, div: 0, mod: 0},
+ itd64{a: 1, b: -9223372036854775808, add: -9223372036854775807, sub: -9223372036854775807, mul: -9223372036854775808, div: 0, mod: 1},
+ itd64{a: 1, b: -9223372036854775807, add: -9223372036854775806, sub: -9223372036854775808, mul: -9223372036854775807, div: 0, mod: 1},
+ itd64{a: 1, b: -4294967296, add: -4294967295, sub: 4294967297, mul: -4294967296, div: 0, mod: 1},
+ itd64{a: 1, b: -1, add: 0, sub: 2, mul: -1, div: -1, mod: 0},
+ itd64{a: 1, b: 0, add: 1, sub: 1, mul: 0},
+ itd64{a: 1, b: 1, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd64{a: 1, b: 4294967296, add: 4294967297, sub: -4294967295, mul: 4294967296, div: 0, mod: 1},
+ itd64{a: 1, b: 9223372036854775806, add: 9223372036854775807, sub: -9223372036854775805, mul: 9223372036854775806, div: 0, mod: 1},
+ itd64{a: 1, b: 9223372036854775807, add: -9223372036854775808, sub: -9223372036854775806, mul: 9223372036854775807, div: 0, mod: 1},
+ itd64{a: 4294967296, b: -9223372036854775808, add: -9223372032559808512, sub: -9223372032559808512, mul: 0, div: 0, mod: 4294967296},
+ itd64{a: 4294967296, b: -9223372036854775807, add: -9223372032559808511, sub: -9223372032559808513, mul: 4294967296, div: 0, mod: 4294967296},
+ itd64{a: 4294967296, b: -4294967296, add: 0, sub: 8589934592, mul: 0, div: -1, mod: 0},
+ itd64{a: 4294967296, b: -1, add: 4294967295, sub: 4294967297, mul: -4294967296, div: -4294967296, mod: 0},
+ itd64{a: 4294967296, b: 0, add: 4294967296, sub: 4294967296, mul: 0},
+ itd64{a: 4294967296, b: 1, add: 4294967297, sub: 4294967295, mul: 4294967296, div: 4294967296, mod: 0},
+ itd64{a: 4294967296, b: 4294967296, add: 8589934592, sub: 0, mul: 0, div: 1, mod: 0},
+ itd64{a: 4294967296, b: 9223372036854775806, add: -9223372032559808514, sub: -9223372032559808510, mul: -8589934592, div: 0, mod: 4294967296},
+ itd64{a: 4294967296, b: 9223372036854775807, add: -9223372032559808513, sub: -9223372032559808511, mul: -4294967296, div: 0, mod: 4294967296},
+ itd64{a: 9223372036854775806, b: -9223372036854775808, add: -2, sub: -2, mul: 0, div: 0, mod: 9223372036854775806},
+ itd64{a: 9223372036854775806, b: -9223372036854775807, add: -1, sub: -3, mul: 9223372036854775806, div: 0, mod: 9223372036854775806},
+ itd64{a: 9223372036854775806, b: -4294967296, add: 9223372032559808510, sub: -9223372032559808514, mul: 8589934592, div: -2147483647, mod: 4294967294},
+ itd64{a: 9223372036854775806, b: -1, add: 9223372036854775805, sub: 9223372036854775807, mul: -9223372036854775806, div: -9223372036854775806, mod: 0},
+ itd64{a: 9223372036854775806, b: 0, add: 9223372036854775806, sub: 9223372036854775806, mul: 0},
+ itd64{a: 9223372036854775806, b: 1, add: 9223372036854775807, sub: 9223372036854775805, mul: 9223372036854775806, div: 9223372036854775806, mod: 0},
+ itd64{a: 9223372036854775806, b: 4294967296, add: -9223372032559808514, sub: 9223372032559808510, mul: -8589934592, div: 2147483647, mod: 4294967294},
+ itd64{a: 9223372036854775806, b: 9223372036854775806, add: -4, sub: 0, mul: 4, div: 1, mod: 0},
+ itd64{a: 9223372036854775806, b: 9223372036854775807, add: -3, sub: -1, mul: -9223372036854775806, div: 0, mod: 9223372036854775806},
+ itd64{a: 9223372036854775807, b: -9223372036854775808, add: -1, sub: -1, mul: -9223372036854775808, div: 0, mod: 9223372036854775807},
+ itd64{a: 9223372036854775807, b: -9223372036854775807, add: 0, sub: -2, mul: -1, div: -1, mod: 0},
+ itd64{a: 9223372036854775807, b: -4294967296, add: 9223372032559808511, sub: -9223372032559808513, mul: 4294967296, div: -2147483647, mod: 4294967295},
+ itd64{a: 9223372036854775807, b: -1, add: 9223372036854775806, sub: -9223372036854775808, mul: -9223372036854775807, div: -9223372036854775807, mod: 0},
+ itd64{a: 9223372036854775807, b: 0, add: 9223372036854775807, sub: 9223372036854775807, mul: 0},
+ itd64{a: 9223372036854775807, b: 1, add: -9223372036854775808, sub: 9223372036854775806, mul: 9223372036854775807, div: 9223372036854775807, mod: 0},
+ itd64{a: 9223372036854775807, b: 4294967296, add: -9223372032559808513, sub: 9223372032559808511, mul: -4294967296, div: 2147483647, mod: 4294967295},
+ itd64{a: 9223372036854775807, b: 9223372036854775806, add: -3, sub: 1, mul: -9223372036854775806, div: 1, mod: 1},
+ itd64{a: 9223372036854775807, b: 9223372036854775807, add: -2, sub: 0, mul: 1, div: 1, mod: 0},
+}
+var uint32_data []utd32 = []utd32{utd32{a: 0, b: 0, add: 0, sub: 0, mul: 0},
+ utd32{a: 0, b: 1, add: 1, sub: 4294967295, mul: 0, div: 0, mod: 0},
+ utd32{a: 0, b: 4294967295, add: 4294967295, sub: 1, mul: 0, div: 0, mod: 0},
+ utd32{a: 1, b: 0, add: 1, sub: 1, mul: 0},
+ utd32{a: 1, b: 1, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ utd32{a: 1, b: 4294967295, add: 0, sub: 2, mul: 4294967295, div: 0, mod: 1},
+ utd32{a: 4294967295, b: 0, add: 4294967295, sub: 4294967295, mul: 0},
+ utd32{a: 4294967295, b: 1, add: 0, sub: 4294967294, mul: 4294967295, div: 4294967295, mod: 0},
+ utd32{a: 4294967295, b: 4294967295, add: 4294967294, sub: 0, mul: 1, div: 1, mod: 0},
+}
+var int32_data []itd32 = []itd32{itd32{a: -2147483648, b: -2147483648, add: 0, sub: 0, mul: 0, div: 1, mod: 0},
+ itd32{a: -2147483648, b: -2147483647, add: 1, sub: -1, mul: -2147483648, div: 1, mod: -1},
+ itd32{a: -2147483648, b: -1, add: 2147483647, sub: -2147483647, mul: -2147483648, div: -2147483648, mod: 0},
+ itd32{a: -2147483648, b: 0, add: -2147483648, sub: -2147483648, mul: 0},
+ itd32{a: -2147483648, b: 1, add: -2147483647, sub: 2147483647, mul: -2147483648, div: -2147483648, mod: 0},
+ itd32{a: -2147483648, b: 2147483647, add: -1, sub: 1, mul: -2147483648, div: -1, mod: -1},
+ itd32{a: -2147483647, b: -2147483648, add: 1, sub: 1, mul: -2147483648, div: 0, mod: -2147483647},
+ itd32{a: -2147483647, b: -2147483647, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd32{a: -2147483647, b: -1, add: -2147483648, sub: -2147483646, mul: 2147483647, div: 2147483647, mod: 0},
+ itd32{a: -2147483647, b: 0, add: -2147483647, sub: -2147483647, mul: 0},
+ itd32{a: -2147483647, b: 1, add: -2147483646, sub: -2147483648, mul: -2147483647, div: -2147483647, mod: 0},
+ itd32{a: -2147483647, b: 2147483647, add: 0, sub: 2, mul: -1, div: -1, mod: 0},
+ itd32{a: -1, b: -2147483648, add: 2147483647, sub: 2147483647, mul: -2147483648, div: 0, mod: -1},
+ itd32{a: -1, b: -2147483647, add: -2147483648, sub: 2147483646, mul: 2147483647, div: 0, mod: -1},
+ itd32{a: -1, b: -1, add: -2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd32{a: -1, b: 0, add: -1, sub: -1, mul: 0},
+ itd32{a: -1, b: 1, add: 0, sub: -2, mul: -1, div: -1, mod: 0},
+ itd32{a: -1, b: 2147483647, add: 2147483646, sub: -2147483648, mul: -2147483647, div: 0, mod: -1},
+ itd32{a: 0, b: -2147483648, add: -2147483648, sub: -2147483648, mul: 0, div: 0, mod: 0},
+ itd32{a: 0, b: -2147483647, add: -2147483647, sub: 2147483647, mul: 0, div: 0, mod: 0},
+ itd32{a: 0, b: -1, add: -1, sub: 1, mul: 0, div: 0, mod: 0},
+ itd32{a: 0, b: 0, add: 0, sub: 0, mul: 0},
+ itd32{a: 0, b: 1, add: 1, sub: -1, mul: 0, div: 0, mod: 0},
+ itd32{a: 0, b: 2147483647, add: 2147483647, sub: -2147483647, mul: 0, div: 0, mod: 0},
+ itd32{a: 1, b: -2147483648, add: -2147483647, sub: -2147483647, mul: -2147483648, div: 0, mod: 1},
+ itd32{a: 1, b: -2147483647, add: -2147483646, sub: -2147483648, mul: -2147483647, div: 0, mod: 1},
+ itd32{a: 1, b: -1, add: 0, sub: 2, mul: -1, div: -1, mod: 0},
+ itd32{a: 1, b: 0, add: 1, sub: 1, mul: 0},
+ itd32{a: 1, b: 1, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd32{a: 1, b: 2147483647, add: -2147483648, sub: -2147483646, mul: 2147483647, div: 0, mod: 1},
+ itd32{a: 2147483647, b: -2147483648, add: -1, sub: -1, mul: -2147483648, div: 0, mod: 2147483647},
+ itd32{a: 2147483647, b: -2147483647, add: 0, sub: -2, mul: -1, div: -1, mod: 0},
+ itd32{a: 2147483647, b: -1, add: 2147483646, sub: -2147483648, mul: -2147483647, div: -2147483647, mod: 0},
+ itd32{a: 2147483647, b: 0, add: 2147483647, sub: 2147483647, mul: 0},
+ itd32{a: 2147483647, b: 1, add: -2147483648, sub: 2147483646, mul: 2147483647, div: 2147483647, mod: 0},
+ itd32{a: 2147483647, b: 2147483647, add: -2, sub: 0, mul: 1, div: 1, mod: 0},
+}
+var uint16_data []utd16 = []utd16{utd16{a: 0, b: 0, add: 0, sub: 0, mul: 0},
+ utd16{a: 0, b: 1, add: 1, sub: 65535, mul: 0, div: 0, mod: 0},
+ utd16{a: 0, b: 65535, add: 65535, sub: 1, mul: 0, div: 0, mod: 0},
+ utd16{a: 1, b: 0, add: 1, sub: 1, mul: 0},
+ utd16{a: 1, b: 1, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ utd16{a: 1, b: 65535, add: 0, sub: 2, mul: 65535, div: 0, mod: 1},
+ utd16{a: 65535, b: 0, add: 65535, sub: 65535, mul: 0},
+ utd16{a: 65535, b: 1, add: 0, sub: 65534, mul: 65535, div: 65535, mod: 0},
+ utd16{a: 65535, b: 65535, add: 65534, sub: 0, mul: 1, div: 1, mod: 0},
+}
+var int16_data []itd16 = []itd16{itd16{a: -32768, b: -32768, add: 0, sub: 0, mul: 0, div: 1, mod: 0},
+ itd16{a: -32768, b: -32767, add: 1, sub: -1, mul: -32768, div: 1, mod: -1},
+ itd16{a: -32768, b: -1, add: 32767, sub: -32767, mul: -32768, div: -32768, mod: 0},
+ itd16{a: -32768, b: 0, add: -32768, sub: -32768, mul: 0},
+ itd16{a: -32768, b: 1, add: -32767, sub: 32767, mul: -32768, div: -32768, mod: 0},
+ itd16{a: -32768, b: 32766, add: -2, sub: 2, mul: 0, div: -1, mod: -2},
+ itd16{a: -32768, b: 32767, add: -1, sub: 1, mul: -32768, div: -1, mod: -1},
+ itd16{a: -32767, b: -32768, add: 1, sub: 1, mul: -32768, div: 0, mod: -32767},
+ itd16{a: -32767, b: -32767, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd16{a: -32767, b: -1, add: -32768, sub: -32766, mul: 32767, div: 32767, mod: 0},
+ itd16{a: -32767, b: 0, add: -32767, sub: -32767, mul: 0},
+ itd16{a: -32767, b: 1, add: -32766, sub: -32768, mul: -32767, div: -32767, mod: 0},
+ itd16{a: -32767, b: 32766, add: -1, sub: 3, mul: 32766, div: -1, mod: -1},
+ itd16{a: -32767, b: 32767, add: 0, sub: 2, mul: -1, div: -1, mod: 0},
+ itd16{a: -1, b: -32768, add: 32767, sub: 32767, mul: -32768, div: 0, mod: -1},
+ itd16{a: -1, b: -32767, add: -32768, sub: 32766, mul: 32767, div: 0, mod: -1},
+ itd16{a: -1, b: -1, add: -2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd16{a: -1, b: 0, add: -1, sub: -1, mul: 0},
+ itd16{a: -1, b: 1, add: 0, sub: -2, mul: -1, div: -1, mod: 0},
+ itd16{a: -1, b: 32766, add: 32765, sub: -32767, mul: -32766, div: 0, mod: -1},
+ itd16{a: -1, b: 32767, add: 32766, sub: -32768, mul: -32767, div: 0, mod: -1},
+ itd16{a: 0, b: -32768, add: -32768, sub: -32768, mul: 0, div: 0, mod: 0},
+ itd16{a: 0, b: -32767, add: -32767, sub: 32767, mul: 0, div: 0, mod: 0},
+ itd16{a: 0, b: -1, add: -1, sub: 1, mul: 0, div: 0, mod: 0},
+ itd16{a: 0, b: 0, add: 0, sub: 0, mul: 0},
+ itd16{a: 0, b: 1, add: 1, sub: -1, mul: 0, div: 0, mod: 0},
+ itd16{a: 0, b: 32766, add: 32766, sub: -32766, mul: 0, div: 0, mod: 0},
+ itd16{a: 0, b: 32767, add: 32767, sub: -32767, mul: 0, div: 0, mod: 0},
+ itd16{a: 1, b: -32768, add: -32767, sub: -32767, mul: -32768, div: 0, mod: 1},
+ itd16{a: 1, b: -32767, add: -32766, sub: -32768, mul: -32767, div: 0, mod: 1},
+ itd16{a: 1, b: -1, add: 0, sub: 2, mul: -1, div: -1, mod: 0},
+ itd16{a: 1, b: 0, add: 1, sub: 1, mul: 0},
+ itd16{a: 1, b: 1, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd16{a: 1, b: 32766, add: 32767, sub: -32765, mul: 32766, div: 0, mod: 1},
+ itd16{a: 1, b: 32767, add: -32768, sub: -32766, mul: 32767, div: 0, mod: 1},
+ itd16{a: 32766, b: -32768, add: -2, sub: -2, mul: 0, div: 0, mod: 32766},
+ itd16{a: 32766, b: -32767, add: -1, sub: -3, mul: 32766, div: 0, mod: 32766},
+ itd16{a: 32766, b: -1, add: 32765, sub: 32767, mul: -32766, div: -32766, mod: 0},
+ itd16{a: 32766, b: 0, add: 32766, sub: 32766, mul: 0},
+ itd16{a: 32766, b: 1, add: 32767, sub: 32765, mul: 32766, div: 32766, mod: 0},
+ itd16{a: 32766, b: 32766, add: -4, sub: 0, mul: 4, div: 1, mod: 0},
+ itd16{a: 32766, b: 32767, add: -3, sub: -1, mul: -32766, div: 0, mod: 32766},
+ itd16{a: 32767, b: -32768, add: -1, sub: -1, mul: -32768, div: 0, mod: 32767},
+ itd16{a: 32767, b: -32767, add: 0, sub: -2, mul: -1, div: -1, mod: 0},
+ itd16{a: 32767, b: -1, add: 32766, sub: -32768, mul: -32767, div: -32767, mod: 0},
+ itd16{a: 32767, b: 0, add: 32767, sub: 32767, mul: 0},
+ itd16{a: 32767, b: 1, add: -32768, sub: 32766, mul: 32767, div: 32767, mod: 0},
+ itd16{a: 32767, b: 32766, add: -3, sub: 1, mul: -32766, div: 1, mod: 1},
+ itd16{a: 32767, b: 32767, add: -2, sub: 0, mul: 1, div: 1, mod: 0},
+}
+var uint8_data []utd8 = []utd8{utd8{a: 0, b: 0, add: 0, sub: 0, mul: 0},
+ utd8{a: 0, b: 1, add: 1, sub: 255, mul: 0, div: 0, mod: 0},
+ utd8{a: 0, b: 255, add: 255, sub: 1, mul: 0, div: 0, mod: 0},
+ utd8{a: 1, b: 0, add: 1, sub: 1, mul: 0},
+ utd8{a: 1, b: 1, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ utd8{a: 1, b: 255, add: 0, sub: 2, mul: 255, div: 0, mod: 1},
+ utd8{a: 255, b: 0, add: 255, sub: 255, mul: 0},
+ utd8{a: 255, b: 1, add: 0, sub: 254, mul: 255, div: 255, mod: 0},
+ utd8{a: 255, b: 255, add: 254, sub: 0, mul: 1, div: 1, mod: 0},
+}
+var int8_data []itd8 = []itd8{itd8{a: -128, b: -128, add: 0, sub: 0, mul: 0, div: 1, mod: 0},
+ itd8{a: -128, b: -127, add: 1, sub: -1, mul: -128, div: 1, mod: -1},
+ itd8{a: -128, b: -1, add: 127, sub: -127, mul: -128, div: -128, mod: 0},
+ itd8{a: -128, b: 0, add: -128, sub: -128, mul: 0},
+ itd8{a: -128, b: 1, add: -127, sub: 127, mul: -128, div: -128, mod: 0},
+ itd8{a: -128, b: 126, add: -2, sub: 2, mul: 0, div: -1, mod: -2},
+ itd8{a: -128, b: 127, add: -1, sub: 1, mul: -128, div: -1, mod: -1},
+ itd8{a: -127, b: -128, add: 1, sub: 1, mul: -128, div: 0, mod: -127},
+ itd8{a: -127, b: -127, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd8{a: -127, b: -1, add: -128, sub: -126, mul: 127, div: 127, mod: 0},
+ itd8{a: -127, b: 0, add: -127, sub: -127, mul: 0},
+ itd8{a: -127, b: 1, add: -126, sub: -128, mul: -127, div: -127, mod: 0},
+ itd8{a: -127, b: 126, add: -1, sub: 3, mul: 126, div: -1, mod: -1},
+ itd8{a: -127, b: 127, add: 0, sub: 2, mul: -1, div: -1, mod: 0},
+ itd8{a: -1, b: -128, add: 127, sub: 127, mul: -128, div: 0, mod: -1},
+ itd8{a: -1, b: -127, add: -128, sub: 126, mul: 127, div: 0, mod: -1},
+ itd8{a: -1, b: -1, add: -2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd8{a: -1, b: 0, add: -1, sub: -1, mul: 0},
+ itd8{a: -1, b: 1, add: 0, sub: -2, mul: -1, div: -1, mod: 0},
+ itd8{a: -1, b: 126, add: 125, sub: -127, mul: -126, div: 0, mod: -1},
+ itd8{a: -1, b: 127, add: 126, sub: -128, mul: -127, div: 0, mod: -1},
+ itd8{a: 0, b: -128, add: -128, sub: -128, mul: 0, div: 0, mod: 0},
+ itd8{a: 0, b: -127, add: -127, sub: 127, mul: 0, div: 0, mod: 0},
+ itd8{a: 0, b: -1, add: -1, sub: 1, mul: 0, div: 0, mod: 0},
+ itd8{a: 0, b: 0, add: 0, sub: 0, mul: 0},
+ itd8{a: 0, b: 1, add: 1, sub: -1, mul: 0, div: 0, mod: 0},
+ itd8{a: 0, b: 126, add: 126, sub: -126, mul: 0, div: 0, mod: 0},
+ itd8{a: 0, b: 127, add: 127, sub: -127, mul: 0, div: 0, mod: 0},
+ itd8{a: 1, b: -128, add: -127, sub: -127, mul: -128, div: 0, mod: 1},
+ itd8{a: 1, b: -127, add: -126, sub: -128, mul: -127, div: 0, mod: 1},
+ itd8{a: 1, b: -1, add: 0, sub: 2, mul: -1, div: -1, mod: 0},
+ itd8{a: 1, b: 0, add: 1, sub: 1, mul: 0},
+ itd8{a: 1, b: 1, add: 2, sub: 0, mul: 1, div: 1, mod: 0},
+ itd8{a: 1, b: 126, add: 127, sub: -125, mul: 126, div: 0, mod: 1},
+ itd8{a: 1, b: 127, add: -128, sub: -126, mul: 127, div: 0, mod: 1},
+ itd8{a: 126, b: -128, add: -2, sub: -2, mul: 0, div: 0, mod: 126},
+ itd8{a: 126, b: -127, add: -1, sub: -3, mul: 126, div: 0, mod: 126},
+ itd8{a: 126, b: -1, add: 125, sub: 127, mul: -126, div: -126, mod: 0},
+ itd8{a: 126, b: 0, add: 126, sub: 126, mul: 0},
+ itd8{a: 126, b: 1, add: 127, sub: 125, mul: 126, div: 126, mod: 0},
+ itd8{a: 126, b: 126, add: -4, sub: 0, mul: 4, div: 1, mod: 0},
+ itd8{a: 126, b: 127, add: -3, sub: -1, mul: -126, div: 0, mod: 126},
+ itd8{a: 127, b: -128, add: -1, sub: -1, mul: -128, div: 0, mod: 127},
+ itd8{a: 127, b: -127, add: 0, sub: -2, mul: -1, div: -1, mod: 0},
+ itd8{a: 127, b: -1, add: 126, sub: -128, mul: -127, div: -127, mod: 0},
+ itd8{a: 127, b: 0, add: 127, sub: 127, mul: 0},
+ itd8{a: 127, b: 1, add: -128, sub: 126, mul: 127, div: 127, mod: 0},
+ itd8{a: 127, b: 126, add: -3, sub: 1, mul: -126, div: 1, mod: 1},
+ itd8{a: 127, b: 127, add: -2, sub: 0, mul: 1, div: 1, mod: 0},
+}
+var failed bool
+
+func main() {
+
+ for _, v := range uint64_data {
+ if got := add_uint64_ssa(v.a, v.b); got != v.add {
+ fmt.Printf("add_uint64 %d+%d = %d, wanted %d\n", v.a, v.b, got, v.add)
+ failed = true
+ }
+ if got := sub_uint64_ssa(v.a, v.b); got != v.sub {
+ fmt.Printf("sub_uint64 %d-%d = %d, wanted %d\n", v.a, v.b, got, v.sub)
+ failed = true
+ }
+		if v.b != 0 {
+			if got := div_uint64_ssa(v.a, v.b); got != v.div {
+				fmt.Printf("div_uint64 %d/%d = %d, wanted %d\n", v.a, v.b, got, v.div)
+				failed = true
+			}
+			if got := mod_uint64_ssa(v.a, v.b); got != v.mod {
+				fmt.Printf("mod_uint64 %d%%%d = %d, wanted %d\n", v.a, v.b, got, v.mod)
+				failed = true
+			}
+		}
+ if got := mul_uint64_ssa(v.a, v.b); got != v.mul {
+ fmt.Printf("mul_uint64 %d*%d = %d, wanted %d\n", v.a, v.b, got, v.mul)
+ failed = true
+ }
+ }
+ for _, v := range int64_data {
+ if got := add_int64_ssa(v.a, v.b); got != v.add {
+ fmt.Printf("add_int64 %d+%d = %d, wanted %d\n", v.a, v.b, got, v.add)
+ failed = true
+ }
+ if got := sub_int64_ssa(v.a, v.b); got != v.sub {
+ fmt.Printf("sub_int64 %d-%d = %d, wanted %d\n", v.a, v.b, got, v.sub)
+ failed = true
+ }
+		if v.b != 0 {
+			if got := div_int64_ssa(v.a, v.b); got != v.div {
+				fmt.Printf("div_int64 %d/%d = %d, wanted %d\n", v.a, v.b, got, v.div)
+				failed = true
+			}
+			if got := mod_int64_ssa(v.a, v.b); got != v.mod {
+				fmt.Printf("mod_int64 %d%%%d = %d, wanted %d\n", v.a, v.b, got, v.mod)
+				failed = true
+			}
+		}
+ if got := mul_int64_ssa(v.a, v.b); got != v.mul {
+ fmt.Printf("mul_int64 %d*%d = %d, wanted %d\n", v.a, v.b, got, v.mul)
+ failed = true
+ }
+ }
+ for _, v := range uint32_data {
+ if got := add_uint32_ssa(v.a, v.b); got != v.add {
+ fmt.Printf("add_uint32 %d+%d = %d, wanted %d\n", v.a, v.b, got, v.add)
+ failed = true
+ }
+ if got := sub_uint32_ssa(v.a, v.b); got != v.sub {
+ fmt.Printf("sub_uint32 %d-%d = %d, wanted %d\n", v.a, v.b, got, v.sub)
+ failed = true
+ }
+		if v.b != 0 {
+			if got := div_uint32_ssa(v.a, v.b); got != v.div {
+				fmt.Printf("div_uint32 %d/%d = %d, wanted %d\n", v.a, v.b, got, v.div)
+				failed = true
+			}
+			if got := mod_uint32_ssa(v.a, v.b); got != v.mod {
+				fmt.Printf("mod_uint32 %d%%%d = %d, wanted %d\n", v.a, v.b, got, v.mod)
+				failed = true
+			}
+		}
+ if got := mul_uint32_ssa(v.a, v.b); got != v.mul {
+ fmt.Printf("mul_uint32 %d*%d = %d, wanted %d\n", v.a, v.b, got, v.mul)
+ failed = true
+ }
+ }
+ for _, v := range int32_data {
+ if got := add_int32_ssa(v.a, v.b); got != v.add {
+ fmt.Printf("add_int32 %d+%d = %d, wanted %d\n", v.a, v.b, got, v.add)
+ failed = true
+ }
+ if got := sub_int32_ssa(v.a, v.b); got != v.sub {
+ fmt.Printf("sub_int32 %d-%d = %d, wanted %d\n", v.a, v.b, got, v.sub)
+ failed = true
+ }
+		if v.b != 0 {
+			if got := div_int32_ssa(v.a, v.b); got != v.div {
+				fmt.Printf("div_int32 %d/%d = %d, wanted %d\n", v.a, v.b, got, v.div)
+				failed = true
+			}
+			if got := mod_int32_ssa(v.a, v.b); got != v.mod {
+				fmt.Printf("mod_int32 %d%%%d = %d, wanted %d\n", v.a, v.b, got, v.mod)
+				failed = true
+			}
+		}
+ if got := mul_int32_ssa(v.a, v.b); got != v.mul {
+ fmt.Printf("mul_int32 %d*%d = %d, wanted %d\n", v.a, v.b, got, v.mul)
+ failed = true
+ }
+ }
+ for _, v := range uint16_data {
+ if got := add_uint16_ssa(v.a, v.b); got != v.add {
+ fmt.Printf("add_uint16 %d+%d = %d, wanted %d\n", v.a, v.b, got, v.add)
+ failed = true
+ }
+ if got := sub_uint16_ssa(v.a, v.b); got != v.sub {
+ fmt.Printf("sub_uint16 %d-%d = %d, wanted %d\n", v.a, v.b, got, v.sub)
+ failed = true
+ }
+		if v.b != 0 {
+			if got := div_uint16_ssa(v.a, v.b); got != v.div {
+				fmt.Printf("div_uint16 %d/%d = %d, wanted %d\n", v.a, v.b, got, v.div)
+				failed = true
+			}
+			if got := mod_uint16_ssa(v.a, v.b); got != v.mod {
+				fmt.Printf("mod_uint16 %d%%%d = %d, wanted %d\n", v.a, v.b, got, v.mod)
+				failed = true
+			}
+		}
+ if got := mul_uint16_ssa(v.a, v.b); got != v.mul {
+ fmt.Printf("mul_uint16 %d*%d = %d, wanted %d\n", v.a, v.b, got, v.mul)
+ failed = true
+ }
+ }
+ for _, v := range int16_data {
+ if got := add_int16_ssa(v.a, v.b); got != v.add {
+ fmt.Printf("add_int16 %d+%d = %d, wanted %d\n", v.a, v.b, got, v.add)
+ failed = true
+ }
+ if got := sub_int16_ssa(v.a, v.b); got != v.sub {
+ fmt.Printf("sub_int16 %d-%d = %d, wanted %d\n", v.a, v.b, got, v.sub)
+ failed = true
+ }
+		if v.b != 0 {
+			if got := div_int16_ssa(v.a, v.b); got != v.div {
+				fmt.Printf("div_int16 %d/%d = %d, wanted %d\n", v.a, v.b, got, v.div)
+				failed = true
+			}
+			if got := mod_int16_ssa(v.a, v.b); got != v.mod {
+				fmt.Printf("mod_int16 %d%%%d = %d, wanted %d\n", v.a, v.b, got, v.mod)
+				failed = true
+			}
+		}
+ if got := mul_int16_ssa(v.a, v.b); got != v.mul {
+ fmt.Printf("mul_int16 %d*%d = %d, wanted %d\n", v.a, v.b, got, v.mul)
+ failed = true
+ }
+ }
+ for _, v := range uint8_data {
+ if got := add_uint8_ssa(v.a, v.b); got != v.add {
+ fmt.Printf("add_uint8 %d+%d = %d, wanted %d\n", v.a, v.b, got, v.add)
+ failed = true
+ }
+ if got := sub_uint8_ssa(v.a, v.b); got != v.sub {
+ fmt.Printf("sub_uint8 %d-%d = %d, wanted %d\n", v.a, v.b, got, v.sub)
+ failed = true
+ }
+		if v.b != 0 {
+			if got := div_uint8_ssa(v.a, v.b); got != v.div {
+				fmt.Printf("div_uint8 %d/%d = %d, wanted %d\n", v.a, v.b, got, v.div)
+				failed = true
+			}
+			if got := mod_uint8_ssa(v.a, v.b); got != v.mod {
+				fmt.Printf("mod_uint8 %d%%%d = %d, wanted %d\n", v.a, v.b, got, v.mod)
+				failed = true
+			}
+		}
+ if got := mul_uint8_ssa(v.a, v.b); got != v.mul {
+ fmt.Printf("mul_uint8 %d*%d = %d, wanted %d\n", v.a, v.b, got, v.mul)
+ failed = true
+ }
+ }
+ for _, v := range int8_data {
+ if got := add_int8_ssa(v.a, v.b); got != v.add {
+ fmt.Printf("add_int8 %d+%d = %d, wanted %d\n", v.a, v.b, got, v.add)
+ failed = true
+ }
+ if got := sub_int8_ssa(v.a, v.b); got != v.sub {
+ fmt.Printf("sub_int8 %d-%d = %d, wanted %d\n", v.a, v.b, got, v.sub)
+ failed = true
+ }
+		if v.b != 0 {
+			if got := div_int8_ssa(v.a, v.b); got != v.div {
+				fmt.Printf("div_int8 %d/%d = %d, wanted %d\n", v.a, v.b, got, v.div)
+				failed = true
+			}
+			if got := mod_int8_ssa(v.a, v.b); got != v.mod {
+				fmt.Printf("mod_int8 %d%%%d = %d, wanted %d\n", v.a, v.b, got, v.mod)
+				failed = true
+			}
+		}
+ if got := mul_int8_ssa(v.a, v.b); got != v.mul {
+ fmt.Printf("mul_int8 %d*%d = %d, wanted %d\n", v.a, v.b, got, v.mul)
+ failed = true
+ }
+ }
+ if failed {
+ panic("tests failed")
+ }
+}
--- /dev/null
+package main
+
+import "fmt"
+
+//go:noinline
+func add_uint64_0_ssa(a uint64) uint64 {
+ return a + 0
+}
+
+//go:noinline
+func add_0_uint64_ssa(a uint64) uint64 {
+ return 0 + a
+}
+
+//go:noinline
+func add_uint64_1_ssa(a uint64) uint64 {
+ return a + 1
+}
+
+//go:noinline
+func add_1_uint64_ssa(a uint64) uint64 {
+ return 1 + a
+}
+
+//go:noinline
+func add_uint64_4294967296_ssa(a uint64) uint64 {
+ return a + 4294967296
+}
+
+//go:noinline
+func add_4294967296_uint64_ssa(a uint64) uint64 {
+ return 4294967296 + a
+}
+
+//go:noinline
+func add_uint64_18446744073709551615_ssa(a uint64) uint64 {
+ return a + 18446744073709551615
+}
+
+//go:noinline
+func add_18446744073709551615_uint64_ssa(a uint64) uint64 {
+ return 18446744073709551615 + a
+}
+
+//go:noinline
+func sub_uint64_0_ssa(a uint64) uint64 {
+ return a - 0
+}
+
+//go:noinline
+func sub_0_uint64_ssa(a uint64) uint64 {
+ return 0 - a
+}
+
+//go:noinline
+func sub_uint64_1_ssa(a uint64) uint64 {
+ return a - 1
+}
+
+//go:noinline
+func sub_1_uint64_ssa(a uint64) uint64 {
+ return 1 - a
+}
+
+//go:noinline
+func sub_uint64_4294967296_ssa(a uint64) uint64 {
+ return a - 4294967296
+}
+
+//go:noinline
+func sub_4294967296_uint64_ssa(a uint64) uint64 {
+ return 4294967296 - a
+}
+
+//go:noinline
+func sub_uint64_18446744073709551615_ssa(a uint64) uint64 {
+ return a - 18446744073709551615
+}
+
+//go:noinline
+func sub_18446744073709551615_uint64_ssa(a uint64) uint64 {
+ return 18446744073709551615 - a
+}
+
+//go:noinline
+func div_0_uint64_ssa(a uint64) uint64 {
+ return 0 / a
+}
+
+//go:noinline
+func div_uint64_1_ssa(a uint64) uint64 {
+ return a / 1
+}
+
+//go:noinline
+func div_1_uint64_ssa(a uint64) uint64 {
+ return 1 / a
+}
+
+//go:noinline
+func div_uint64_4294967296_ssa(a uint64) uint64 {
+ return a / 4294967296
+}
+
+//go:noinline
+func div_4294967296_uint64_ssa(a uint64) uint64 {
+ return 4294967296 / a
+}
+
+//go:noinline
+func div_uint64_18446744073709551615_ssa(a uint64) uint64 {
+ return a / 18446744073709551615
+}
+
+//go:noinline
+func div_18446744073709551615_uint64_ssa(a uint64) uint64 {
+ return 18446744073709551615 / a
+}
+
+//go:noinline
+func mul_uint64_0_ssa(a uint64) uint64 {
+ return a * 0
+}
+
+//go:noinline
+func mul_0_uint64_ssa(a uint64) uint64 {
+ return 0 * a
+}
+
+//go:noinline
+func mul_uint64_1_ssa(a uint64) uint64 {
+ return a * 1
+}
+
+//go:noinline
+func mul_1_uint64_ssa(a uint64) uint64 {
+ return 1 * a
+}
+
+//go:noinline
+func mul_uint64_4294967296_ssa(a uint64) uint64 {
+ return a * 4294967296
+}
+
+//go:noinline
+func mul_4294967296_uint64_ssa(a uint64) uint64 {
+ return 4294967296 * a
+}
+
+//go:noinline
+func mul_uint64_18446744073709551615_ssa(a uint64) uint64 {
+ return a * 18446744073709551615
+}
+
+//go:noinline
+func mul_18446744073709551615_uint64_ssa(a uint64) uint64 {
+ return 18446744073709551615 * a
+}
+
+//go:noinline
+func lsh_uint64_0_ssa(a uint64) uint64 {
+ return a << 0
+}
+
+//go:noinline
+func lsh_0_uint64_ssa(a uint64) uint64 {
+ return 0 << a
+}
+
+//go:noinline
+func lsh_uint64_1_ssa(a uint64) uint64 {
+ return a << 1
+}
+
+//go:noinline
+func lsh_1_uint64_ssa(a uint64) uint64 {
+ return 1 << a
+}
+
+//go:noinline
+func lsh_uint64_4294967296_ssa(a uint64) uint64 {
+ return a << 4294967296
+}
+
+//go:noinline
+func lsh_4294967296_uint64_ssa(a uint64) uint64 {
+ return 4294967296 << a
+}
+
+//go:noinline
+func lsh_uint64_18446744073709551615_ssa(a uint64) uint64 {
+ return a << 18446744073709551615
+}
+
+//go:noinline
+func lsh_18446744073709551615_uint64_ssa(a uint64) uint64 {
+ return 18446744073709551615 << a
+}
+
+//go:noinline
+func rsh_uint64_0_ssa(a uint64) uint64 {
+ return a >> 0
+}
+
+//go:noinline
+func rsh_0_uint64_ssa(a uint64) uint64 {
+ return 0 >> a
+}
+
+//go:noinline
+func rsh_uint64_1_ssa(a uint64) uint64 {
+ return a >> 1
+}
+
+//go:noinline
+func rsh_1_uint64_ssa(a uint64) uint64 {
+ return 1 >> a
+}
+
+//go:noinline
+func rsh_uint64_4294967296_ssa(a uint64) uint64 {
+ return a >> 4294967296
+}
+
+//go:noinline
+func rsh_4294967296_uint64_ssa(a uint64) uint64 {
+ return 4294967296 >> a
+}
+
+//go:noinline
+func rsh_uint64_18446744073709551615_ssa(a uint64) uint64 {
+ return a >> 18446744073709551615
+}
+
+//go:noinline
+func rsh_18446744073709551615_uint64_ssa(a uint64) uint64 {
+ return 18446744073709551615 >> a
+}
+
+//go:noinline
+func add_int64_Neg9223372036854775808_ssa(a int64) int64 {
+ return a + -9223372036854775808
+}
+
+//go:noinline
+func add_Neg9223372036854775808_int64_ssa(a int64) int64 {
+ return -9223372036854775808 + a
+}
+
+//go:noinline
+func add_int64_Neg9223372036854775807_ssa(a int64) int64 {
+ return a + -9223372036854775807
+}
+
+//go:noinline
+func add_Neg9223372036854775807_int64_ssa(a int64) int64 {
+ return -9223372036854775807 + a
+}
+
+//go:noinline
+func add_int64_Neg4294967296_ssa(a int64) int64 {
+ return a + -4294967296
+}
+
+//go:noinline
+func add_Neg4294967296_int64_ssa(a int64) int64 {
+ return -4294967296 + a
+}
+
+//go:noinline
+func add_int64_Neg1_ssa(a int64) int64 {
+ return a + -1
+}
+
+//go:noinline
+func add_Neg1_int64_ssa(a int64) int64 {
+ return -1 + a
+}
+
+//go:noinline
+func add_int64_0_ssa(a int64) int64 {
+ return a + 0
+}
+
+//go:noinline
+func add_0_int64_ssa(a int64) int64 {
+ return 0 + a
+}
+
+//go:noinline
+func add_int64_1_ssa(a int64) int64 {
+ return a + 1
+}
+
+//go:noinline
+func add_1_int64_ssa(a int64) int64 {
+ return 1 + a
+}
+
+//go:noinline
+func add_int64_4294967296_ssa(a int64) int64 {
+ return a + 4294967296
+}
+
+//go:noinline
+func add_4294967296_int64_ssa(a int64) int64 {
+ return 4294967296 + a
+}
+
+//go:noinline
+func add_int64_9223372036854775806_ssa(a int64) int64 {
+ return a + 9223372036854775806
+}
+
+//go:noinline
+func add_9223372036854775806_int64_ssa(a int64) int64 {
+ return 9223372036854775806 + a
+}
+
+//go:noinline
+func add_int64_9223372036854775807_ssa(a int64) int64 {
+ return a + 9223372036854775807
+}
+
+//go:noinline
+func add_9223372036854775807_int64_ssa(a int64) int64 {
+ return 9223372036854775807 + a
+}
+
+//go:noinline
+func sub_int64_Neg9223372036854775808_ssa(a int64) int64 {
+ return a - -9223372036854775808
+}
+
+//go:noinline
+func sub_Neg9223372036854775808_int64_ssa(a int64) int64 {
+ return -9223372036854775808 - a
+}
+
+//go:noinline
+func sub_int64_Neg9223372036854775807_ssa(a int64) int64 {
+ return a - -9223372036854775807
+}
+
+//go:noinline
+func sub_Neg9223372036854775807_int64_ssa(a int64) int64 {
+ return -9223372036854775807 - a
+}
+
+//go:noinline
+func sub_int64_Neg4294967296_ssa(a int64) int64 {
+ return a - -4294967296
+}
+
+//go:noinline
+func sub_Neg4294967296_int64_ssa(a int64) int64 {
+ return -4294967296 - a
+}
+
+//go:noinline
+func sub_int64_Neg1_ssa(a int64) int64 {
+ return a - -1
+}
+
+//go:noinline
+func sub_Neg1_int64_ssa(a int64) int64 {
+ return -1 - a
+}
+
+//go:noinline
+func sub_int64_0_ssa(a int64) int64 {
+ return a - 0
+}
+
+//go:noinline
+func sub_0_int64_ssa(a int64) int64 {
+ return 0 - a
+}
+
+//go:noinline
+func sub_int64_1_ssa(a int64) int64 {
+ return a - 1
+}
+
+//go:noinline
+func sub_1_int64_ssa(a int64) int64 {
+ return 1 - a
+}
+
+//go:noinline
+func sub_int64_4294967296_ssa(a int64) int64 {
+ return a - 4294967296
+}
+
+//go:noinline
+func sub_4294967296_int64_ssa(a int64) int64 {
+ return 4294967296 - a
+}
+
+//go:noinline
+func sub_int64_9223372036854775806_ssa(a int64) int64 {
+ return a - 9223372036854775806
+}
+
+//go:noinline
+func sub_9223372036854775806_int64_ssa(a int64) int64 {
+ return 9223372036854775806 - a
+}
+
+//go:noinline
+func sub_int64_9223372036854775807_ssa(a int64) int64 {
+ return a - 9223372036854775807
+}
+
+//go:noinline
+func sub_9223372036854775807_int64_ssa(a int64) int64 {
+ return 9223372036854775807 - a
+}
+
+//go:noinline
+func div_int64_Neg9223372036854775808_ssa(a int64) int64 {
+ return a / -9223372036854775808
+}
+
+//go:noinline
+func div_Neg9223372036854775808_int64_ssa(a int64) int64 {
+ return -9223372036854775808 / a
+}
+
+//go:noinline
+func div_int64_Neg9223372036854775807_ssa(a int64) int64 {
+ return a / -9223372036854775807
+}
+
+//go:noinline
+func div_Neg9223372036854775807_int64_ssa(a int64) int64 {
+ return -9223372036854775807 / a
+}
+
+//go:noinline
+func div_int64_Neg4294967296_ssa(a int64) int64 {
+ return a / -4294967296
+}
+
+//go:noinline
+func div_Neg4294967296_int64_ssa(a int64) int64 {
+ return -4294967296 / a
+}
+
+//go:noinline
+func div_int64_Neg1_ssa(a int64) int64 {
+ return a / -1
+}
+
+//go:noinline
+func div_Neg1_int64_ssa(a int64) int64 {
+ return -1 / a
+}
+
+//go:noinline
+func div_0_int64_ssa(a int64) int64 {
+ return 0 / a
+}
+
+//go:noinline
+func div_int64_1_ssa(a int64) int64 {
+ return a / 1
+}
+
+//go:noinline
+func div_1_int64_ssa(a int64) int64 {
+ return 1 / a
+}
+
+//go:noinline
+func div_int64_4294967296_ssa(a int64) int64 {
+ return a / 4294967296
+}
+
+//go:noinline
+func div_4294967296_int64_ssa(a int64) int64 {
+ return 4294967296 / a
+}
+
+//go:noinline
+func div_int64_9223372036854775806_ssa(a int64) int64 {
+ return a / 9223372036854775806
+}
+
+//go:noinline
+func div_9223372036854775806_int64_ssa(a int64) int64 {
+ return 9223372036854775806 / a
+}
+
+//go:noinline
+func div_int64_9223372036854775807_ssa(a int64) int64 {
+ return a / 9223372036854775807
+}
+
+//go:noinline
+func div_9223372036854775807_int64_ssa(a int64) int64 {
+ return 9223372036854775807 / a
+}
+
+//go:noinline
+func mul_int64_Neg9223372036854775808_ssa(a int64) int64 {
+ return a * -9223372036854775808
+}
+
+//go:noinline
+func mul_Neg9223372036854775808_int64_ssa(a int64) int64 {
+ return -9223372036854775808 * a
+}
+
+//go:noinline
+func mul_int64_Neg9223372036854775807_ssa(a int64) int64 {
+ return a * -9223372036854775807
+}
+
+//go:noinline
+func mul_Neg9223372036854775807_int64_ssa(a int64) int64 {
+ return -9223372036854775807 * a
+}
+
+//go:noinline
+func mul_int64_Neg4294967296_ssa(a int64) int64 {
+ return a * -4294967296
+}
+
+//go:noinline
+func mul_Neg4294967296_int64_ssa(a int64) int64 {
+ return -4294967296 * a
+}
+
+//go:noinline
+func mul_int64_Neg1_ssa(a int64) int64 {
+ return a * -1
+}
+
+//go:noinline
+func mul_Neg1_int64_ssa(a int64) int64 {
+ return -1 * a
+}
+
+//go:noinline
+func mul_int64_0_ssa(a int64) int64 {
+ return a * 0
+}
+
+//go:noinline
+func mul_0_int64_ssa(a int64) int64 {
+ return 0 * a
+}
+
+//go:noinline
+func mul_int64_1_ssa(a int64) int64 {
+ return a * 1
+}
+
+//go:noinline
+func mul_1_int64_ssa(a int64) int64 {
+ return 1 * a
+}
+
+//go:noinline
+func mul_int64_4294967296_ssa(a int64) int64 {
+ return a * 4294967296
+}
+
+//go:noinline
+func mul_4294967296_int64_ssa(a int64) int64 {
+ return 4294967296 * a
+}
+
+//go:noinline
+func mul_int64_9223372036854775806_ssa(a int64) int64 {
+ return a * 9223372036854775806
+}
+
+//go:noinline
+func mul_9223372036854775806_int64_ssa(a int64) int64 {
+ return 9223372036854775806 * a
+}
+
+//go:noinline
+func mul_int64_9223372036854775807_ssa(a int64) int64 {
+ return a * 9223372036854775807
+}
+
+//go:noinline
+func mul_9223372036854775807_int64_ssa(a int64) int64 {
+ return 9223372036854775807 * a
+}
+
+//go:noinline
+func add_uint32_0_ssa(a uint32) uint32 {
+ return a + 0
+}
+
+//go:noinline
+func add_0_uint32_ssa(a uint32) uint32 {
+ return 0 + a
+}
+
+//go:noinline
+func add_uint32_1_ssa(a uint32) uint32 {
+ return a + 1
+}
+
+//go:noinline
+func add_1_uint32_ssa(a uint32) uint32 {
+ return 1 + a
+}
+
+//go:noinline
+func add_uint32_4294967295_ssa(a uint32) uint32 {
+ return a + 4294967295
+}
+
+//go:noinline
+func add_4294967295_uint32_ssa(a uint32) uint32 {
+ return 4294967295 + a
+}
+
+//go:noinline
+func sub_uint32_0_ssa(a uint32) uint32 {
+ return a - 0
+}
+
+//go:noinline
+func sub_0_uint32_ssa(a uint32) uint32 {
+ return 0 - a
+}
+
+//go:noinline
+func sub_uint32_1_ssa(a uint32) uint32 {
+ return a - 1
+}
+
+//go:noinline
+func sub_1_uint32_ssa(a uint32) uint32 {
+ return 1 - a
+}
+
+//go:noinline
+func sub_uint32_4294967295_ssa(a uint32) uint32 {
+ return a - 4294967295
+}
+
+//go:noinline
+func sub_4294967295_uint32_ssa(a uint32) uint32 {
+ return 4294967295 - a
+}
+
+//go:noinline
+func div_0_uint32_ssa(a uint32) uint32 {
+ return 0 / a
+}
+
+//go:noinline
+func div_uint32_1_ssa(a uint32) uint32 {
+ return a / 1
+}
+
+//go:noinline
+func div_1_uint32_ssa(a uint32) uint32 {
+ return 1 / a
+}
+
+//go:noinline
+func div_uint32_4294967295_ssa(a uint32) uint32 {
+ return a / 4294967295
+}
+
+//go:noinline
+func div_4294967295_uint32_ssa(a uint32) uint32 {
+ return 4294967295 / a
+}
+
+//go:noinline
+func mul_uint32_0_ssa(a uint32) uint32 {
+ return a * 0
+}
+
+//go:noinline
+func mul_0_uint32_ssa(a uint32) uint32 {
+ return 0 * a
+}
+
+//go:noinline
+func mul_uint32_1_ssa(a uint32) uint32 {
+ return a * 1
+}
+
+//go:noinline
+func mul_1_uint32_ssa(a uint32) uint32 {
+ return 1 * a
+}
+
+//go:noinline
+func mul_uint32_4294967295_ssa(a uint32) uint32 {
+ return a * 4294967295
+}
+
+//go:noinline
+func mul_4294967295_uint32_ssa(a uint32) uint32 {
+ return 4294967295 * a
+}
+
+//go:noinline
+func lsh_uint32_0_ssa(a uint32) uint32 {
+ return a << 0
+}
+
+//go:noinline
+func lsh_0_uint32_ssa(a uint32) uint32 {
+ return 0 << a
+}
+
+//go:noinline
+func lsh_uint32_1_ssa(a uint32) uint32 {
+ return a << 1
+}
+
+//go:noinline
+func lsh_1_uint32_ssa(a uint32) uint32 {
+ return 1 << a
+}
+
+//go:noinline
+func lsh_uint32_4294967295_ssa(a uint32) uint32 {
+ return a << 4294967295
+}
+
+//go:noinline
+func lsh_4294967295_uint32_ssa(a uint32) uint32 {
+ return 4294967295 << a
+}
+
+//go:noinline
+func rsh_uint32_0_ssa(a uint32) uint32 {
+ return a >> 0
+}
+
+//go:noinline
+func rsh_0_uint32_ssa(a uint32) uint32 {
+ return 0 >> a
+}
+
+//go:noinline
+func rsh_uint32_1_ssa(a uint32) uint32 {
+ return a >> 1
+}
+
+//go:noinline
+func rsh_1_uint32_ssa(a uint32) uint32 {
+ return 1 >> a
+}
+
+//go:noinline
+func rsh_uint32_4294967295_ssa(a uint32) uint32 {
+ return a >> 4294967295
+}
+
+//go:noinline
+func rsh_4294967295_uint32_ssa(a uint32) uint32 {
+ return 4294967295 >> a
+}
+
+//go:noinline
+func add_int32_Neg2147483648_ssa(a int32) int32 {
+ return a + -2147483648
+}
+
+//go:noinline
+func add_Neg2147483648_int32_ssa(a int32) int32 {
+ return -2147483648 + a
+}
+
+//go:noinline
+func add_int32_Neg2147483647_ssa(a int32) int32 {
+ return a + -2147483647
+}
+
+//go:noinline
+func add_Neg2147483647_int32_ssa(a int32) int32 {
+ return -2147483647 + a
+}
+
+//go:noinline
+func add_int32_Neg1_ssa(a int32) int32 {
+ return a + -1
+}
+
+//go:noinline
+func add_Neg1_int32_ssa(a int32) int32 {
+ return -1 + a
+}
+
+//go:noinline
+func add_int32_0_ssa(a int32) int32 {
+ return a + 0
+}
+
+//go:noinline
+func add_0_int32_ssa(a int32) int32 {
+ return 0 + a
+}
+
+//go:noinline
+func add_int32_1_ssa(a int32) int32 {
+ return a + 1
+}
+
+//go:noinline
+func add_1_int32_ssa(a int32) int32 {
+ return 1 + a
+}
+
+//go:noinline
+func add_int32_2147483647_ssa(a int32) int32 {
+ return a + 2147483647
+}
+
+//go:noinline
+func add_2147483647_int32_ssa(a int32) int32 {
+ return 2147483647 + a
+}
+
+//go:noinline
+func sub_int32_Neg2147483648_ssa(a int32) int32 {
+ return a - -2147483648
+}
+
+//go:noinline
+func sub_Neg2147483648_int32_ssa(a int32) int32 {
+ return -2147483648 - a
+}
+
+//go:noinline
+func sub_int32_Neg2147483647_ssa(a int32) int32 {
+ return a - -2147483647
+}
+
+//go:noinline
+func sub_Neg2147483647_int32_ssa(a int32) int32 {
+ return -2147483647 - a
+}
+
+//go:noinline
+func sub_int32_Neg1_ssa(a int32) int32 {
+ return a - -1
+}
+
+//go:noinline
+func sub_Neg1_int32_ssa(a int32) int32 {
+ return -1 - a
+}
+
+//go:noinline
+func sub_int32_0_ssa(a int32) int32 {
+ return a - 0
+}
+
+//go:noinline
+func sub_0_int32_ssa(a int32) int32 {
+ return 0 - a
+}
+
+//go:noinline
+func sub_int32_1_ssa(a int32) int32 {
+ return a - 1
+}
+
+//go:noinline
+func sub_1_int32_ssa(a int32) int32 {
+ return 1 - a
+}
+
+//go:noinline
+func sub_int32_2147483647_ssa(a int32) int32 {
+ return a - 2147483647
+}
+
+//go:noinline
+func sub_2147483647_int32_ssa(a int32) int32 {
+ return 2147483647 - a
+}
+
+//go:noinline
+func div_int32_Neg2147483648_ssa(a int32) int32 {
+ return a / -2147483648
+}
+
+//go:noinline
+func div_Neg2147483648_int32_ssa(a int32) int32 {
+ return -2147483648 / a
+}
+
+//go:noinline
+func div_int32_Neg2147483647_ssa(a int32) int32 {
+ return a / -2147483647
+}
+
+//go:noinline
+func div_Neg2147483647_int32_ssa(a int32) int32 {
+ return -2147483647 / a
+}
+
+//go:noinline
+func div_int32_Neg1_ssa(a int32) int32 {
+ return a / -1
+}
+
+//go:noinline
+func div_Neg1_int32_ssa(a int32) int32 {
+ return -1 / a
+}
+
+//go:noinline
+func div_0_int32_ssa(a int32) int32 {
+ return 0 / a
+}
+
+//go:noinline
+func div_int32_1_ssa(a int32) int32 {
+ return a / 1
+}
+
+//go:noinline
+func div_1_int32_ssa(a int32) int32 {
+ return 1 / a
+}
+
+//go:noinline
+func div_int32_2147483647_ssa(a int32) int32 {
+ return a / 2147483647
+}
+
+//go:noinline
+func div_2147483647_int32_ssa(a int32) int32 {
+ return 2147483647 / a
+}
+
+//go:noinline
+func mul_int32_Neg2147483648_ssa(a int32) int32 {
+ return a * -2147483648
+}
+
+//go:noinline
+func mul_Neg2147483648_int32_ssa(a int32) int32 {
+ return -2147483648 * a
+}
+
+//go:noinline
+func mul_int32_Neg2147483647_ssa(a int32) int32 {
+ return a * -2147483647
+}
+
+//go:noinline
+func mul_Neg2147483647_int32_ssa(a int32) int32 {
+ return -2147483647 * a
+}
+
+//go:noinline
+func mul_int32_Neg1_ssa(a int32) int32 {
+ return a * -1
+}
+
+//go:noinline
+func mul_Neg1_int32_ssa(a int32) int32 {
+ return -1 * a
+}
+
+//go:noinline
+func mul_int32_0_ssa(a int32) int32 {
+ return a * 0
+}
+
+//go:noinline
+func mul_0_int32_ssa(a int32) int32 {
+ return 0 * a
+}
+
+//go:noinline
+func mul_int32_1_ssa(a int32) int32 {
+ return a * 1
+}
+
+//go:noinline
+func mul_1_int32_ssa(a int32) int32 {
+ return 1 * a
+}
+
+//go:noinline
+func mul_int32_2147483647_ssa(a int32) int32 {
+ return a * 2147483647
+}
+
+//go:noinline
+func mul_2147483647_int32_ssa(a int32) int32 {
+ return 2147483647 * a
+}
+
+//go:noinline
+func add_uint16_0_ssa(a uint16) uint16 {
+ return a + 0
+}
+
+//go:noinline
+func add_0_uint16_ssa(a uint16) uint16 {
+ return 0 + a
+}
+
+//go:noinline
+func add_uint16_1_ssa(a uint16) uint16 {
+ return a + 1
+}
+
+//go:noinline
+func add_1_uint16_ssa(a uint16) uint16 {
+ return 1 + a
+}
+
+//go:noinline
+func add_uint16_65535_ssa(a uint16) uint16 {
+ return a + 65535
+}
+
+//go:noinline
+func add_65535_uint16_ssa(a uint16) uint16 {
+ return 65535 + a
+}
+
+//go:noinline
+func sub_uint16_0_ssa(a uint16) uint16 {
+ return a - 0
+}
+
+//go:noinline
+func sub_0_uint16_ssa(a uint16) uint16 {
+ return 0 - a
+}
+
+//go:noinline
+func sub_uint16_1_ssa(a uint16) uint16 {
+ return a - 1
+}
+
+//go:noinline
+func sub_1_uint16_ssa(a uint16) uint16 {
+ return 1 - a
+}
+
+//go:noinline
+func sub_uint16_65535_ssa(a uint16) uint16 {
+ return a - 65535
+}
+
+//go:noinline
+func sub_65535_uint16_ssa(a uint16) uint16 {
+ return 65535 - a
+}
+
+//go:noinline
+func div_0_uint16_ssa(a uint16) uint16 {
+ return 0 / a
+}
+
+//go:noinline
+func div_uint16_1_ssa(a uint16) uint16 {
+ return a / 1
+}
+
+//go:noinline
+func div_1_uint16_ssa(a uint16) uint16 {
+ return 1 / a
+}
+
+//go:noinline
+func div_uint16_65535_ssa(a uint16) uint16 {
+ return a / 65535
+}
+
+//go:noinline
+func div_65535_uint16_ssa(a uint16) uint16 {
+ return 65535 / a
+}
+
+//go:noinline
+func mul_uint16_0_ssa(a uint16) uint16 {
+ return a * 0
+}
+
+//go:noinline
+func mul_0_uint16_ssa(a uint16) uint16 {
+ return 0 * a
+}
+
+//go:noinline
+func mul_uint16_1_ssa(a uint16) uint16 {
+ return a * 1
+}
+
+//go:noinline
+func mul_1_uint16_ssa(a uint16) uint16 {
+ return 1 * a
+}
+
+//go:noinline
+func mul_uint16_65535_ssa(a uint16) uint16 {
+ return a * 65535
+}
+
+//go:noinline
+func mul_65535_uint16_ssa(a uint16) uint16 {
+ return 65535 * a
+}
+
+//go:noinline
+func lsh_uint16_0_ssa(a uint16) uint16 {
+ return a << 0
+}
+
+//go:noinline
+func lsh_0_uint16_ssa(a uint16) uint16 {
+ return 0 << a
+}
+
+//go:noinline
+func lsh_uint16_1_ssa(a uint16) uint16 {
+ return a << 1
+}
+
+//go:noinline
+func lsh_1_uint16_ssa(a uint16) uint16 {
+ return 1 << a
+}
+
+//go:noinline
+func lsh_uint16_65535_ssa(a uint16) uint16 {
+ return a << 65535
+}
+
+//go:noinline
+func lsh_65535_uint16_ssa(a uint16) uint16 {
+ return 65535 << a
+}
+
+//go:noinline
+func rsh_uint16_0_ssa(a uint16) uint16 {
+ return a >> 0
+}
+
+//go:noinline
+func rsh_0_uint16_ssa(a uint16) uint16 {
+ return 0 >> a
+}
+
+//go:noinline
+func rsh_uint16_1_ssa(a uint16) uint16 {
+ return a >> 1
+}
+
+//go:noinline
+func rsh_1_uint16_ssa(a uint16) uint16 {
+ return 1 >> a
+}
+
+//go:noinline
+func rsh_uint16_65535_ssa(a uint16) uint16 {
+ return a >> 65535
+}
+
+//go:noinline
+func rsh_65535_uint16_ssa(a uint16) uint16 {
+ return 65535 >> a
+}
+
+//go:noinline
+func add_int16_Neg32768_ssa(a int16) int16 {
+ return a + -32768
+}
+
+//go:noinline
+func add_Neg32768_int16_ssa(a int16) int16 {
+ return -32768 + a
+}
+
+//go:noinline
+func add_int16_Neg32767_ssa(a int16) int16 {
+ return a + -32767
+}
+
+//go:noinline
+func add_Neg32767_int16_ssa(a int16) int16 {
+ return -32767 + a
+}
+
+//go:noinline
+func add_int16_Neg1_ssa(a int16) int16 {
+ return a + -1
+}
+
+//go:noinline
+func add_Neg1_int16_ssa(a int16) int16 {
+ return -1 + a
+}
+
+//go:noinline
+func add_int16_0_ssa(a int16) int16 {
+ return a + 0
+}
+
+//go:noinline
+func add_0_int16_ssa(a int16) int16 {
+ return 0 + a
+}
+
+//go:noinline
+func add_int16_1_ssa(a int16) int16 {
+ return a + 1
+}
+
+//go:noinline
+func add_1_int16_ssa(a int16) int16 {
+ return 1 + a
+}
+
+//go:noinline
+func add_int16_32766_ssa(a int16) int16 {
+ return a + 32766
+}
+
+//go:noinline
+func add_32766_int16_ssa(a int16) int16 {
+ return 32766 + a
+}
+
+//go:noinline
+func add_int16_32767_ssa(a int16) int16 {
+ return a + 32767
+}
+
+//go:noinline
+func add_32767_int16_ssa(a int16) int16 {
+ return 32767 + a
+}
+
+//go:noinline
+func sub_int16_Neg32768_ssa(a int16) int16 {
+ return a - -32768
+}
+
+//go:noinline
+func sub_Neg32768_int16_ssa(a int16) int16 {
+ return -32768 - a
+}
+
+//go:noinline
+func sub_int16_Neg32767_ssa(a int16) int16 {
+ return a - -32767
+}
+
+//go:noinline
+func sub_Neg32767_int16_ssa(a int16) int16 {
+ return -32767 - a
+}
+
+//go:noinline
+func sub_int16_Neg1_ssa(a int16) int16 {
+ return a - -1
+}
+
+//go:noinline
+func sub_Neg1_int16_ssa(a int16) int16 {
+ return -1 - a
+}
+
+//go:noinline
+func sub_int16_0_ssa(a int16) int16 {
+ return a - 0
+}
+
+//go:noinline
+func sub_0_int16_ssa(a int16) int16 {
+ return 0 - a
+}
+
+//go:noinline
+func sub_int16_1_ssa(a int16) int16 {
+ return a - 1
+}
+
+//go:noinline
+func sub_1_int16_ssa(a int16) int16 {
+ return 1 - a
+}
+
+//go:noinline
+func sub_int16_32766_ssa(a int16) int16 {
+ return a - 32766
+}
+
+//go:noinline
+func sub_32766_int16_ssa(a int16) int16 {
+ return 32766 - a
+}
+
+//go:noinline
+func sub_int16_32767_ssa(a int16) int16 {
+ return a - 32767
+}
+
+//go:noinline
+func sub_32767_int16_ssa(a int16) int16 {
+ return 32767 - a
+}
+
+//go:noinline
+func div_int16_Neg32768_ssa(a int16) int16 {
+ return a / -32768
+}
+
+//go:noinline
+func div_Neg32768_int16_ssa(a int16) int16 {
+ return -32768 / a
+}
+
+//go:noinline
+func div_int16_Neg32767_ssa(a int16) int16 {
+ return a / -32767
+}
+
+//go:noinline
+func div_Neg32767_int16_ssa(a int16) int16 {
+ return -32767 / a
+}
+
+//go:noinline
+func div_int16_Neg1_ssa(a int16) int16 {
+ return a / -1
+}
+
+//go:noinline
+func div_Neg1_int16_ssa(a int16) int16 {
+ return -1 / a
+}
+
+//go:noinline
+func div_0_int16_ssa(a int16) int16 {
+ return 0 / a
+}
+
+//go:noinline
+func div_int16_1_ssa(a int16) int16 {
+ return a / 1
+}
+
+//go:noinline
+func div_1_int16_ssa(a int16) int16 {
+ return 1 / a
+}
+
+//go:noinline
+func div_int16_32766_ssa(a int16) int16 {
+ return a / 32766
+}
+
+//go:noinline
+func div_32766_int16_ssa(a int16) int16 {
+ return 32766 / a
+}
+
+//go:noinline
+func div_int16_32767_ssa(a int16) int16 {
+ return a / 32767
+}
+
+//go:noinline
+func div_32767_int16_ssa(a int16) int16 {
+ return 32767 / a
+}
+
+//go:noinline
+func mul_int16_Neg32768_ssa(a int16) int16 {
+ return a * -32768
+}
+
+//go:noinline
+func mul_Neg32768_int16_ssa(a int16) int16 {
+ return -32768 * a
+}
+
+//go:noinline
+func mul_int16_Neg32767_ssa(a int16) int16 {
+ return a * -32767
+}
+
+//go:noinline
+func mul_Neg32767_int16_ssa(a int16) int16 {
+ return -32767 * a
+}
+
+//go:noinline
+func mul_int16_Neg1_ssa(a int16) int16 {
+ return a * -1
+}
+
+//go:noinline
+func mul_Neg1_int16_ssa(a int16) int16 {
+ return -1 * a
+}
+
+//go:noinline
+func mul_int16_0_ssa(a int16) int16 {
+ return a * 0
+}
+
+//go:noinline
+func mul_0_int16_ssa(a int16) int16 {
+ return 0 * a
+}
+
+//go:noinline
+func mul_int16_1_ssa(a int16) int16 {
+ return a * 1
+}
+
+//go:noinline
+func mul_1_int16_ssa(a int16) int16 {
+ return 1 * a
+}
+
+//go:noinline
+func mul_int16_32766_ssa(a int16) int16 {
+ return a * 32766
+}
+
+//go:noinline
+func mul_32766_int16_ssa(a int16) int16 {
+ return 32766 * a
+}
+
+//go:noinline
+func mul_int16_32767_ssa(a int16) int16 {
+ return a * 32767
+}
+
+//go:noinline
+func mul_32767_int16_ssa(a int16) int16 {
+ return 32767 * a
+}
+
+//go:noinline
+func add_uint8_0_ssa(a uint8) uint8 {
+ return a + 0
+}
+
+//go:noinline
+func add_0_uint8_ssa(a uint8) uint8 {
+ return 0 + a
+}
+
+//go:noinline
+func add_uint8_1_ssa(a uint8) uint8 {
+ return a + 1
+}
+
+//go:noinline
+func add_1_uint8_ssa(a uint8) uint8 {
+ return 1 + a
+}
+
+//go:noinline
+func add_uint8_255_ssa(a uint8) uint8 {
+ return a + 255
+}
+
+//go:noinline
+func add_255_uint8_ssa(a uint8) uint8 {
+ return 255 + a
+}
+
+//go:noinline
+func sub_uint8_0_ssa(a uint8) uint8 {
+ return a - 0
+}
+
+//go:noinline
+func sub_0_uint8_ssa(a uint8) uint8 {
+ return 0 - a
+}
+
+//go:noinline
+func sub_uint8_1_ssa(a uint8) uint8 {
+ return a - 1
+}
+
+//go:noinline
+func sub_1_uint8_ssa(a uint8) uint8 {
+ return 1 - a
+}
+
+//go:noinline
+func sub_uint8_255_ssa(a uint8) uint8 {
+ return a - 255
+}
+
+//go:noinline
+func sub_255_uint8_ssa(a uint8) uint8 {
+ return 255 - a
+}
+
+//go:noinline
+func div_0_uint8_ssa(a uint8) uint8 {
+ return 0 / a
+}
+
+//go:noinline
+func div_uint8_1_ssa(a uint8) uint8 {
+ return a / 1
+}
+
+//go:noinline
+func div_1_uint8_ssa(a uint8) uint8 {
+ return 1 / a
+}
+
+//go:noinline
+func div_uint8_255_ssa(a uint8) uint8 {
+ return a / 255
+}
+
+//go:noinline
+func div_255_uint8_ssa(a uint8) uint8 {
+ return 255 / a
+}
+
+//go:noinline
+func mul_uint8_0_ssa(a uint8) uint8 {
+ return a * 0
+}
+
+//go:noinline
+func mul_0_uint8_ssa(a uint8) uint8 {
+ return 0 * a
+}
+
+//go:noinline
+func mul_uint8_1_ssa(a uint8) uint8 {
+ return a * 1
+}
+
+//go:noinline
+func mul_1_uint8_ssa(a uint8) uint8 {
+ return 1 * a
+}
+
+//go:noinline
+func mul_uint8_255_ssa(a uint8) uint8 {
+ return a * 255
+}
+
+//go:noinline
+func mul_255_uint8_ssa(a uint8) uint8 {
+ return 255 * a
+}
+
+//go:noinline
+func lsh_uint8_0_ssa(a uint8) uint8 {
+ return a << 0
+}
+
+//go:noinline
+func lsh_0_uint8_ssa(a uint8) uint8 {
+ return 0 << a
+}
+
+//go:noinline
+func lsh_uint8_1_ssa(a uint8) uint8 {
+ return a << 1
+}
+
+//go:noinline
+func lsh_1_uint8_ssa(a uint8) uint8 {
+ return 1 << a
+}
+
+//go:noinline
+func lsh_uint8_255_ssa(a uint8) uint8 {
+ return a << 255
+}
+
+//go:noinline
+func lsh_255_uint8_ssa(a uint8) uint8 {
+ return 255 << a
+}
+
+//go:noinline
+func rsh_uint8_0_ssa(a uint8) uint8 {
+ return a >> 0
+}
+
+//go:noinline
+func rsh_0_uint8_ssa(a uint8) uint8 {
+ return 0 >> a
+}
+
+//go:noinline
+func rsh_uint8_1_ssa(a uint8) uint8 {
+ return a >> 1
+}
+
+//go:noinline
+func rsh_1_uint8_ssa(a uint8) uint8 {
+ return 1 >> a
+}
+
+//go:noinline
+func rsh_uint8_255_ssa(a uint8) uint8 {
+ return a >> 255
+}
+
+//go:noinline
+func rsh_255_uint8_ssa(a uint8) uint8 {
+ return 255 >> a
+}
+
+//go:noinline
+func add_int8_Neg128_ssa(a int8) int8 {
+ return a + -128
+}
+
+//go:noinline
+func add_Neg128_int8_ssa(a int8) int8 {
+ return -128 + a
+}
+
+//go:noinline
+func add_int8_Neg127_ssa(a int8) int8 {
+ return a + -127
+}
+
+//go:noinline
+func add_Neg127_int8_ssa(a int8) int8 {
+ return -127 + a
+}
+
+//go:noinline
+func add_int8_Neg1_ssa(a int8) int8 {
+ return a + -1
+}
+
+//go:noinline
+func add_Neg1_int8_ssa(a int8) int8 {
+ return -1 + a
+}
+
+//go:noinline
+func add_int8_0_ssa(a int8) int8 {
+ return a + 0
+}
+
+//go:noinline
+func add_0_int8_ssa(a int8) int8 {
+ return 0 + a
+}
+
+//go:noinline
+func add_int8_1_ssa(a int8) int8 {
+ return a + 1
+}
+
+//go:noinline
+func add_1_int8_ssa(a int8) int8 {
+ return 1 + a
+}
+
+//go:noinline
+func add_int8_126_ssa(a int8) int8 {
+ return a + 126
+}
+
+//go:noinline
+func add_126_int8_ssa(a int8) int8 {
+ return 126 + a
+}
+
+//go:noinline
+func add_int8_127_ssa(a int8) int8 {
+ return a + 127
+}
+
+//go:noinline
+func add_127_int8_ssa(a int8) int8 {
+ return 127 + a
+}
+
+//go:noinline
+func sub_int8_Neg128_ssa(a int8) int8 {
+ return a - -128
+}
+
+//go:noinline
+func sub_Neg128_int8_ssa(a int8) int8 {
+ return -128 - a
+}
+
+//go:noinline
+func sub_int8_Neg127_ssa(a int8) int8 {
+ return a - -127
+}
+
+//go:noinline
+func sub_Neg127_int8_ssa(a int8) int8 {
+ return -127 - a
+}
+
+//go:noinline
+func sub_int8_Neg1_ssa(a int8) int8 {
+ return a - -1
+}
+
+//go:noinline
+func sub_Neg1_int8_ssa(a int8) int8 {
+ return -1 - a
+}
+
+//go:noinline
+func sub_int8_0_ssa(a int8) int8 {
+ return a - 0
+}
+
+//go:noinline
+func sub_0_int8_ssa(a int8) int8 {
+ return 0 - a
+}
+
+//go:noinline
+func sub_int8_1_ssa(a int8) int8 {
+ return a - 1
+}
+
+//go:noinline
+func sub_1_int8_ssa(a int8) int8 {
+ return 1 - a
+}
+
+//go:noinline
+func sub_int8_126_ssa(a int8) int8 {
+ return a - 126
+}
+
+//go:noinline
+func sub_126_int8_ssa(a int8) int8 {
+ return 126 - a
+}
+
+//go:noinline
+func sub_int8_127_ssa(a int8) int8 {
+ return a - 127
+}
+
+//go:noinline
+func sub_127_int8_ssa(a int8) int8 {
+ return 127 - a
+}
+
+//go:noinline
+func div_int8_Neg128_ssa(a int8) int8 {
+ return a / -128
+}
+
+//go:noinline
+func div_Neg128_int8_ssa(a int8) int8 {
+ return -128 / a
+}
+
+//go:noinline
+func div_int8_Neg127_ssa(a int8) int8 {
+ return a / -127
+}
+
+//go:noinline
+func div_Neg127_int8_ssa(a int8) int8 {
+ return -127 / a
+}
+
+//go:noinline
+func div_int8_Neg1_ssa(a int8) int8 {
+ return a / -1
+}
+
+//go:noinline
+func div_Neg1_int8_ssa(a int8) int8 {
+ return -1 / a
+}
+
+//go:noinline
+func div_0_int8_ssa(a int8) int8 {
+ return 0 / a
+}
+
+//go:noinline
+func div_int8_1_ssa(a int8) int8 {
+ return a / 1
+}
+
+//go:noinline
+func div_1_int8_ssa(a int8) int8 {
+ return 1 / a
+}
+
+//go:noinline
+func div_int8_126_ssa(a int8) int8 {
+ return a / 126
+}
+
+//go:noinline
+func div_126_int8_ssa(a int8) int8 {
+ return 126 / a
+}
+
+//go:noinline
+func div_int8_127_ssa(a int8) int8 {
+ return a / 127
+}
+
+//go:noinline
+func div_127_int8_ssa(a int8) int8 {
+ return 127 / a
+}
+
+//go:noinline
+func mul_int8_Neg128_ssa(a int8) int8 {
+ return a * -128
+}
+
+//go:noinline
+func mul_Neg128_int8_ssa(a int8) int8 {
+ return -128 * a
+}
+
+//go:noinline
+func mul_int8_Neg127_ssa(a int8) int8 {
+ return a * -127
+}
+
+//go:noinline
+func mul_Neg127_int8_ssa(a int8) int8 {
+ return -127 * a
+}
+
+//go:noinline
+func mul_int8_Neg1_ssa(a int8) int8 {
+ return a * -1
+}
+
+//go:noinline
+func mul_Neg1_int8_ssa(a int8) int8 {
+ return -1 * a
+}
+
+//go:noinline
+func mul_int8_0_ssa(a int8) int8 {
+ return a * 0
+}
+
+//go:noinline
+func mul_0_int8_ssa(a int8) int8 {
+ return 0 * a
+}
+
+//go:noinline
+func mul_int8_1_ssa(a int8) int8 {
+ return a * 1
+}
+
+//go:noinline
+func mul_1_int8_ssa(a int8) int8 {
+ return 1 * a
+}
+
+//go:noinline
+func mul_int8_126_ssa(a int8) int8 {
+ return a * 126
+}
+
+//go:noinline
+func mul_126_int8_ssa(a int8) int8 {
+ return 126 * a
+}
+
+//go:noinline
+func mul_int8_127_ssa(a int8) int8 {
+ return a * 127
+}
+
+//go:noinline
+func mul_127_int8_ssa(a int8) int8 {
+ return 127 * a
+}
+
+// failed records whether any of the checks in main below observed an
+// unexpected result; each mismatch is reported individually via fmt.Printf.
+var failed bool
+
+func main() {
+
+ if got := add_0_uint64_ssa(0); got != 0 {
+ fmt.Printf("add_uint64 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_0_ssa(0); got != 0 {
+ fmt.Printf("add_uint64 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint64_ssa(1); got != 1 {
+ fmt.Printf("add_uint64 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_0_ssa(1); got != 1 {
+ fmt.Printf("add_uint64 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint64_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("add_uint64 0+4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_0_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("add_uint64 4294967296+0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint64_ssa(18446744073709551615); got != 18446744073709551615 {
+ fmt.Printf("add_uint64 0+18446744073709551615 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_0_ssa(18446744073709551615); got != 18446744073709551615 {
+ fmt.Printf("add_uint64 18446744073709551615+0 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint64_ssa(0); got != 1 {
+ fmt.Printf("add_uint64 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_1_ssa(0); got != 1 {
+ fmt.Printf("add_uint64 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint64_ssa(1); got != 2 {
+ fmt.Printf("add_uint64 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_1_ssa(1); got != 2 {
+ fmt.Printf("add_uint64 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint64_ssa(4294967296); got != 4294967297 {
+ fmt.Printf("add_uint64 1+4294967296 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_1_ssa(4294967296); got != 4294967297 {
+ fmt.Printf("add_uint64 4294967296+1 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("add_uint64 1+18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_1_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("add_uint64 18446744073709551615+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_uint64_ssa(0); got != 4294967296 {
+ fmt.Printf("add_uint64 4294967296+0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_4294967296_ssa(0); got != 4294967296 {
+ fmt.Printf("add_uint64 0+4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_uint64_ssa(1); got != 4294967297 {
+ fmt.Printf("add_uint64 4294967296+1 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_4294967296_ssa(1); got != 4294967297 {
+ fmt.Printf("add_uint64 1+4294967296 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_uint64_ssa(4294967296); got != 8589934592 {
+ fmt.Printf("add_uint64 4294967296+4294967296 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_4294967296_ssa(4294967296); got != 8589934592 {
+ fmt.Printf("add_uint64 4294967296+4294967296 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_uint64_ssa(18446744073709551615); got != 4294967295 {
+ fmt.Printf("add_uint64 4294967296+18446744073709551615 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_4294967296_ssa(18446744073709551615); got != 4294967295 {
+ fmt.Printf("add_uint64 18446744073709551615+4294967296 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_18446744073709551615_uint64_ssa(0); got != 18446744073709551615 {
+ fmt.Printf("add_uint64 18446744073709551615+0 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_18446744073709551615_ssa(0); got != 18446744073709551615 {
+ fmt.Printf("add_uint64 0+18446744073709551615 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := add_18446744073709551615_uint64_ssa(1); got != 0 {
+ fmt.Printf("add_uint64 18446744073709551615+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_18446744073709551615_ssa(1); got != 0 {
+ fmt.Printf("add_uint64 1+18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_18446744073709551615_uint64_ssa(4294967296); got != 4294967295 {
+ fmt.Printf("add_uint64 18446744073709551615+4294967296 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_18446744073709551615_ssa(4294967296); got != 4294967295 {
+ fmt.Printf("add_uint64 4294967296+18446744073709551615 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_18446744073709551615_uint64_ssa(18446744073709551615); got != 18446744073709551614 {
+ fmt.Printf("add_uint64 18446744073709551615+18446744073709551615 = %d, wanted 18446744073709551614\n", got)
+ failed = true
+ }
+
+ if got := add_uint64_18446744073709551615_ssa(18446744073709551615); got != 18446744073709551614 {
+ fmt.Printf("add_uint64 18446744073709551615+18446744073709551615 = %d, wanted 18446744073709551614\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint64_ssa(0); got != 0 {
+ fmt.Printf("sub_uint64 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_0_ssa(0); got != 0 {
+ fmt.Printf("sub_uint64 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint64_ssa(1); got != 18446744073709551615 {
+ fmt.Printf("sub_uint64 0-1 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_0_ssa(1); got != 1 {
+ fmt.Printf("sub_uint64 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint64_ssa(4294967296); got != 18446744069414584320 {
+ fmt.Printf("sub_uint64 0-4294967296 = %d, wanted 18446744069414584320\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_0_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("sub_uint64 4294967296-0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint64_ssa(18446744073709551615); got != 1 {
+ fmt.Printf("sub_uint64 0-18446744073709551615 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_0_ssa(18446744073709551615); got != 18446744073709551615 {
+ fmt.Printf("sub_uint64 18446744073709551615-0 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint64_ssa(0); got != 1 {
+ fmt.Printf("sub_uint64 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_1_ssa(0); got != 18446744073709551615 {
+ fmt.Printf("sub_uint64 0-1 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint64_ssa(1); got != 0 {
+ fmt.Printf("sub_uint64 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_1_ssa(1); got != 0 {
+ fmt.Printf("sub_uint64 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint64_ssa(4294967296); got != 18446744069414584321 {
+ fmt.Printf("sub_uint64 1-4294967296 = %d, wanted 18446744069414584321\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_1_ssa(4294967296); got != 4294967295 {
+ fmt.Printf("sub_uint64 4294967296-1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint64_ssa(18446744073709551615); got != 2 {
+ fmt.Printf("sub_uint64 1-18446744073709551615 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_1_ssa(18446744073709551615); got != 18446744073709551614 {
+ fmt.Printf("sub_uint64 18446744073709551615-1 = %d, wanted 18446744073709551614\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_uint64_ssa(0); got != 4294967296 {
+ fmt.Printf("sub_uint64 4294967296-0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_4294967296_ssa(0); got != 18446744069414584320 {
+ fmt.Printf("sub_uint64 0-4294967296 = %d, wanted 18446744069414584320\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_uint64_ssa(1); got != 4294967295 {
+ fmt.Printf("sub_uint64 4294967296-1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_4294967296_ssa(1); got != 18446744069414584321 {
+ fmt.Printf("sub_uint64 1-4294967296 = %d, wanted 18446744069414584321\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("sub_uint64 4294967296-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_4294967296_ssa(4294967296); got != 0 {
+ fmt.Printf("sub_uint64 4294967296-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_uint64_ssa(18446744073709551615); got != 4294967297 {
+ fmt.Printf("sub_uint64 4294967296-18446744073709551615 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_4294967296_ssa(18446744073709551615); got != 18446744069414584319 {
+ fmt.Printf("sub_uint64 18446744073709551615-4294967296 = %d, wanted 18446744069414584319\n", got)
+ failed = true
+ }
+
+ if got := sub_18446744073709551615_uint64_ssa(0); got != 18446744073709551615 {
+ fmt.Printf("sub_uint64 18446744073709551615-0 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_18446744073709551615_ssa(0); got != 1 {
+ fmt.Printf("sub_uint64 0-18446744073709551615 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_18446744073709551615_uint64_ssa(1); got != 18446744073709551614 {
+ fmt.Printf("sub_uint64 18446744073709551615-1 = %d, wanted 18446744073709551614\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_18446744073709551615_ssa(1); got != 2 {
+ fmt.Printf("sub_uint64 1-18446744073709551615 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_18446744073709551615_uint64_ssa(4294967296); got != 18446744069414584319 {
+ fmt.Printf("sub_uint64 18446744073709551615-4294967296 = %d, wanted 18446744069414584319\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_18446744073709551615_ssa(4294967296); got != 4294967297 {
+ fmt.Printf("sub_uint64 4294967296-18446744073709551615 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := sub_18446744073709551615_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("sub_uint64 18446744073709551615-18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint64_18446744073709551615_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("sub_uint64 18446744073709551615-18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_uint64_ssa(1); got != 0 {
+ fmt.Printf("div_uint64 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("div_uint64 0/4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("div_uint64 0/18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_1_ssa(0); got != 0 {
+ fmt.Printf("div_uint64 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_uint64_ssa(1); got != 1 {
+ fmt.Printf("div_uint64 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_1_ssa(1); got != 1 {
+ fmt.Printf("div_uint64 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_1_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("div_uint64 1/4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_1_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("div_uint64 4294967296/1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := div_1_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("div_uint64 1/18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_1_ssa(18446744073709551615); got != 18446744073709551615 {
+ fmt.Printf("div_uint64 18446744073709551615/1 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_4294967296_ssa(0); got != 0 {
+ fmt.Printf("div_uint64 0/4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_4294967296_uint64_ssa(1); got != 4294967296 {
+ fmt.Printf("div_uint64 4294967296/1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_4294967296_ssa(1); got != 0 {
+ fmt.Printf("div_uint64 1/4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_4294967296_uint64_ssa(4294967296); got != 1 {
+ fmt.Printf("div_uint64 4294967296/4294967296 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_4294967296_ssa(4294967296); got != 1 {
+ fmt.Printf("div_uint64 4294967296/4294967296 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_4294967296_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("div_uint64 4294967296/18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_4294967296_ssa(18446744073709551615); got != 4294967295 {
+ fmt.Printf("div_uint64 18446744073709551615/4294967296 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_18446744073709551615_ssa(0); got != 0 {
+ fmt.Printf("div_uint64 0/18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_18446744073709551615_uint64_ssa(1); got != 18446744073709551615 {
+ fmt.Printf("div_uint64 18446744073709551615/1 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_18446744073709551615_ssa(1); got != 0 {
+ fmt.Printf("div_uint64 1/18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_18446744073709551615_uint64_ssa(4294967296); got != 4294967295 {
+ fmt.Printf("div_uint64 18446744073709551615/4294967296 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_18446744073709551615_ssa(4294967296); got != 0 {
+ fmt.Printf("div_uint64 4294967296/18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_18446744073709551615_uint64_ssa(18446744073709551615); got != 1 {
+ fmt.Printf("div_uint64 18446744073709551615/18446744073709551615 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_uint64_18446744073709551615_ssa(18446744073709551615); got != 1 {
+ fmt.Printf("div_uint64 18446744073709551615/18446744073709551615 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint64_ssa(0); got != 0 {
+ fmt.Printf("mul_uint64 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_0_ssa(0); got != 0 {
+ fmt.Printf("mul_uint64 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint64_ssa(1); got != 0 {
+ fmt.Printf("mul_uint64 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_0_ssa(1); got != 0 {
+ fmt.Printf("mul_uint64 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_uint64 0*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_0_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_uint64 4294967296*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("mul_uint64 0*18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_0_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("mul_uint64 18446744073709551615*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint64_ssa(0); got != 0 {
+ fmt.Printf("mul_uint64 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_1_ssa(0); got != 0 {
+ fmt.Printf("mul_uint64 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint64_ssa(1); got != 1 {
+ fmt.Printf("mul_uint64 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_1_ssa(1); got != 1 {
+ fmt.Printf("mul_uint64 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint64_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("mul_uint64 1*4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_1_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("mul_uint64 4294967296*1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint64_ssa(18446744073709551615); got != 18446744073709551615 {
+ fmt.Printf("mul_uint64 1*18446744073709551615 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_1_ssa(18446744073709551615); got != 18446744073709551615 {
+ fmt.Printf("mul_uint64 18446744073709551615*1 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_uint64_ssa(0); got != 0 {
+ fmt.Printf("mul_uint64 4294967296*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_4294967296_ssa(0); got != 0 {
+ fmt.Printf("mul_uint64 0*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_uint64_ssa(1); got != 4294967296 {
+ fmt.Printf("mul_uint64 4294967296*1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_4294967296_ssa(1); got != 4294967296 {
+ fmt.Printf("mul_uint64 1*4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_uint64 4294967296*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_4294967296_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_uint64 4294967296*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_uint64_ssa(18446744073709551615); got != 18446744069414584320 {
+ fmt.Printf("mul_uint64 4294967296*18446744073709551615 = %d, wanted 18446744069414584320\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_4294967296_ssa(18446744073709551615); got != 18446744069414584320 {
+ fmt.Printf("mul_uint64 18446744073709551615*4294967296 = %d, wanted 18446744069414584320\n", got)
+ failed = true
+ }
+
+ if got := mul_18446744073709551615_uint64_ssa(0); got != 0 {
+ fmt.Printf("mul_uint64 18446744073709551615*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_18446744073709551615_ssa(0); got != 0 {
+ fmt.Printf("mul_uint64 0*18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_18446744073709551615_uint64_ssa(1); got != 18446744073709551615 {
+ fmt.Printf("mul_uint64 18446744073709551615*1 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_18446744073709551615_ssa(1); got != 18446744073709551615 {
+ fmt.Printf("mul_uint64 1*18446744073709551615 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := mul_18446744073709551615_uint64_ssa(4294967296); got != 18446744069414584320 {
+ fmt.Printf("mul_uint64 18446744073709551615*4294967296 = %d, wanted 18446744069414584320\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_18446744073709551615_ssa(4294967296); got != 18446744069414584320 {
+ fmt.Printf("mul_uint64 4294967296*18446744073709551615 = %d, wanted 18446744069414584320\n", got)
+ failed = true
+ }
+
+ if got := mul_18446744073709551615_uint64_ssa(18446744073709551615); got != 1 {
+ fmt.Printf("mul_uint64 18446744073709551615*18446744073709551615 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_uint64_18446744073709551615_ssa(18446744073709551615); got != 1 {
+ fmt.Printf("mul_uint64 18446744073709551615*18446744073709551615 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint64_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint64 0<<0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_0_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint64 0<<0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint64_ssa(1); got != 0 {
+ fmt.Printf("lsh_uint64 0<<1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_0_ssa(1); got != 1 {
+ fmt.Printf("lsh_uint64 1<<0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("lsh_uint64 0<<4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_0_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("lsh_uint64 4294967296<<0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("lsh_uint64 0<<18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_0_ssa(18446744073709551615); got != 18446744073709551615 {
+ fmt.Printf("lsh_uint64 18446744073709551615<<0 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint64_ssa(0); got != 1 {
+ fmt.Printf("lsh_uint64 1<<0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_1_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint64 0<<1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint64_ssa(1); got != 2 {
+ fmt.Printf("lsh_uint64 1<<1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_1_ssa(1); got != 2 {
+ fmt.Printf("lsh_uint64 1<<1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("lsh_uint64 1<<4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_1_ssa(4294967296); got != 8589934592 {
+ fmt.Printf("lsh_uint64 4294967296<<1 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("lsh_uint64 1<<18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_1_ssa(18446744073709551615); got != 18446744073709551614 {
+ fmt.Printf("lsh_uint64 18446744073709551615<<1 = %d, wanted 18446744073709551614\n", got)
+ failed = true
+ }
+
+ if got := lsh_4294967296_uint64_ssa(0); got != 4294967296 {
+ fmt.Printf("lsh_uint64 4294967296<<0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_4294967296_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint64 0<<4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_4294967296_uint64_ssa(1); got != 8589934592 {
+ fmt.Printf("lsh_uint64 4294967296<<1 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_4294967296_ssa(1); got != 0 {
+ fmt.Printf("lsh_uint64 1<<4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_4294967296_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("lsh_uint64 4294967296<<4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_4294967296_ssa(4294967296); got != 0 {
+ fmt.Printf("lsh_uint64 4294967296<<4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_4294967296_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("lsh_uint64 4294967296<<18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_4294967296_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("lsh_uint64 18446744073709551615<<4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_18446744073709551615_uint64_ssa(0); got != 18446744073709551615 {
+ fmt.Printf("lsh_uint64 18446744073709551615<<0 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_18446744073709551615_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint64 0<<18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_18446744073709551615_uint64_ssa(1); got != 18446744073709551614 {
+ fmt.Printf("lsh_uint64 18446744073709551615<<1 = %d, wanted 18446744073709551614\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_18446744073709551615_ssa(1); got != 0 {
+ fmt.Printf("lsh_uint64 1<<18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_18446744073709551615_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("lsh_uint64 18446744073709551615<<4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_18446744073709551615_ssa(4294967296); got != 0 {
+ fmt.Printf("lsh_uint64 4294967296<<18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_18446744073709551615_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("lsh_uint64 18446744073709551615<<18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint64_18446744073709551615_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("lsh_uint64 18446744073709551615<<18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint64_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint64 0>>0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_0_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint64 0>>0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint64_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint64 0>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_0_ssa(1); got != 1 {
+ fmt.Printf("rsh_uint64 1>>0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("rsh_uint64 0>>4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_0_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("rsh_uint64 4294967296>>0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("rsh_uint64 0>>18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_0_ssa(18446744073709551615); got != 18446744073709551615 {
+ fmt.Printf("rsh_uint64 18446744073709551615>>0 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint64_ssa(0); got != 1 {
+ fmt.Printf("rsh_uint64 1>>0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_1_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint64 0>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint64_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint64 1>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_1_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint64 1>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("rsh_uint64 1>>4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_1_ssa(4294967296); got != 2147483648 {
+ fmt.Printf("rsh_uint64 4294967296>>1 = %d, wanted 2147483648\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("rsh_uint64 1>>18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_1_ssa(18446744073709551615); got != 9223372036854775807 {
+ fmt.Printf("rsh_uint64 18446744073709551615>>1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := rsh_4294967296_uint64_ssa(0); got != 4294967296 {
+ fmt.Printf("rsh_uint64 4294967296>>0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_4294967296_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint64 0>>4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_4294967296_uint64_ssa(1); got != 2147483648 {
+ fmt.Printf("rsh_uint64 4294967296>>1 = %d, wanted 2147483648\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_4294967296_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint64 1>>4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_4294967296_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("rsh_uint64 4294967296>>4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_4294967296_ssa(4294967296); got != 0 {
+ fmt.Printf("rsh_uint64 4294967296>>4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_4294967296_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("rsh_uint64 4294967296>>18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_4294967296_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("rsh_uint64 18446744073709551615>>4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_18446744073709551615_uint64_ssa(0); got != 18446744073709551615 {
+ fmt.Printf("rsh_uint64 18446744073709551615>>0 = %d, wanted 18446744073709551615\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_18446744073709551615_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint64 0>>18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_18446744073709551615_uint64_ssa(1); got != 9223372036854775807 {
+ fmt.Printf("rsh_uint64 18446744073709551615>>1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_18446744073709551615_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint64 1>>18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_18446744073709551615_uint64_ssa(4294967296); got != 0 {
+ fmt.Printf("rsh_uint64 18446744073709551615>>4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_18446744073709551615_ssa(4294967296); got != 0 {
+ fmt.Printf("rsh_uint64 4294967296>>18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_18446744073709551615_uint64_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("rsh_uint64 18446744073709551615>>18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint64_18446744073709551615_ssa(18446744073709551615); got != 0 {
+ fmt.Printf("rsh_uint64 18446744073709551615>>18446744073709551615 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775808_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("add_int64 -9223372036854775808+-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775808_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("add_int64 -9223372036854775808+-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775808_int64_ssa(-9223372036854775807); got != 1 {
+ fmt.Printf("add_int64 -9223372036854775808+-9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775808_ssa(-9223372036854775807); got != 1 {
+ fmt.Printf("add_int64 -9223372036854775807+-9223372036854775808 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775808_int64_ssa(-4294967296); got != 9223372032559808512 {
+ fmt.Printf("add_int64 -9223372036854775808+-4294967296 = %d, wanted 9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775808_ssa(-4294967296); got != 9223372032559808512 {
+ fmt.Printf("add_int64 -4294967296+-9223372036854775808 = %d, wanted 9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775808_int64_ssa(-1); got != 9223372036854775807 {
+ fmt.Printf("add_int64 -9223372036854775808+-1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775808_ssa(-1); got != 9223372036854775807 {
+ fmt.Printf("add_int64 -1+-9223372036854775808 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775808_int64_ssa(0); got != -9223372036854775808 {
+ fmt.Printf("add_int64 -9223372036854775808+0 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775808_ssa(0); got != -9223372036854775808 {
+ fmt.Printf("add_int64 0+-9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775808_int64_ssa(1); got != -9223372036854775807 {
+ fmt.Printf("add_int64 -9223372036854775808+1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775808_ssa(1); got != -9223372036854775807 {
+ fmt.Printf("add_int64 1+-9223372036854775808 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775808_int64_ssa(4294967296); got != -9223372032559808512 {
+ fmt.Printf("add_int64 -9223372036854775808+4294967296 = %d, wanted -9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775808_ssa(4294967296); got != -9223372032559808512 {
+ fmt.Printf("add_int64 4294967296+-9223372036854775808 = %d, wanted -9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775808_int64_ssa(9223372036854775806); got != -2 {
+ fmt.Printf("add_int64 -9223372036854775808+9223372036854775806 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775808_ssa(9223372036854775806); got != -2 {
+ fmt.Printf("add_int64 9223372036854775806+-9223372036854775808 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775808_int64_ssa(9223372036854775807); got != -1 {
+ fmt.Printf("add_int64 -9223372036854775808+9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775808_ssa(9223372036854775807); got != -1 {
+ fmt.Printf("add_int64 9223372036854775807+-9223372036854775808 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775807_int64_ssa(-9223372036854775808); got != 1 {
+ fmt.Printf("add_int64 -9223372036854775807+-9223372036854775808 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775807_ssa(-9223372036854775808); got != 1 {
+ fmt.Printf("add_int64 -9223372036854775808+-9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775807_int64_ssa(-9223372036854775807); got != 2 {
+ fmt.Printf("add_int64 -9223372036854775807+-9223372036854775807 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775807_ssa(-9223372036854775807); got != 2 {
+ fmt.Printf("add_int64 -9223372036854775807+-9223372036854775807 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775807_int64_ssa(-4294967296); got != 9223372032559808513 {
+ fmt.Printf("add_int64 -9223372036854775807+-4294967296 = %d, wanted 9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775807_ssa(-4294967296); got != 9223372032559808513 {
+ fmt.Printf("add_int64 -4294967296+-9223372036854775807 = %d, wanted 9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775807_int64_ssa(-1); got != -9223372036854775808 {
+ fmt.Printf("add_int64 -9223372036854775807+-1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775807_ssa(-1); got != -9223372036854775808 {
+ fmt.Printf("add_int64 -1+-9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775807_int64_ssa(0); got != -9223372036854775807 {
+ fmt.Printf("add_int64 -9223372036854775807+0 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775807_ssa(0); got != -9223372036854775807 {
+ fmt.Printf("add_int64 0+-9223372036854775807 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775807_int64_ssa(1); got != -9223372036854775806 {
+ fmt.Printf("add_int64 -9223372036854775807+1 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775807_ssa(1); got != -9223372036854775806 {
+ fmt.Printf("add_int64 1+-9223372036854775807 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775807_int64_ssa(4294967296); got != -9223372032559808511 {
+ fmt.Printf("add_int64 -9223372036854775807+4294967296 = %d, wanted -9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775807_ssa(4294967296); got != -9223372032559808511 {
+ fmt.Printf("add_int64 4294967296+-9223372036854775807 = %d, wanted -9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775807_int64_ssa(9223372036854775806); got != -1 {
+ fmt.Printf("add_int64 -9223372036854775807+9223372036854775806 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775807_ssa(9223372036854775806); got != -1 {
+ fmt.Printf("add_int64 9223372036854775806+-9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg9223372036854775807_int64_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("add_int64 -9223372036854775807+9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg9223372036854775807_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("add_int64 9223372036854775807+-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg4294967296_int64_ssa(-9223372036854775808); got != 9223372032559808512 {
+ fmt.Printf("add_int64 -4294967296+-9223372036854775808 = %d, wanted 9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg4294967296_ssa(-9223372036854775808); got != 9223372032559808512 {
+ fmt.Printf("add_int64 -9223372036854775808+-4294967296 = %d, wanted 9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := add_Neg4294967296_int64_ssa(-9223372036854775807); got != 9223372032559808513 {
+ fmt.Printf("add_int64 -4294967296+-9223372036854775807 = %d, wanted 9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg4294967296_ssa(-9223372036854775807); got != 9223372032559808513 {
+ fmt.Printf("add_int64 -9223372036854775807+-4294967296 = %d, wanted 9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := add_Neg4294967296_int64_ssa(-4294967296); got != -8589934592 {
+ fmt.Printf("add_int64 -4294967296+-4294967296 = %d, wanted -8589934592\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg4294967296_ssa(-4294967296); got != -8589934592 {
+ fmt.Printf("add_int64 -4294967296+-4294967296 = %d, wanted -8589934592\n", got)
+ failed = true
+ }
+
+ if got := add_Neg4294967296_int64_ssa(-1); got != -4294967297 {
+ fmt.Printf("add_int64 -4294967296+-1 = %d, wanted -4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg4294967296_ssa(-1); got != -4294967297 {
+ fmt.Printf("add_int64 -1+-4294967296 = %d, wanted -4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_Neg4294967296_int64_ssa(0); got != -4294967296 {
+ fmt.Printf("add_int64 -4294967296+0 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg4294967296_ssa(0); got != -4294967296 {
+ fmt.Printf("add_int64 0+-4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_Neg4294967296_int64_ssa(1); got != -4294967295 {
+ fmt.Printf("add_int64 -4294967296+1 = %d, wanted -4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg4294967296_ssa(1); got != -4294967295 {
+ fmt.Printf("add_int64 1+-4294967296 = %d, wanted -4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_Neg4294967296_int64_ssa(4294967296); got != 0 {
+ fmt.Printf("add_int64 -4294967296+4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg4294967296_ssa(4294967296); got != 0 {
+ fmt.Printf("add_int64 4294967296+-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg4294967296_int64_ssa(9223372036854775806); got != 9223372032559808510 {
+ fmt.Printf("add_int64 -4294967296+9223372036854775806 = %d, wanted 9223372032559808510\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg4294967296_ssa(9223372036854775806); got != 9223372032559808510 {
+ fmt.Printf("add_int64 9223372036854775806+-4294967296 = %d, wanted 9223372032559808510\n", got)
+ failed = true
+ }
+
+ if got := add_Neg4294967296_int64_ssa(9223372036854775807); got != 9223372032559808511 {
+ fmt.Printf("add_int64 -4294967296+9223372036854775807 = %d, wanted 9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg4294967296_ssa(9223372036854775807); got != 9223372032559808511 {
+ fmt.Printf("add_int64 9223372036854775807+-4294967296 = %d, wanted 9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int64_ssa(-9223372036854775808); got != 9223372036854775807 {
+ fmt.Printf("add_int64 -1+-9223372036854775808 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg1_ssa(-9223372036854775808); got != 9223372036854775807 {
+ fmt.Printf("add_int64 -9223372036854775808+-1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int64_ssa(-9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("add_int64 -1+-9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg1_ssa(-9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("add_int64 -9223372036854775807+-1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int64_ssa(-4294967296); got != -4294967297 {
+ fmt.Printf("add_int64 -1+-4294967296 = %d, wanted -4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg1_ssa(-4294967296); got != -4294967297 {
+ fmt.Printf("add_int64 -4294967296+-1 = %d, wanted -4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int64_ssa(-1); got != -2 {
+ fmt.Printf("add_int64 -1+-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg1_ssa(-1); got != -2 {
+ fmt.Printf("add_int64 -1+-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int64_ssa(0); got != -1 {
+ fmt.Printf("add_int64 -1+0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg1_ssa(0); got != -1 {
+ fmt.Printf("add_int64 0+-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int64_ssa(1); got != 0 {
+ fmt.Printf("add_int64 -1+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg1_ssa(1); got != 0 {
+ fmt.Printf("add_int64 1+-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int64_ssa(4294967296); got != 4294967295 {
+ fmt.Printf("add_int64 -1+4294967296 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg1_ssa(4294967296); got != 4294967295 {
+ fmt.Printf("add_int64 4294967296+-1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int64_ssa(9223372036854775806); got != 9223372036854775805 {
+ fmt.Printf("add_int64 -1+9223372036854775806 = %d, wanted 9223372036854775805\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg1_ssa(9223372036854775806); got != 9223372036854775805 {
+ fmt.Printf("add_int64 9223372036854775806+-1 = %d, wanted 9223372036854775805\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int64_ssa(9223372036854775807); got != 9223372036854775806 {
+ fmt.Printf("add_int64 -1+9223372036854775807 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_int64_Neg1_ssa(9223372036854775807); got != 9223372036854775806 {
+ fmt.Printf("add_int64 9223372036854775807+-1 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_0_int64_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("add_int64 0+-9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_int64_0_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("add_int64 -9223372036854775808+0 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_0_int64_ssa(-9223372036854775807); got != -9223372036854775807 {
+ fmt.Printf("add_int64 0+-9223372036854775807 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_int64_0_ssa(-9223372036854775807); got != -9223372036854775807 {
+ fmt.Printf("add_int64 -9223372036854775807+0 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_0_int64_ssa(-4294967296); got != -4294967296 {
+ fmt.Printf("add_int64 0+-4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_int64_0_ssa(-4294967296); got != -4294967296 {
+ fmt.Printf("add_int64 -4294967296+0 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_0_int64_ssa(-1); got != -1 {
+ fmt.Printf("add_int64 0+-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int64_0_ssa(-1); got != -1 {
+ fmt.Printf("add_int64 -1+0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_0_int64_ssa(0); got != 0 {
+ fmt.Printf("add_int64 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int64_0_ssa(0); got != 0 {
+ fmt.Printf("add_int64 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_0_int64_ssa(1); got != 1 {
+ fmt.Printf("add_int64 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int64_0_ssa(1); got != 1 {
+ fmt.Printf("add_int64 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_0_int64_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("add_int64 0+4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_int64_0_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("add_int64 4294967296+0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_0_int64_ssa(9223372036854775806); got != 9223372036854775806 {
+ fmt.Printf("add_int64 0+9223372036854775806 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_int64_0_ssa(9223372036854775806); got != 9223372036854775806 {
+ fmt.Printf("add_int64 9223372036854775806+0 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_0_int64_ssa(9223372036854775807); got != 9223372036854775807 {
+ fmt.Printf("add_int64 0+9223372036854775807 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_int64_0_ssa(9223372036854775807); got != 9223372036854775807 {
+ fmt.Printf("add_int64 9223372036854775807+0 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_1_int64_ssa(-9223372036854775808); got != -9223372036854775807 {
+ fmt.Printf("add_int64 1+-9223372036854775808 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_int64_1_ssa(-9223372036854775808); got != -9223372036854775807 {
+ fmt.Printf("add_int64 -9223372036854775808+1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_1_int64_ssa(-9223372036854775807); got != -9223372036854775806 {
+ fmt.Printf("add_int64 1+-9223372036854775807 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_int64_1_ssa(-9223372036854775807); got != -9223372036854775806 {
+ fmt.Printf("add_int64 -9223372036854775807+1 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_1_int64_ssa(-4294967296); got != -4294967295 {
+ fmt.Printf("add_int64 1+-4294967296 = %d, wanted -4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_int64_1_ssa(-4294967296); got != -4294967295 {
+ fmt.Printf("add_int64 -4294967296+1 = %d, wanted -4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_1_int64_ssa(-1); got != 0 {
+ fmt.Printf("add_int64 1+-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int64_1_ssa(-1); got != 0 {
+ fmt.Printf("add_int64 -1+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_1_int64_ssa(0); got != 1 {
+ fmt.Printf("add_int64 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int64_1_ssa(0); got != 1 {
+ fmt.Printf("add_int64 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_1_int64_ssa(1); got != 2 {
+ fmt.Printf("add_int64 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_int64_1_ssa(1); got != 2 {
+ fmt.Printf("add_int64 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_1_int64_ssa(4294967296); got != 4294967297 {
+ fmt.Printf("add_int64 1+4294967296 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_int64_1_ssa(4294967296); got != 4294967297 {
+ fmt.Printf("add_int64 4294967296+1 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_1_int64_ssa(9223372036854775806); got != 9223372036854775807 {
+ fmt.Printf("add_int64 1+9223372036854775806 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_int64_1_ssa(9223372036854775806); got != 9223372036854775807 {
+ fmt.Printf("add_int64 9223372036854775806+1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_1_int64_ssa(9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("add_int64 1+9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_int64_1_ssa(9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("add_int64 9223372036854775807+1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_int64_ssa(-9223372036854775808); got != -9223372032559808512 {
+ fmt.Printf("add_int64 4294967296+-9223372036854775808 = %d, wanted -9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := add_int64_4294967296_ssa(-9223372036854775808); got != -9223372032559808512 {
+ fmt.Printf("add_int64 -9223372036854775808+4294967296 = %d, wanted -9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_int64_ssa(-9223372036854775807); got != -9223372032559808511 {
+ fmt.Printf("add_int64 4294967296+-9223372036854775807 = %d, wanted -9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := add_int64_4294967296_ssa(-9223372036854775807); got != -9223372032559808511 {
+ fmt.Printf("add_int64 -9223372036854775807+4294967296 = %d, wanted -9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_int64_ssa(-4294967296); got != 0 {
+ fmt.Printf("add_int64 4294967296+-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int64_4294967296_ssa(-4294967296); got != 0 {
+ fmt.Printf("add_int64 -4294967296+4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_int64_ssa(-1); got != 4294967295 {
+ fmt.Printf("add_int64 4294967296+-1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_int64_4294967296_ssa(-1); got != 4294967295 {
+ fmt.Printf("add_int64 -1+4294967296 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_int64_ssa(0); got != 4294967296 {
+ fmt.Printf("add_int64 4294967296+0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_int64_4294967296_ssa(0); got != 4294967296 {
+ fmt.Printf("add_int64 0+4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_int64_ssa(1); got != 4294967297 {
+ fmt.Printf("add_int64 4294967296+1 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_int64_4294967296_ssa(1); got != 4294967297 {
+ fmt.Printf("add_int64 1+4294967296 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_int64_ssa(4294967296); got != 8589934592 {
+ fmt.Printf("add_int64 4294967296+4294967296 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := add_int64_4294967296_ssa(4294967296); got != 8589934592 {
+ fmt.Printf("add_int64 4294967296+4294967296 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_int64_ssa(9223372036854775806); got != -9223372032559808514 {
+ fmt.Printf("add_int64 4294967296+9223372036854775806 = %d, wanted -9223372032559808514\n", got)
+ failed = true
+ }
+
+ if got := add_int64_4294967296_ssa(9223372036854775806); got != -9223372032559808514 {
+ fmt.Printf("add_int64 9223372036854775806+4294967296 = %d, wanted -9223372032559808514\n", got)
+ failed = true
+ }
+
+ if got := add_4294967296_int64_ssa(9223372036854775807); got != -9223372032559808513 {
+ fmt.Printf("add_int64 4294967296+9223372036854775807 = %d, wanted -9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := add_int64_4294967296_ssa(9223372036854775807); got != -9223372032559808513 {
+ fmt.Printf("add_int64 9223372036854775807+4294967296 = %d, wanted -9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775806_int64_ssa(-9223372036854775808); got != -2 {
+ fmt.Printf("add_int64 9223372036854775806+-9223372036854775808 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775806_ssa(-9223372036854775808); got != -2 {
+ fmt.Printf("add_int64 -9223372036854775808+9223372036854775806 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775806_int64_ssa(-9223372036854775807); got != -1 {
+ fmt.Printf("add_int64 9223372036854775806+-9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775806_ssa(-9223372036854775807); got != -1 {
+ fmt.Printf("add_int64 -9223372036854775807+9223372036854775806 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775806_int64_ssa(-4294967296); got != 9223372032559808510 {
+ fmt.Printf("add_int64 9223372036854775806+-4294967296 = %d, wanted 9223372032559808510\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775806_ssa(-4294967296); got != 9223372032559808510 {
+ fmt.Printf("add_int64 -4294967296+9223372036854775806 = %d, wanted 9223372032559808510\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775806_int64_ssa(-1); got != 9223372036854775805 {
+ fmt.Printf("add_int64 9223372036854775806+-1 = %d, wanted 9223372036854775805\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775806_ssa(-1); got != 9223372036854775805 {
+ fmt.Printf("add_int64 -1+9223372036854775806 = %d, wanted 9223372036854775805\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775806_int64_ssa(0); got != 9223372036854775806 {
+ fmt.Printf("add_int64 9223372036854775806+0 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775806_ssa(0); got != 9223372036854775806 {
+ fmt.Printf("add_int64 0+9223372036854775806 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775806_int64_ssa(1); got != 9223372036854775807 {
+ fmt.Printf("add_int64 9223372036854775806+1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775806_ssa(1); got != 9223372036854775807 {
+ fmt.Printf("add_int64 1+9223372036854775806 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775806_int64_ssa(4294967296); got != -9223372032559808514 {
+ fmt.Printf("add_int64 9223372036854775806+4294967296 = %d, wanted -9223372032559808514\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775806_ssa(4294967296); got != -9223372032559808514 {
+ fmt.Printf("add_int64 4294967296+9223372036854775806 = %d, wanted -9223372032559808514\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775806_int64_ssa(9223372036854775806); got != -4 {
+ fmt.Printf("add_int64 9223372036854775806+9223372036854775806 = %d, wanted -4\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775806_ssa(9223372036854775806); got != -4 {
+ fmt.Printf("add_int64 9223372036854775806+9223372036854775806 = %d, wanted -4\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775806_int64_ssa(9223372036854775807); got != -3 {
+ fmt.Printf("add_int64 9223372036854775806+9223372036854775807 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775806_ssa(9223372036854775807); got != -3 {
+ fmt.Printf("add_int64 9223372036854775807+9223372036854775806 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775807_int64_ssa(-9223372036854775808); got != -1 {
+ fmt.Printf("add_int64 9223372036854775807+-9223372036854775808 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775807_ssa(-9223372036854775808); got != -1 {
+ fmt.Printf("add_int64 -9223372036854775808+9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775807_int64_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("add_int64 9223372036854775807+-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775807_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("add_int64 -9223372036854775807+9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775807_int64_ssa(-4294967296); got != 9223372032559808511 {
+ fmt.Printf("add_int64 9223372036854775807+-4294967296 = %d, wanted 9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775807_ssa(-4294967296); got != 9223372032559808511 {
+ fmt.Printf("add_int64 -4294967296+9223372036854775807 = %d, wanted 9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775807_int64_ssa(-1); got != 9223372036854775806 {
+ fmt.Printf("add_int64 9223372036854775807+-1 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775807_ssa(-1); got != 9223372036854775806 {
+ fmt.Printf("add_int64 -1+9223372036854775807 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775807_int64_ssa(0); got != 9223372036854775807 {
+ fmt.Printf("add_int64 9223372036854775807+0 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775807_ssa(0); got != 9223372036854775807 {
+ fmt.Printf("add_int64 0+9223372036854775807 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775807_int64_ssa(1); got != -9223372036854775808 {
+ fmt.Printf("add_int64 9223372036854775807+1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775807_ssa(1); got != -9223372036854775808 {
+ fmt.Printf("add_int64 1+9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775807_int64_ssa(4294967296); got != -9223372032559808513 {
+ fmt.Printf("add_int64 9223372036854775807+4294967296 = %d, wanted -9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775807_ssa(4294967296); got != -9223372032559808513 {
+ fmt.Printf("add_int64 4294967296+9223372036854775807 = %d, wanted -9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775807_int64_ssa(9223372036854775806); got != -3 {
+ fmt.Printf("add_int64 9223372036854775807+9223372036854775806 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775807_ssa(9223372036854775806); got != -3 {
+ fmt.Printf("add_int64 9223372036854775806+9223372036854775807 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_9223372036854775807_int64_ssa(9223372036854775807); got != -2 {
+ fmt.Printf("add_int64 9223372036854775807+9223372036854775807 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int64_9223372036854775807_ssa(9223372036854775807); got != -2 {
+ fmt.Printf("add_int64 9223372036854775807+9223372036854775807 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775808_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("sub_int64 -9223372036854775808--9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775808_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("sub_int64 -9223372036854775808--9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775808_int64_ssa(-9223372036854775807); got != -1 {
+ fmt.Printf("sub_int64 -9223372036854775808--9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775808_ssa(-9223372036854775807); got != 1 {
+ fmt.Printf("sub_int64 -9223372036854775807--9223372036854775808 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775808_int64_ssa(-4294967296); got != -9223372032559808512 {
+ fmt.Printf("sub_int64 -9223372036854775808--4294967296 = %d, wanted -9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775808_ssa(-4294967296); got != 9223372032559808512 {
+ fmt.Printf("sub_int64 -4294967296--9223372036854775808 = %d, wanted 9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775808_int64_ssa(-1); got != -9223372036854775807 {
+ fmt.Printf("sub_int64 -9223372036854775808--1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775808_ssa(-1); got != 9223372036854775807 {
+ fmt.Printf("sub_int64 -1--9223372036854775808 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775808_int64_ssa(0); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 -9223372036854775808-0 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775808_ssa(0); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 0--9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775808_int64_ssa(1); got != 9223372036854775807 {
+ fmt.Printf("sub_int64 -9223372036854775808-1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775808_ssa(1); got != -9223372036854775807 {
+ fmt.Printf("sub_int64 1--9223372036854775808 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775808_int64_ssa(4294967296); got != 9223372032559808512 {
+ fmt.Printf("sub_int64 -9223372036854775808-4294967296 = %d, wanted 9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775808_ssa(4294967296); got != -9223372032559808512 {
+ fmt.Printf("sub_int64 4294967296--9223372036854775808 = %d, wanted -9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775808_int64_ssa(9223372036854775806); got != 2 {
+ fmt.Printf("sub_int64 -9223372036854775808-9223372036854775806 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775808_ssa(9223372036854775806); got != -2 {
+ fmt.Printf("sub_int64 9223372036854775806--9223372036854775808 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775808_int64_ssa(9223372036854775807); got != 1 {
+ fmt.Printf("sub_int64 -9223372036854775808-9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775808_ssa(9223372036854775807); got != -1 {
+ fmt.Printf("sub_int64 9223372036854775807--9223372036854775808 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775807_int64_ssa(-9223372036854775808); got != 1 {
+ fmt.Printf("sub_int64 -9223372036854775807--9223372036854775808 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775807_ssa(-9223372036854775808); got != -1 {
+ fmt.Printf("sub_int64 -9223372036854775808--9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775807_int64_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("sub_int64 -9223372036854775807--9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775807_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("sub_int64 -9223372036854775807--9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775807_int64_ssa(-4294967296); got != -9223372032559808511 {
+ fmt.Printf("sub_int64 -9223372036854775807--4294967296 = %d, wanted -9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775807_ssa(-4294967296); got != 9223372032559808511 {
+ fmt.Printf("sub_int64 -4294967296--9223372036854775807 = %d, wanted 9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775807_int64_ssa(-1); got != -9223372036854775806 {
+ fmt.Printf("sub_int64 -9223372036854775807--1 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775807_ssa(-1); got != 9223372036854775806 {
+ fmt.Printf("sub_int64 -1--9223372036854775807 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775807_int64_ssa(0); got != -9223372036854775807 {
+ fmt.Printf("sub_int64 -9223372036854775807-0 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775807_ssa(0); got != 9223372036854775807 {
+ fmt.Printf("sub_int64 0--9223372036854775807 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775807_int64_ssa(1); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 -9223372036854775807-1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775807_ssa(1); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 1--9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775807_int64_ssa(4294967296); got != 9223372032559808513 {
+ fmt.Printf("sub_int64 -9223372036854775807-4294967296 = %d, wanted 9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775807_ssa(4294967296); got != -9223372032559808513 {
+ fmt.Printf("sub_int64 4294967296--9223372036854775807 = %d, wanted -9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775807_int64_ssa(9223372036854775806); got != 3 {
+ fmt.Printf("sub_int64 -9223372036854775807-9223372036854775806 = %d, wanted 3\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775807_ssa(9223372036854775806); got != -3 {
+ fmt.Printf("sub_int64 9223372036854775806--9223372036854775807 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg9223372036854775807_int64_ssa(9223372036854775807); got != 2 {
+ fmt.Printf("sub_int64 -9223372036854775807-9223372036854775807 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg9223372036854775807_ssa(9223372036854775807); got != -2 {
+ fmt.Printf("sub_int64 9223372036854775807--9223372036854775807 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg4294967296_int64_ssa(-9223372036854775808); got != 9223372032559808512 {
+ fmt.Printf("sub_int64 -4294967296--9223372036854775808 = %d, wanted 9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg4294967296_ssa(-9223372036854775808); got != -9223372032559808512 {
+ fmt.Printf("sub_int64 -9223372036854775808--4294967296 = %d, wanted -9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg4294967296_int64_ssa(-9223372036854775807); got != 9223372032559808511 {
+ fmt.Printf("sub_int64 -4294967296--9223372036854775807 = %d, wanted 9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg4294967296_ssa(-9223372036854775807); got != -9223372032559808511 {
+ fmt.Printf("sub_int64 -9223372036854775807--4294967296 = %d, wanted -9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg4294967296_int64_ssa(-4294967296); got != 0 {
+ fmt.Printf("sub_int64 -4294967296--4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg4294967296_ssa(-4294967296); got != 0 {
+ fmt.Printf("sub_int64 -4294967296--4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg4294967296_int64_ssa(-1); got != -4294967295 {
+ fmt.Printf("sub_int64 -4294967296--1 = %d, wanted -4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg4294967296_ssa(-1); got != 4294967295 {
+ fmt.Printf("sub_int64 -1--4294967296 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg4294967296_int64_ssa(0); got != -4294967296 {
+ fmt.Printf("sub_int64 -4294967296-0 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg4294967296_ssa(0); got != 4294967296 {
+ fmt.Printf("sub_int64 0--4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg4294967296_int64_ssa(1); got != -4294967297 {
+ fmt.Printf("sub_int64 -4294967296-1 = %d, wanted -4294967297\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg4294967296_ssa(1); got != 4294967297 {
+ fmt.Printf("sub_int64 1--4294967296 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg4294967296_int64_ssa(4294967296); got != -8589934592 {
+ fmt.Printf("sub_int64 -4294967296-4294967296 = %d, wanted -8589934592\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg4294967296_ssa(4294967296); got != 8589934592 {
+ fmt.Printf("sub_int64 4294967296--4294967296 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg4294967296_int64_ssa(9223372036854775806); got != 9223372032559808514 {
+ fmt.Printf("sub_int64 -4294967296-9223372036854775806 = %d, wanted 9223372032559808514\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg4294967296_ssa(9223372036854775806); got != -9223372032559808514 {
+ fmt.Printf("sub_int64 9223372036854775806--4294967296 = %d, wanted -9223372032559808514\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg4294967296_int64_ssa(9223372036854775807); got != 9223372032559808513 {
+ fmt.Printf("sub_int64 -4294967296-9223372036854775807 = %d, wanted 9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg4294967296_ssa(9223372036854775807); got != -9223372032559808513 {
+ fmt.Printf("sub_int64 9223372036854775807--4294967296 = %d, wanted -9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int64_ssa(-9223372036854775808); got != 9223372036854775807 {
+ fmt.Printf("sub_int64 -1--9223372036854775808 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg1_ssa(-9223372036854775808); got != -9223372036854775807 {
+ fmt.Printf("sub_int64 -9223372036854775808--1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int64_ssa(-9223372036854775807); got != 9223372036854775806 {
+ fmt.Printf("sub_int64 -1--9223372036854775807 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg1_ssa(-9223372036854775807); got != -9223372036854775806 {
+ fmt.Printf("sub_int64 -9223372036854775807--1 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int64_ssa(-4294967296); got != 4294967295 {
+ fmt.Printf("sub_int64 -1--4294967296 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg1_ssa(-4294967296); got != -4294967295 {
+ fmt.Printf("sub_int64 -4294967296--1 = %d, wanted -4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int64_ssa(-1); got != 0 {
+ fmt.Printf("sub_int64 -1--1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg1_ssa(-1); got != 0 {
+ fmt.Printf("sub_int64 -1--1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int64_ssa(0); got != -1 {
+ fmt.Printf("sub_int64 -1-0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg1_ssa(0); got != 1 {
+ fmt.Printf("sub_int64 0--1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int64_ssa(1); got != -2 {
+ fmt.Printf("sub_int64 -1-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg1_ssa(1); got != 2 {
+ fmt.Printf("sub_int64 1--1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int64_ssa(4294967296); got != -4294967297 {
+ fmt.Printf("sub_int64 -1-4294967296 = %d, wanted -4294967297\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg1_ssa(4294967296); got != 4294967297 {
+ fmt.Printf("sub_int64 4294967296--1 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int64_ssa(9223372036854775806); got != -9223372036854775807 {
+ fmt.Printf("sub_int64 -1-9223372036854775806 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg1_ssa(9223372036854775806); got != 9223372036854775807 {
+ fmt.Printf("sub_int64 9223372036854775806--1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int64_ssa(9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 -1-9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_Neg1_ssa(9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 9223372036854775807--1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int64_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 0--9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_0_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 -9223372036854775808-0 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int64_ssa(-9223372036854775807); got != 9223372036854775807 {
+ fmt.Printf("sub_int64 0--9223372036854775807 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_0_ssa(-9223372036854775807); got != -9223372036854775807 {
+ fmt.Printf("sub_int64 -9223372036854775807-0 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int64_ssa(-4294967296); got != 4294967296 {
+ fmt.Printf("sub_int64 0--4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_0_ssa(-4294967296); got != -4294967296 {
+ fmt.Printf("sub_int64 -4294967296-0 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int64_ssa(-1); got != 1 {
+ fmt.Printf("sub_int64 0--1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_0_ssa(-1); got != -1 {
+ fmt.Printf("sub_int64 -1-0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int64_ssa(0); got != 0 {
+ fmt.Printf("sub_int64 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_0_ssa(0); got != 0 {
+ fmt.Printf("sub_int64 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int64_ssa(1); got != -1 {
+ fmt.Printf("sub_int64 0-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_0_ssa(1); got != 1 {
+ fmt.Printf("sub_int64 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int64_ssa(4294967296); got != -4294967296 {
+ fmt.Printf("sub_int64 0-4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_0_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("sub_int64 4294967296-0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int64_ssa(9223372036854775806); got != -9223372036854775806 {
+ fmt.Printf("sub_int64 0-9223372036854775806 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_0_ssa(9223372036854775806); got != 9223372036854775806 {
+ fmt.Printf("sub_int64 9223372036854775806-0 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int64_ssa(9223372036854775807); got != -9223372036854775807 {
+ fmt.Printf("sub_int64 0-9223372036854775807 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_0_ssa(9223372036854775807); got != 9223372036854775807 {
+ fmt.Printf("sub_int64 9223372036854775807-0 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int64_ssa(-9223372036854775808); got != -9223372036854775807 {
+ fmt.Printf("sub_int64 1--9223372036854775808 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_1_ssa(-9223372036854775808); got != 9223372036854775807 {
+ fmt.Printf("sub_int64 -9223372036854775808-1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int64_ssa(-9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 1--9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_1_ssa(-9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 -9223372036854775807-1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int64_ssa(-4294967296); got != 4294967297 {
+ fmt.Printf("sub_int64 1--4294967296 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_1_ssa(-4294967296); got != -4294967297 {
+ fmt.Printf("sub_int64 -4294967296-1 = %d, wanted -4294967297\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int64_ssa(-1); got != 2 {
+ fmt.Printf("sub_int64 1--1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_1_ssa(-1); got != -2 {
+ fmt.Printf("sub_int64 -1-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int64_ssa(0); got != 1 {
+ fmt.Printf("sub_int64 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_1_ssa(0); got != -1 {
+ fmt.Printf("sub_int64 0-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int64_ssa(1); got != 0 {
+ fmt.Printf("sub_int64 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_1_ssa(1); got != 0 {
+ fmt.Printf("sub_int64 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int64_ssa(4294967296); got != -4294967295 {
+ fmt.Printf("sub_int64 1-4294967296 = %d, wanted -4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_1_ssa(4294967296); got != 4294967295 {
+ fmt.Printf("sub_int64 4294967296-1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int64_ssa(9223372036854775806); got != -9223372036854775805 {
+ fmt.Printf("sub_int64 1-9223372036854775806 = %d, wanted -9223372036854775805\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_1_ssa(9223372036854775806); got != 9223372036854775805 {
+ fmt.Printf("sub_int64 9223372036854775806-1 = %d, wanted 9223372036854775805\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int64_ssa(9223372036854775807); got != -9223372036854775806 {
+ fmt.Printf("sub_int64 1-9223372036854775807 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_1_ssa(9223372036854775807); got != 9223372036854775806 {
+ fmt.Printf("sub_int64 9223372036854775807-1 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_int64_ssa(-9223372036854775808); got != -9223372032559808512 {
+ fmt.Printf("sub_int64 4294967296--9223372036854775808 = %d, wanted -9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_4294967296_ssa(-9223372036854775808); got != 9223372032559808512 {
+ fmt.Printf("sub_int64 -9223372036854775808-4294967296 = %d, wanted 9223372032559808512\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_int64_ssa(-9223372036854775807); got != -9223372032559808513 {
+ fmt.Printf("sub_int64 4294967296--9223372036854775807 = %d, wanted -9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_4294967296_ssa(-9223372036854775807); got != 9223372032559808513 {
+ fmt.Printf("sub_int64 -9223372036854775807-4294967296 = %d, wanted 9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_int64_ssa(-4294967296); got != 8589934592 {
+ fmt.Printf("sub_int64 4294967296--4294967296 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_4294967296_ssa(-4294967296); got != -8589934592 {
+ fmt.Printf("sub_int64 -4294967296-4294967296 = %d, wanted -8589934592\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_int64_ssa(-1); got != 4294967297 {
+ fmt.Printf("sub_int64 4294967296--1 = %d, wanted 4294967297\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_4294967296_ssa(-1); got != -4294967297 {
+ fmt.Printf("sub_int64 -1-4294967296 = %d, wanted -4294967297\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_int64_ssa(0); got != 4294967296 {
+ fmt.Printf("sub_int64 4294967296-0 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_4294967296_ssa(0); got != -4294967296 {
+ fmt.Printf("sub_int64 0-4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_int64_ssa(1); got != 4294967295 {
+ fmt.Printf("sub_int64 4294967296-1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_4294967296_ssa(1); got != -4294967295 {
+ fmt.Printf("sub_int64 1-4294967296 = %d, wanted -4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_int64_ssa(4294967296); got != 0 {
+ fmt.Printf("sub_int64 4294967296-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_4294967296_ssa(4294967296); got != 0 {
+ fmt.Printf("sub_int64 4294967296-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_int64_ssa(9223372036854775806); got != -9223372032559808510 {
+ fmt.Printf("sub_int64 4294967296-9223372036854775806 = %d, wanted -9223372032559808510\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_4294967296_ssa(9223372036854775806); got != 9223372032559808510 {
+ fmt.Printf("sub_int64 9223372036854775806-4294967296 = %d, wanted 9223372032559808510\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967296_int64_ssa(9223372036854775807); got != -9223372032559808511 {
+ fmt.Printf("sub_int64 4294967296-9223372036854775807 = %d, wanted -9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_4294967296_ssa(9223372036854775807); got != 9223372032559808511 {
+ fmt.Printf("sub_int64 9223372036854775807-4294967296 = %d, wanted 9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775806_int64_ssa(-9223372036854775808); got != -2 {
+ fmt.Printf("sub_int64 9223372036854775806--9223372036854775808 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775806_ssa(-9223372036854775808); got != 2 {
+ fmt.Printf("sub_int64 -9223372036854775808-9223372036854775806 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775806_int64_ssa(-9223372036854775807); got != -3 {
+ fmt.Printf("sub_int64 9223372036854775806--9223372036854775807 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775806_ssa(-9223372036854775807); got != 3 {
+ fmt.Printf("sub_int64 -9223372036854775807-9223372036854775806 = %d, wanted 3\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775806_int64_ssa(-4294967296); got != -9223372032559808514 {
+ fmt.Printf("sub_int64 9223372036854775806--4294967296 = %d, wanted -9223372032559808514\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775806_ssa(-4294967296); got != 9223372032559808514 {
+ fmt.Printf("sub_int64 -4294967296-9223372036854775806 = %d, wanted 9223372032559808514\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775806_int64_ssa(-1); got != 9223372036854775807 {
+ fmt.Printf("sub_int64 9223372036854775806--1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775806_ssa(-1); got != -9223372036854775807 {
+ fmt.Printf("sub_int64 -1-9223372036854775806 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775806_int64_ssa(0); got != 9223372036854775806 {
+ fmt.Printf("sub_int64 9223372036854775806-0 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775806_ssa(0); got != -9223372036854775806 {
+ fmt.Printf("sub_int64 0-9223372036854775806 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775806_int64_ssa(1); got != 9223372036854775805 {
+ fmt.Printf("sub_int64 9223372036854775806-1 = %d, wanted 9223372036854775805\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775806_ssa(1); got != -9223372036854775805 {
+ fmt.Printf("sub_int64 1-9223372036854775806 = %d, wanted -9223372036854775805\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775806_int64_ssa(4294967296); got != 9223372032559808510 {
+ fmt.Printf("sub_int64 9223372036854775806-4294967296 = %d, wanted 9223372032559808510\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775806_ssa(4294967296); got != -9223372032559808510 {
+ fmt.Printf("sub_int64 4294967296-9223372036854775806 = %d, wanted -9223372032559808510\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775806_int64_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("sub_int64 9223372036854775806-9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775806_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("sub_int64 9223372036854775806-9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775806_int64_ssa(9223372036854775807); got != -1 {
+ fmt.Printf("sub_int64 9223372036854775806-9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775806_ssa(9223372036854775807); got != 1 {
+ fmt.Printf("sub_int64 9223372036854775807-9223372036854775806 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775807_int64_ssa(-9223372036854775808); got != -1 {
+ fmt.Printf("sub_int64 9223372036854775807--9223372036854775808 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775807_ssa(-9223372036854775808); got != 1 {
+ fmt.Printf("sub_int64 -9223372036854775808-9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775807_int64_ssa(-9223372036854775807); got != -2 {
+ fmt.Printf("sub_int64 9223372036854775807--9223372036854775807 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775807_ssa(-9223372036854775807); got != 2 {
+ fmt.Printf("sub_int64 -9223372036854775807-9223372036854775807 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775807_int64_ssa(-4294967296); got != -9223372032559808513 {
+ fmt.Printf("sub_int64 9223372036854775807--4294967296 = %d, wanted -9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775807_ssa(-4294967296); got != 9223372032559808513 {
+ fmt.Printf("sub_int64 -4294967296-9223372036854775807 = %d, wanted 9223372032559808513\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775807_int64_ssa(-1); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 9223372036854775807--1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775807_ssa(-1); got != -9223372036854775808 {
+ fmt.Printf("sub_int64 -1-9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775807_int64_ssa(0); got != 9223372036854775807 {
+ fmt.Printf("sub_int64 9223372036854775807-0 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775807_ssa(0); got != -9223372036854775807 {
+ fmt.Printf("sub_int64 0-9223372036854775807 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775807_int64_ssa(1); got != 9223372036854775806 {
+ fmt.Printf("sub_int64 9223372036854775807-1 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775807_ssa(1); got != -9223372036854775806 {
+ fmt.Printf("sub_int64 1-9223372036854775807 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775807_int64_ssa(4294967296); got != 9223372032559808511 {
+ fmt.Printf("sub_int64 9223372036854775807-4294967296 = %d, wanted 9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775807_ssa(4294967296); got != -9223372032559808511 {
+ fmt.Printf("sub_int64 4294967296-9223372036854775807 = %d, wanted -9223372032559808511\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775807_int64_ssa(9223372036854775806); got != 1 {
+ fmt.Printf("sub_int64 9223372036854775807-9223372036854775806 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775807_ssa(9223372036854775806); got != -1 {
+ fmt.Printf("sub_int64 9223372036854775806-9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_9223372036854775807_int64_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("sub_int64 9223372036854775807-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int64_9223372036854775807_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("sub_int64 9223372036854775807-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775808_int64_ssa(-9223372036854775808); got != 1 {
+ fmt.Printf("div_int64 -9223372036854775808/-9223372036854775808 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775808_ssa(-9223372036854775808); got != 1 {
+ fmt.Printf("div_int64 -9223372036854775808/-9223372036854775808 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775808_int64_ssa(-9223372036854775807); got != 1 {
+ fmt.Printf("div_int64 -9223372036854775808/-9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775808_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 -9223372036854775807/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775808_int64_ssa(-4294967296); got != 2147483648 {
+ fmt.Printf("div_int64 -9223372036854775808/-4294967296 = %d, wanted 2147483648\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775808_ssa(-4294967296); got != 0 {
+ fmt.Printf("div_int64 -4294967296/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775808_int64_ssa(-1); got != -9223372036854775808 {
+ fmt.Printf("div_int64 -9223372036854775808/-1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775808_ssa(-1); got != 0 {
+ fmt.Printf("div_int64 -1/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775808_ssa(0); got != 0 {
+ fmt.Printf("div_int64 0/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775808_int64_ssa(1); got != -9223372036854775808 {
+ fmt.Printf("div_int64 -9223372036854775808/1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775808_ssa(1); got != 0 {
+ fmt.Printf("div_int64 1/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775808_int64_ssa(4294967296); got != -2147483648 {
+ fmt.Printf("div_int64 -9223372036854775808/4294967296 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775808_ssa(4294967296); got != 0 {
+ fmt.Printf("div_int64 4294967296/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775808_int64_ssa(9223372036854775806); got != -1 {
+ fmt.Printf("div_int64 -9223372036854775808/9223372036854775806 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775808_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("div_int64 9223372036854775806/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775808_int64_ssa(9223372036854775807); got != -1 {
+ fmt.Printf("div_int64 -9223372036854775808/9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775808_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 9223372036854775807/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775807_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("div_int64 -9223372036854775807/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775807_ssa(-9223372036854775808); got != 1 {
+ fmt.Printf("div_int64 -9223372036854775808/-9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775807_int64_ssa(-9223372036854775807); got != 1 {
+ fmt.Printf("div_int64 -9223372036854775807/-9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775807_ssa(-9223372036854775807); got != 1 {
+ fmt.Printf("div_int64 -9223372036854775807/-9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775807_int64_ssa(-4294967296); got != 2147483647 {
+ fmt.Printf("div_int64 -9223372036854775807/-4294967296 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775807_ssa(-4294967296); got != 0 {
+ fmt.Printf("div_int64 -4294967296/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775807_int64_ssa(-1); got != 9223372036854775807 {
+ fmt.Printf("div_int64 -9223372036854775807/-1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775807_ssa(-1); got != 0 {
+ fmt.Printf("div_int64 -1/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775807_ssa(0); got != 0 {
+ fmt.Printf("div_int64 0/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775807_int64_ssa(1); got != -9223372036854775807 {
+ fmt.Printf("div_int64 -9223372036854775807/1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775807_ssa(1); got != 0 {
+ fmt.Printf("div_int64 1/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775807_int64_ssa(4294967296); got != -2147483647 {
+ fmt.Printf("div_int64 -9223372036854775807/4294967296 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775807_ssa(4294967296); got != 0 {
+ fmt.Printf("div_int64 4294967296/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775807_int64_ssa(9223372036854775806); got != -1 {
+ fmt.Printf("div_int64 -9223372036854775807/9223372036854775806 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775807_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("div_int64 9223372036854775806/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg9223372036854775807_int64_ssa(9223372036854775807); got != -1 {
+ fmt.Printf("div_int64 -9223372036854775807/9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg9223372036854775807_ssa(9223372036854775807); got != -1 {
+ fmt.Printf("div_int64 9223372036854775807/-9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg4294967296_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("div_int64 -4294967296/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg4294967296_ssa(-9223372036854775808); got != 2147483648 {
+ fmt.Printf("div_int64 -9223372036854775808/-4294967296 = %d, wanted 2147483648\n", got)
+ failed = true
+ }
+
+ if got := div_Neg4294967296_int64_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 -4294967296/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg4294967296_ssa(-9223372036854775807); got != 2147483647 {
+ fmt.Printf("div_int64 -9223372036854775807/-4294967296 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_Neg4294967296_int64_ssa(-4294967296); got != 1 {
+ fmt.Printf("div_int64 -4294967296/-4294967296 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg4294967296_ssa(-4294967296); got != 1 {
+ fmt.Printf("div_int64 -4294967296/-4294967296 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg4294967296_int64_ssa(-1); got != 4294967296 {
+ fmt.Printf("div_int64 -4294967296/-1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg4294967296_ssa(-1); got != 0 {
+ fmt.Printf("div_int64 -1/-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg4294967296_ssa(0); got != 0 {
+ fmt.Printf("div_int64 0/-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg4294967296_int64_ssa(1); got != -4294967296 {
+ fmt.Printf("div_int64 -4294967296/1 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg4294967296_ssa(1); got != 0 {
+ fmt.Printf("div_int64 1/-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg4294967296_int64_ssa(4294967296); got != -1 {
+ fmt.Printf("div_int64 -4294967296/4294967296 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg4294967296_ssa(4294967296); got != -1 {
+ fmt.Printf("div_int64 4294967296/-4294967296 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg4294967296_int64_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("div_int64 -4294967296/9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg4294967296_ssa(9223372036854775806); got != -2147483647 {
+ fmt.Printf("div_int64 9223372036854775806/-4294967296 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_Neg4294967296_int64_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 -4294967296/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg4294967296_ssa(9223372036854775807); got != -2147483647 {
+ fmt.Printf("div_int64 9223372036854775807/-4294967296 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("div_int64 -1/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg1_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("div_int64 -9223372036854775808/-1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int64_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 -1/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg1_ssa(-9223372036854775807); got != 9223372036854775807 {
+ fmt.Printf("div_int64 -9223372036854775807/-1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int64_ssa(-4294967296); got != 0 {
+ fmt.Printf("div_int64 -1/-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg1_ssa(-4294967296); got != 4294967296 {
+ fmt.Printf("div_int64 -4294967296/-1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int64_ssa(-1); got != 1 {
+ fmt.Printf("div_int64 -1/-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg1_ssa(-1); got != 1 {
+ fmt.Printf("div_int64 -1/-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg1_ssa(0); got != 0 {
+ fmt.Printf("div_int64 0/-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int64_ssa(1); got != -1 {
+ fmt.Printf("div_int64 -1/1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg1_ssa(1); got != -1 {
+ fmt.Printf("div_int64 1/-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int64_ssa(4294967296); got != 0 {
+ fmt.Printf("div_int64 -1/4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg1_ssa(4294967296); got != -4294967296 {
+ fmt.Printf("div_int64 4294967296/-1 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int64_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("div_int64 -1/9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg1_ssa(9223372036854775806); got != -9223372036854775806 {
+ fmt.Printf("div_int64 9223372036854775806/-1 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int64_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 -1/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_Neg1_ssa(9223372036854775807); got != -9223372036854775807 {
+ fmt.Printf("div_int64 9223372036854775807/-1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := div_0_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("div_int64 0/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int64_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 0/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int64_ssa(-4294967296); got != 0 {
+ fmt.Printf("div_int64 0/-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int64_ssa(-1); got != 0 {
+ fmt.Printf("div_int64 0/-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int64_ssa(1); got != 0 {
+ fmt.Printf("div_int64 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int64_ssa(4294967296); got != 0 {
+ fmt.Printf("div_int64 0/4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int64_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("div_int64 0/9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int64_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 0/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("div_int64 1/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_1_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("div_int64 -9223372036854775808/1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := div_1_int64_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 1/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_1_ssa(-9223372036854775807); got != -9223372036854775807 {
+ fmt.Printf("div_int64 -9223372036854775807/1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := div_1_int64_ssa(-4294967296); got != 0 {
+ fmt.Printf("div_int64 1/-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_1_ssa(-4294967296); got != -4294967296 {
+ fmt.Printf("div_int64 -4294967296/1 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := div_1_int64_ssa(-1); got != -1 {
+ fmt.Printf("div_int64 1/-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_1_ssa(-1); got != -1 {
+ fmt.Printf("div_int64 -1/1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_1_ssa(0); got != 0 {
+ fmt.Printf("div_int64 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_int64_ssa(1); got != 1 {
+ fmt.Printf("div_int64 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_1_ssa(1); got != 1 {
+ fmt.Printf("div_int64 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_1_int64_ssa(4294967296); got != 0 {
+ fmt.Printf("div_int64 1/4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_1_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("div_int64 4294967296/1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := div_1_int64_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("div_int64 1/9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_1_ssa(9223372036854775806); got != 9223372036854775806 {
+ fmt.Printf("div_int64 9223372036854775806/1 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := div_1_int64_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 1/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_1_ssa(9223372036854775807); got != 9223372036854775807 {
+ fmt.Printf("div_int64 9223372036854775807/1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := div_4294967296_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("div_int64 4294967296/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_4294967296_ssa(-9223372036854775808); got != -2147483648 {
+ fmt.Printf("div_int64 -9223372036854775808/4294967296 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := div_4294967296_int64_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 4294967296/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_4294967296_ssa(-9223372036854775807); got != -2147483647 {
+ fmt.Printf("div_int64 -9223372036854775807/4294967296 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_4294967296_int64_ssa(-4294967296); got != -1 {
+ fmt.Printf("div_int64 4294967296/-4294967296 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_4294967296_ssa(-4294967296); got != -1 {
+ fmt.Printf("div_int64 -4294967296/4294967296 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_4294967296_int64_ssa(-1); got != -4294967296 {
+ fmt.Printf("div_int64 4294967296/-1 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := div_int64_4294967296_ssa(-1); got != 0 {
+ fmt.Printf("div_int64 -1/4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_4294967296_ssa(0); got != 0 {
+ fmt.Printf("div_int64 0/4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_4294967296_int64_ssa(1); got != 4294967296 {
+ fmt.Printf("div_int64 4294967296/1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := div_int64_4294967296_ssa(1); got != 0 {
+ fmt.Printf("div_int64 1/4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_4294967296_int64_ssa(4294967296); got != 1 {
+ fmt.Printf("div_int64 4294967296/4294967296 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_4294967296_ssa(4294967296); got != 1 {
+ fmt.Printf("div_int64 4294967296/4294967296 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_4294967296_int64_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("div_int64 4294967296/9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_4294967296_ssa(9223372036854775806); got != 2147483647 {
+ fmt.Printf("div_int64 9223372036854775806/4294967296 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_4294967296_int64_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 4294967296/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_4294967296_ssa(9223372036854775807); got != 2147483647 {
+ fmt.Printf("div_int64 9223372036854775807/4294967296 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775806_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("div_int64 9223372036854775806/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775806_ssa(-9223372036854775808); got != -1 {
+ fmt.Printf("div_int64 -9223372036854775808/9223372036854775806 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775806_int64_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 9223372036854775806/-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775806_ssa(-9223372036854775807); got != -1 {
+ fmt.Printf("div_int64 -9223372036854775807/9223372036854775806 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775806_int64_ssa(-4294967296); got != -2147483647 {
+ fmt.Printf("div_int64 9223372036854775806/-4294967296 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775806_ssa(-4294967296); got != 0 {
+ fmt.Printf("div_int64 -4294967296/9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775806_int64_ssa(-1); got != -9223372036854775806 {
+ fmt.Printf("div_int64 9223372036854775806/-1 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775806_ssa(-1); got != 0 {
+ fmt.Printf("div_int64 -1/9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775806_ssa(0); got != 0 {
+ fmt.Printf("div_int64 0/9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775806_int64_ssa(1); got != 9223372036854775806 {
+ fmt.Printf("div_int64 9223372036854775806/1 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775806_ssa(1); got != 0 {
+ fmt.Printf("div_int64 1/9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775806_int64_ssa(4294967296); got != 2147483647 {
+ fmt.Printf("div_int64 9223372036854775806/4294967296 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775806_ssa(4294967296); got != 0 {
+ fmt.Printf("div_int64 4294967296/9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775806_int64_ssa(9223372036854775806); got != 1 {
+ fmt.Printf("div_int64 9223372036854775806/9223372036854775806 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775806_ssa(9223372036854775806); got != 1 {
+ fmt.Printf("div_int64 9223372036854775806/9223372036854775806 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775806_int64_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("div_int64 9223372036854775806/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775806_ssa(9223372036854775807); got != 1 {
+ fmt.Printf("div_int64 9223372036854775807/9223372036854775806 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775807_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("div_int64 9223372036854775807/-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775807_ssa(-9223372036854775808); got != -1 {
+ fmt.Printf("div_int64 -9223372036854775808/9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775807_int64_ssa(-9223372036854775807); got != -1 {
+ fmt.Printf("div_int64 9223372036854775807/-9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775807_ssa(-9223372036854775807); got != -1 {
+ fmt.Printf("div_int64 -9223372036854775807/9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775807_int64_ssa(-4294967296); got != -2147483647 {
+ fmt.Printf("div_int64 9223372036854775807/-4294967296 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775807_ssa(-4294967296); got != 0 {
+ fmt.Printf("div_int64 -4294967296/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775807_int64_ssa(-1); got != -9223372036854775807 {
+ fmt.Printf("div_int64 9223372036854775807/-1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775807_ssa(-1); got != 0 {
+ fmt.Printf("div_int64 -1/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775807_ssa(0); got != 0 {
+ fmt.Printf("div_int64 0/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775807_int64_ssa(1); got != 9223372036854775807 {
+ fmt.Printf("div_int64 9223372036854775807/1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775807_ssa(1); got != 0 {
+ fmt.Printf("div_int64 1/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775807_int64_ssa(4294967296); got != 2147483647 {
+ fmt.Printf("div_int64 9223372036854775807/4294967296 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775807_ssa(4294967296); got != 0 {
+ fmt.Printf("div_int64 4294967296/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775807_int64_ssa(9223372036854775806); got != 1 {
+ fmt.Printf("div_int64 9223372036854775807/9223372036854775806 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775807_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("div_int64 9223372036854775806/9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_9223372036854775807_int64_ssa(9223372036854775807); got != 1 {
+ fmt.Printf("div_int64 9223372036854775807/9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int64_9223372036854775807_ssa(9223372036854775807); got != 1 {
+ fmt.Printf("div_int64 9223372036854775807/9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775808_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775808*-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775808_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775808*-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775808_int64_ssa(-9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -9223372036854775808*-9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775808_ssa(-9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -9223372036854775807*-9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775808_int64_ssa(-4294967296); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775808*-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775808_ssa(-4294967296); got != 0 {
+ fmt.Printf("mul_int64 -4294967296*-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775808_int64_ssa(-1); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -9223372036854775808*-1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775808_ssa(-1); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -1*-9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775808_int64_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775808*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775808_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 0*-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775808_int64_ssa(1); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -9223372036854775808*1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775808_ssa(1); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 1*-9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775808_int64_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775808*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775808_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_int64 4294967296*-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775808_int64_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775808*9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775808_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("mul_int64 9223372036854775806*-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775808_int64_ssa(9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -9223372036854775808*9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775808_ssa(9223372036854775807); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 9223372036854775807*-9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775807_int64_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -9223372036854775807*-9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775807_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -9223372036854775808*-9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775807_int64_ssa(-9223372036854775807); got != 1 {
+ fmt.Printf("mul_int64 -9223372036854775807*-9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775807_ssa(-9223372036854775807); got != 1 {
+ fmt.Printf("mul_int64 -9223372036854775807*-9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775807_int64_ssa(-4294967296); got != -4294967296 {
+ fmt.Printf("mul_int64 -9223372036854775807*-4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775807_ssa(-4294967296); got != -4294967296 {
+ fmt.Printf("mul_int64 -4294967296*-9223372036854775807 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775807_int64_ssa(-1); got != 9223372036854775807 {
+ fmt.Printf("mul_int64 -9223372036854775807*-1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775807_ssa(-1); got != 9223372036854775807 {
+ fmt.Printf("mul_int64 -1*-9223372036854775807 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775807_int64_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775807*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775807_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 0*-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775807_int64_ssa(1); got != -9223372036854775807 {
+ fmt.Printf("mul_int64 -9223372036854775807*1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775807_ssa(1); got != -9223372036854775807 {
+ fmt.Printf("mul_int64 1*-9223372036854775807 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775807_int64_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("mul_int64 -9223372036854775807*4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775807_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("mul_int64 4294967296*-9223372036854775807 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775807_int64_ssa(9223372036854775806); got != 9223372036854775806 {
+ fmt.Printf("mul_int64 -9223372036854775807*9223372036854775806 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775807_ssa(9223372036854775806); got != 9223372036854775806 {
+ fmt.Printf("mul_int64 9223372036854775806*-9223372036854775807 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg9223372036854775807_int64_ssa(9223372036854775807); got != -1 {
+ fmt.Printf("mul_int64 -9223372036854775807*9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg9223372036854775807_ssa(9223372036854775807); got != -1 {
+ fmt.Printf("mul_int64 9223372036854775807*-9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg4294967296_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("mul_int64 -4294967296*-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg4294967296_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775808*-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg4294967296_int64_ssa(-9223372036854775807); got != -4294967296 {
+ fmt.Printf("mul_int64 -4294967296*-9223372036854775807 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg4294967296_ssa(-9223372036854775807); got != -4294967296 {
+ fmt.Printf("mul_int64 -9223372036854775807*-4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg4294967296_int64_ssa(-4294967296); got != 0 {
+ fmt.Printf("mul_int64 -4294967296*-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg4294967296_ssa(-4294967296); got != 0 {
+ fmt.Printf("mul_int64 -4294967296*-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg4294967296_int64_ssa(-1); got != 4294967296 {
+ fmt.Printf("mul_int64 -4294967296*-1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg4294967296_ssa(-1); got != 4294967296 {
+ fmt.Printf("mul_int64 -1*-4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg4294967296_int64_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 -4294967296*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg4294967296_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 0*-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg4294967296_int64_ssa(1); got != -4294967296 {
+ fmt.Printf("mul_int64 -4294967296*1 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg4294967296_ssa(1); got != -4294967296 {
+ fmt.Printf("mul_int64 1*-4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg4294967296_int64_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_int64 -4294967296*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg4294967296_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_int64 4294967296*-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg4294967296_int64_ssa(9223372036854775806); got != 8589934592 {
+ fmt.Printf("mul_int64 -4294967296*9223372036854775806 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg4294967296_ssa(9223372036854775806); got != 8589934592 {
+ fmt.Printf("mul_int64 9223372036854775806*-4294967296 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg4294967296_int64_ssa(9223372036854775807); got != 4294967296 {
+ fmt.Printf("mul_int64 -4294967296*9223372036854775807 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg4294967296_ssa(9223372036854775807); got != 4294967296 {
+ fmt.Printf("mul_int64 9223372036854775807*-4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int64_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -1*-9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg1_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -9223372036854775808*-1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int64_ssa(-9223372036854775807); got != 9223372036854775807 {
+ fmt.Printf("mul_int64 -1*-9223372036854775807 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg1_ssa(-9223372036854775807); got != 9223372036854775807 {
+ fmt.Printf("mul_int64 -9223372036854775807*-1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int64_ssa(-4294967296); got != 4294967296 {
+ fmt.Printf("mul_int64 -1*-4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg1_ssa(-4294967296); got != 4294967296 {
+ fmt.Printf("mul_int64 -4294967296*-1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int64_ssa(-1); got != 1 {
+ fmt.Printf("mul_int64 -1*-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg1_ssa(-1); got != 1 {
+ fmt.Printf("mul_int64 -1*-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int64_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 -1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg1_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 0*-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int64_ssa(1); got != -1 {
+ fmt.Printf("mul_int64 -1*1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg1_ssa(1); got != -1 {
+ fmt.Printf("mul_int64 1*-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int64_ssa(4294967296); got != -4294967296 {
+ fmt.Printf("mul_int64 -1*4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg1_ssa(4294967296); got != -4294967296 {
+ fmt.Printf("mul_int64 4294967296*-1 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int64_ssa(9223372036854775806); got != -9223372036854775806 {
+ fmt.Printf("mul_int64 -1*9223372036854775806 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg1_ssa(9223372036854775806); got != -9223372036854775806 {
+ fmt.Printf("mul_int64 9223372036854775806*-1 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int64_ssa(9223372036854775807); got != -9223372036854775807 {
+ fmt.Printf("mul_int64 -1*9223372036854775807 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_Neg1_ssa(9223372036854775807); got != -9223372036854775807 {
+ fmt.Printf("mul_int64 9223372036854775807*-1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("mul_int64 0*-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_0_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775808*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int64_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("mul_int64 0*-9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_0_ssa(-9223372036854775807); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775807*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int64_ssa(-4294967296); got != 0 {
+ fmt.Printf("mul_int64 0*-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_0_ssa(-4294967296); got != 0 {
+ fmt.Printf("mul_int64 -4294967296*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int64_ssa(-1); got != 0 {
+ fmt.Printf("mul_int64 0*-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_0_ssa(-1); got != 0 {
+ fmt.Printf("mul_int64 -1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int64_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_0_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int64_ssa(1); got != 0 {
+ fmt.Printf("mul_int64 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_0_ssa(1); got != 0 {
+ fmt.Printf("mul_int64 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int64_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_int64 0*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_0_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_int64 4294967296*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int64_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("mul_int64 0*9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_0_ssa(9223372036854775806); got != 0 {
+ fmt.Printf("mul_int64 9223372036854775806*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int64_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("mul_int64 0*9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_0_ssa(9223372036854775807); got != 0 {
+ fmt.Printf("mul_int64 9223372036854775807*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int64_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 1*-9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_1_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -9223372036854775808*1 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int64_ssa(-9223372036854775807); got != -9223372036854775807 {
+ fmt.Printf("mul_int64 1*-9223372036854775807 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_1_ssa(-9223372036854775807); got != -9223372036854775807 {
+ fmt.Printf("mul_int64 -9223372036854775807*1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int64_ssa(-4294967296); got != -4294967296 {
+ fmt.Printf("mul_int64 1*-4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_1_ssa(-4294967296); got != -4294967296 {
+ fmt.Printf("mul_int64 -4294967296*1 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int64_ssa(-1); got != -1 {
+ fmt.Printf("mul_int64 1*-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_1_ssa(-1); got != -1 {
+ fmt.Printf("mul_int64 -1*1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int64_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_1_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int64_ssa(1); got != 1 {
+ fmt.Printf("mul_int64 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_1_ssa(1); got != 1 {
+ fmt.Printf("mul_int64 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int64_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("mul_int64 1*4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_1_ssa(4294967296); got != 4294967296 {
+ fmt.Printf("mul_int64 4294967296*1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int64_ssa(9223372036854775806); got != 9223372036854775806 {
+ fmt.Printf("mul_int64 1*9223372036854775806 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_1_ssa(9223372036854775806); got != 9223372036854775806 {
+ fmt.Printf("mul_int64 9223372036854775806*1 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int64_ssa(9223372036854775807); got != 9223372036854775807 {
+ fmt.Printf("mul_int64 1*9223372036854775807 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_1_ssa(9223372036854775807); got != 9223372036854775807 {
+ fmt.Printf("mul_int64 9223372036854775807*1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("mul_int64 4294967296*-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_4294967296_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775808*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_int64_ssa(-9223372036854775807); got != 4294967296 {
+ fmt.Printf("mul_int64 4294967296*-9223372036854775807 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_4294967296_ssa(-9223372036854775807); got != 4294967296 {
+ fmt.Printf("mul_int64 -9223372036854775807*4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_int64_ssa(-4294967296); got != 0 {
+ fmt.Printf("mul_int64 4294967296*-4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_4294967296_ssa(-4294967296); got != 0 {
+ fmt.Printf("mul_int64 -4294967296*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_int64_ssa(-1); got != -4294967296 {
+ fmt.Printf("mul_int64 4294967296*-1 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_4294967296_ssa(-1); got != -4294967296 {
+ fmt.Printf("mul_int64 -1*4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_int64_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 4294967296*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_4294967296_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 0*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_int64_ssa(1); got != 4294967296 {
+ fmt.Printf("mul_int64 4294967296*1 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_4294967296_ssa(1); got != 4294967296 {
+ fmt.Printf("mul_int64 1*4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_int64_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_int64 4294967296*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_4294967296_ssa(4294967296); got != 0 {
+ fmt.Printf("mul_int64 4294967296*4294967296 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_int64_ssa(9223372036854775806); got != -8589934592 {
+ fmt.Printf("mul_int64 4294967296*9223372036854775806 = %d, wanted -8589934592\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_4294967296_ssa(9223372036854775806); got != -8589934592 {
+ fmt.Printf("mul_int64 9223372036854775806*4294967296 = %d, wanted -8589934592\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967296_int64_ssa(9223372036854775807); got != -4294967296 {
+ fmt.Printf("mul_int64 4294967296*9223372036854775807 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_4294967296_ssa(9223372036854775807); got != -4294967296 {
+ fmt.Printf("mul_int64 9223372036854775807*4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775806_int64_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("mul_int64 9223372036854775806*-9223372036854775808 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775806_ssa(-9223372036854775808); got != 0 {
+ fmt.Printf("mul_int64 -9223372036854775808*9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775806_int64_ssa(-9223372036854775807); got != 9223372036854775806 {
+ fmt.Printf("mul_int64 9223372036854775806*-9223372036854775807 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775806_ssa(-9223372036854775807); got != 9223372036854775806 {
+ fmt.Printf("mul_int64 -9223372036854775807*9223372036854775806 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775806_int64_ssa(-4294967296); got != 8589934592 {
+ fmt.Printf("mul_int64 9223372036854775806*-4294967296 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775806_ssa(-4294967296); got != 8589934592 {
+ fmt.Printf("mul_int64 -4294967296*9223372036854775806 = %d, wanted 8589934592\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775806_int64_ssa(-1); got != -9223372036854775806 {
+ fmt.Printf("mul_int64 9223372036854775806*-1 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775806_ssa(-1); got != -9223372036854775806 {
+ fmt.Printf("mul_int64 -1*9223372036854775806 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775806_int64_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 9223372036854775806*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775806_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 0*9223372036854775806 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775806_int64_ssa(1); got != 9223372036854775806 {
+ fmt.Printf("mul_int64 9223372036854775806*1 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775806_ssa(1); got != 9223372036854775806 {
+ fmt.Printf("mul_int64 1*9223372036854775806 = %d, wanted 9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775806_int64_ssa(4294967296); got != -8589934592 {
+ fmt.Printf("mul_int64 9223372036854775806*4294967296 = %d, wanted -8589934592\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775806_ssa(4294967296); got != -8589934592 {
+ fmt.Printf("mul_int64 4294967296*9223372036854775806 = %d, wanted -8589934592\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775806_int64_ssa(9223372036854775806); got != 4 {
+ fmt.Printf("mul_int64 9223372036854775806*9223372036854775806 = %d, wanted 4\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775806_ssa(9223372036854775806); got != 4 {
+ fmt.Printf("mul_int64 9223372036854775806*9223372036854775806 = %d, wanted 4\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775806_int64_ssa(9223372036854775807); got != -9223372036854775806 {
+ fmt.Printf("mul_int64 9223372036854775806*9223372036854775807 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775806_ssa(9223372036854775807); got != -9223372036854775806 {
+ fmt.Printf("mul_int64 9223372036854775807*9223372036854775806 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775807_int64_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 9223372036854775807*-9223372036854775808 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775807_ssa(-9223372036854775808); got != -9223372036854775808 {
+ fmt.Printf("mul_int64 -9223372036854775808*9223372036854775807 = %d, wanted -9223372036854775808\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775807_int64_ssa(-9223372036854775807); got != -1 {
+ fmt.Printf("mul_int64 9223372036854775807*-9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775807_ssa(-9223372036854775807); got != -1 {
+ fmt.Printf("mul_int64 -9223372036854775807*9223372036854775807 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775807_int64_ssa(-4294967296); got != 4294967296 {
+ fmt.Printf("mul_int64 9223372036854775807*-4294967296 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775807_ssa(-4294967296); got != 4294967296 {
+ fmt.Printf("mul_int64 -4294967296*9223372036854775807 = %d, wanted 4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775807_int64_ssa(-1); got != -9223372036854775807 {
+ fmt.Printf("mul_int64 9223372036854775807*-1 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775807_ssa(-1); got != -9223372036854775807 {
+ fmt.Printf("mul_int64 -1*9223372036854775807 = %d, wanted -9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775807_int64_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 9223372036854775807*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775807_ssa(0); got != 0 {
+ fmt.Printf("mul_int64 0*9223372036854775807 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775807_int64_ssa(1); got != 9223372036854775807 {
+ fmt.Printf("mul_int64 9223372036854775807*1 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775807_ssa(1); got != 9223372036854775807 {
+ fmt.Printf("mul_int64 1*9223372036854775807 = %d, wanted 9223372036854775807\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775807_int64_ssa(4294967296); got != -4294967296 {
+ fmt.Printf("mul_int64 9223372036854775807*4294967296 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775807_ssa(4294967296); got != -4294967296 {
+ fmt.Printf("mul_int64 4294967296*9223372036854775807 = %d, wanted -4294967296\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775807_int64_ssa(9223372036854775806); got != -9223372036854775806 {
+ fmt.Printf("mul_int64 9223372036854775807*9223372036854775806 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775807_ssa(9223372036854775806); got != -9223372036854775806 {
+ fmt.Printf("mul_int64 9223372036854775806*9223372036854775807 = %d, wanted -9223372036854775806\n", got)
+ failed = true
+ }
+
+ if got := mul_9223372036854775807_int64_ssa(9223372036854775807); got != 1 {
+ fmt.Printf("mul_int64 9223372036854775807*9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int64_9223372036854775807_ssa(9223372036854775807); got != 1 {
+ fmt.Printf("mul_int64 9223372036854775807*9223372036854775807 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint32_ssa(0); got != 0 {
+ fmt.Printf("add_uint32 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint32_0_ssa(0); got != 0 {
+ fmt.Printf("add_uint32 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint32_ssa(1); got != 1 {
+ fmt.Printf("add_uint32 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_uint32_0_ssa(1); got != 1 {
+ fmt.Printf("add_uint32 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint32_ssa(4294967295); got != 4294967295 {
+ fmt.Printf("add_uint32 0+4294967295 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_uint32_0_ssa(4294967295); got != 4294967295 {
+ fmt.Printf("add_uint32 4294967295+0 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint32_ssa(0); got != 1 {
+ fmt.Printf("add_uint32 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_uint32_1_ssa(0); got != 1 {
+ fmt.Printf("add_uint32 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint32_ssa(1); got != 2 {
+ fmt.Printf("add_uint32 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_uint32_1_ssa(1); got != 2 {
+ fmt.Printf("add_uint32 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint32_ssa(4294967295); got != 0 {
+ fmt.Printf("add_uint32 1+4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint32_1_ssa(4294967295); got != 0 {
+ fmt.Printf("add_uint32 4294967295+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_4294967295_uint32_ssa(0); got != 4294967295 {
+ fmt.Printf("add_uint32 4294967295+0 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_uint32_4294967295_ssa(0); got != 4294967295 {
+ fmt.Printf("add_uint32 0+4294967295 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := add_4294967295_uint32_ssa(1); got != 0 {
+ fmt.Printf("add_uint32 4294967295+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint32_4294967295_ssa(1); got != 0 {
+ fmt.Printf("add_uint32 1+4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_4294967295_uint32_ssa(4294967295); got != 4294967294 {
+ fmt.Printf("add_uint32 4294967295+4294967295 = %d, wanted 4294967294\n", got)
+ failed = true
+ }
+
+ if got := add_uint32_4294967295_ssa(4294967295); got != 4294967294 {
+ fmt.Printf("add_uint32 4294967295+4294967295 = %d, wanted 4294967294\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint32_ssa(0); got != 0 {
+ fmt.Printf("sub_uint32 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint32_0_ssa(0); got != 0 {
+ fmt.Printf("sub_uint32 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint32_ssa(1); got != 4294967295 {
+ fmt.Printf("sub_uint32 0-1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_uint32_0_ssa(1); got != 1 {
+ fmt.Printf("sub_uint32 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint32_ssa(4294967295); got != 1 {
+ fmt.Printf("sub_uint32 0-4294967295 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_uint32_0_ssa(4294967295); got != 4294967295 {
+ fmt.Printf("sub_uint32 4294967295-0 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint32_ssa(0); got != 1 {
+ fmt.Printf("sub_uint32 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_uint32_1_ssa(0); got != 4294967295 {
+ fmt.Printf("sub_uint32 0-1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint32_ssa(1); got != 0 {
+ fmt.Printf("sub_uint32 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint32_1_ssa(1); got != 0 {
+ fmt.Printf("sub_uint32 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint32_ssa(4294967295); got != 2 {
+ fmt.Printf("sub_uint32 1-4294967295 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_uint32_1_ssa(4294967295); got != 4294967294 {
+ fmt.Printf("sub_uint32 4294967295-1 = %d, wanted 4294967294\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967295_uint32_ssa(0); got != 4294967295 {
+ fmt.Printf("sub_uint32 4294967295-0 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := sub_uint32_4294967295_ssa(0); got != 1 {
+ fmt.Printf("sub_uint32 0-4294967295 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967295_uint32_ssa(1); got != 4294967294 {
+ fmt.Printf("sub_uint32 4294967295-1 = %d, wanted 4294967294\n", got)
+ failed = true
+ }
+
+ if got := sub_uint32_4294967295_ssa(1); got != 2 {
+ fmt.Printf("sub_uint32 1-4294967295 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_4294967295_uint32_ssa(4294967295); got != 0 {
+ fmt.Printf("sub_uint32 4294967295-4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint32_4294967295_ssa(4294967295); got != 0 {
+ fmt.Printf("sub_uint32 4294967295-4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_uint32_ssa(1); got != 0 {
+ fmt.Printf("div_uint32 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_uint32_ssa(4294967295); got != 0 {
+ fmt.Printf("div_uint32 0/4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_uint32_1_ssa(0); got != 0 {
+ fmt.Printf("div_uint32 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_uint32_ssa(1); got != 1 {
+ fmt.Printf("div_uint32 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_uint32_1_ssa(1); got != 1 {
+ fmt.Printf("div_uint32 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_1_uint32_ssa(4294967295); got != 0 {
+ fmt.Printf("div_uint32 1/4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_uint32_1_ssa(4294967295); got != 4294967295 {
+ fmt.Printf("div_uint32 4294967295/1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := div_uint32_4294967295_ssa(0); got != 0 {
+ fmt.Printf("div_uint32 0/4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_4294967295_uint32_ssa(1); got != 4294967295 {
+ fmt.Printf("div_uint32 4294967295/1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := div_uint32_4294967295_ssa(1); got != 0 {
+ fmt.Printf("div_uint32 1/4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_4294967295_uint32_ssa(4294967295); got != 1 {
+ fmt.Printf("div_uint32 4294967295/4294967295 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_uint32_4294967295_ssa(4294967295); got != 1 {
+ fmt.Printf("div_uint32 4294967295/4294967295 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint32_ssa(0); got != 0 {
+ fmt.Printf("mul_uint32 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint32_0_ssa(0); got != 0 {
+ fmt.Printf("mul_uint32 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint32_ssa(1); got != 0 {
+ fmt.Printf("mul_uint32 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint32_0_ssa(1); got != 0 {
+ fmt.Printf("mul_uint32 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint32_ssa(4294967295); got != 0 {
+ fmt.Printf("mul_uint32 0*4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint32_0_ssa(4294967295); got != 0 {
+ fmt.Printf("mul_uint32 4294967295*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint32_ssa(0); got != 0 {
+ fmt.Printf("mul_uint32 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint32_1_ssa(0); got != 0 {
+ fmt.Printf("mul_uint32 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint32_ssa(1); got != 1 {
+ fmt.Printf("mul_uint32 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_uint32_1_ssa(1); got != 1 {
+ fmt.Printf("mul_uint32 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint32_ssa(4294967295); got != 4294967295 {
+ fmt.Printf("mul_uint32 1*4294967295 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := mul_uint32_1_ssa(4294967295); got != 4294967295 {
+ fmt.Printf("mul_uint32 4294967295*1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967295_uint32_ssa(0); got != 0 {
+ fmt.Printf("mul_uint32 4294967295*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint32_4294967295_ssa(0); got != 0 {
+ fmt.Printf("mul_uint32 0*4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967295_uint32_ssa(1); got != 4294967295 {
+ fmt.Printf("mul_uint32 4294967295*1 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := mul_uint32_4294967295_ssa(1); got != 4294967295 {
+ fmt.Printf("mul_uint32 1*4294967295 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := mul_4294967295_uint32_ssa(4294967295); got != 1 {
+ fmt.Printf("mul_uint32 4294967295*4294967295 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_uint32_4294967295_ssa(4294967295); got != 1 {
+ fmt.Printf("mul_uint32 4294967295*4294967295 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint32_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint32 0<<0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint32_0_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint32 0<<0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint32_ssa(1); got != 0 {
+ fmt.Printf("lsh_uint32 0<<1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint32_0_ssa(1); got != 1 {
+ fmt.Printf("lsh_uint32 1<<0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint32_ssa(4294967295); got != 0 {
+ fmt.Printf("lsh_uint32 0<<4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint32_0_ssa(4294967295); got != 4294967295 {
+ fmt.Printf("lsh_uint32 4294967295<<0 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint32_ssa(0); got != 1 {
+ fmt.Printf("lsh_uint32 1<<0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint32_1_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint32 0<<1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint32_ssa(1); got != 2 {
+ fmt.Printf("lsh_uint32 1<<1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint32_1_ssa(1); got != 2 {
+ fmt.Printf("lsh_uint32 1<<1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint32_ssa(4294967295); got != 0 {
+ fmt.Printf("lsh_uint32 1<<4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint32_1_ssa(4294967295); got != 4294967294 {
+ fmt.Printf("lsh_uint32 4294967295<<1 = %d, wanted 4294967294\n", got)
+ failed = true
+ }
+
+ if got := lsh_4294967295_uint32_ssa(0); got != 4294967295 {
+ fmt.Printf("lsh_uint32 4294967295<<0 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint32_4294967295_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint32 0<<4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_4294967295_uint32_ssa(1); got != 4294967294 {
+ fmt.Printf("lsh_uint32 4294967295<<1 = %d, wanted 4294967294\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint32_4294967295_ssa(1); got != 0 {
+ fmt.Printf("lsh_uint32 1<<4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_4294967295_uint32_ssa(4294967295); got != 0 {
+ fmt.Printf("lsh_uint32 4294967295<<4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint32_4294967295_ssa(4294967295); got != 0 {
+ fmt.Printf("lsh_uint32 4294967295<<4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint32_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint32 0>>0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint32_0_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint32 0>>0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint32_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint32 0>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint32_0_ssa(1); got != 1 {
+ fmt.Printf("rsh_uint32 1>>0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint32_ssa(4294967295); got != 0 {
+ fmt.Printf("rsh_uint32 0>>4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint32_0_ssa(4294967295); got != 4294967295 {
+ fmt.Printf("rsh_uint32 4294967295>>0 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint32_ssa(0); got != 1 {
+ fmt.Printf("rsh_uint32 1>>0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint32_1_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint32 0>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint32_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint32 1>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint32_1_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint32 1>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint32_ssa(4294967295); got != 0 {
+ fmt.Printf("rsh_uint32 1>>4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint32_1_ssa(4294967295); got != 2147483647 {
+ fmt.Printf("rsh_uint32 4294967295>>1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := rsh_4294967295_uint32_ssa(0); got != 4294967295 {
+ fmt.Printf("rsh_uint32 4294967295>>0 = %d, wanted 4294967295\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint32_4294967295_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint32 0>>4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_4294967295_uint32_ssa(1); got != 2147483647 {
+ fmt.Printf("rsh_uint32 4294967295>>1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint32_4294967295_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint32 1>>4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_4294967295_uint32_ssa(4294967295); got != 0 {
+ fmt.Printf("rsh_uint32 4294967295>>4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint32_4294967295_ssa(4294967295); got != 0 {
+ fmt.Printf("rsh_uint32 4294967295>>4294967295 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483648_int32_ssa(-2147483648); got != 0 {
+ fmt.Printf("add_int32 -2147483648+-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483648_ssa(-2147483648); got != 0 {
+ fmt.Printf("add_int32 -2147483648+-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483648_int32_ssa(-2147483647); got != 1 {
+ fmt.Printf("add_int32 -2147483648+-2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483648_ssa(-2147483647); got != 1 {
+ fmt.Printf("add_int32 -2147483647+-2147483648 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483648_int32_ssa(-1); got != 2147483647 {
+ fmt.Printf("add_int32 -2147483648+-1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483648_ssa(-1); got != 2147483647 {
+ fmt.Printf("add_int32 -1+-2147483648 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483648_int32_ssa(0); got != -2147483648 {
+ fmt.Printf("add_int32 -2147483648+0 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483648_ssa(0); got != -2147483648 {
+ fmt.Printf("add_int32 0+-2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483648_int32_ssa(1); got != -2147483647 {
+ fmt.Printf("add_int32 -2147483648+1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483648_ssa(1); got != -2147483647 {
+ fmt.Printf("add_int32 1+-2147483648 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483648_int32_ssa(2147483647); got != -1 {
+ fmt.Printf("add_int32 -2147483648+2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483648_ssa(2147483647); got != -1 {
+ fmt.Printf("add_int32 2147483647+-2147483648 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483647_int32_ssa(-2147483648); got != 1 {
+ fmt.Printf("add_int32 -2147483647+-2147483648 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483647_ssa(-2147483648); got != 1 {
+ fmt.Printf("add_int32 -2147483648+-2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483647_int32_ssa(-2147483647); got != 2 {
+ fmt.Printf("add_int32 -2147483647+-2147483647 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483647_ssa(-2147483647); got != 2 {
+ fmt.Printf("add_int32 -2147483647+-2147483647 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483647_int32_ssa(-1); got != -2147483648 {
+ fmt.Printf("add_int32 -2147483647+-1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483647_ssa(-1); got != -2147483648 {
+ fmt.Printf("add_int32 -1+-2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483647_int32_ssa(0); got != -2147483647 {
+ fmt.Printf("add_int32 -2147483647+0 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483647_ssa(0); got != -2147483647 {
+ fmt.Printf("add_int32 0+-2147483647 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483647_int32_ssa(1); got != -2147483646 {
+ fmt.Printf("add_int32 -2147483647+1 = %d, wanted -2147483646\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483647_ssa(1); got != -2147483646 {
+ fmt.Printf("add_int32 1+-2147483647 = %d, wanted -2147483646\n", got)
+ failed = true
+ }
+
+ if got := add_Neg2147483647_int32_ssa(2147483647); got != 0 {
+ fmt.Printf("add_int32 -2147483647+2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg2147483647_ssa(2147483647); got != 0 {
+ fmt.Printf("add_int32 2147483647+-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int32_ssa(-2147483648); got != 2147483647 {
+ fmt.Printf("add_int32 -1+-2147483648 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg1_ssa(-2147483648); got != 2147483647 {
+ fmt.Printf("add_int32 -2147483648+-1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int32_ssa(-2147483647); got != -2147483648 {
+ fmt.Printf("add_int32 -1+-2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg1_ssa(-2147483647); got != -2147483648 {
+ fmt.Printf("add_int32 -2147483647+-1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int32_ssa(-1); got != -2 {
+ fmt.Printf("add_int32 -1+-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg1_ssa(-1); got != -2 {
+ fmt.Printf("add_int32 -1+-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int32_ssa(0); got != -1 {
+ fmt.Printf("add_int32 -1+0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg1_ssa(0); got != -1 {
+ fmt.Printf("add_int32 0+-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int32_ssa(1); got != 0 {
+ fmt.Printf("add_int32 -1+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg1_ssa(1); got != 0 {
+ fmt.Printf("add_int32 1+-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int32_ssa(2147483647); got != 2147483646 {
+ fmt.Printf("add_int32 -1+2147483647 = %d, wanted 2147483646\n", got)
+ failed = true
+ }
+
+ if got := add_int32_Neg1_ssa(2147483647); got != 2147483646 {
+ fmt.Printf("add_int32 2147483647+-1 = %d, wanted 2147483646\n", got)
+ failed = true
+ }
+
+ if got := add_0_int32_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("add_int32 0+-2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_int32_0_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("add_int32 -2147483648+0 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_0_int32_ssa(-2147483647); got != -2147483647 {
+ fmt.Printf("add_int32 0+-2147483647 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_int32_0_ssa(-2147483647); got != -2147483647 {
+ fmt.Printf("add_int32 -2147483647+0 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_0_int32_ssa(-1); got != -1 {
+ fmt.Printf("add_int32 0+-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int32_0_ssa(-1); got != -1 {
+ fmt.Printf("add_int32 -1+0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_0_int32_ssa(0); got != 0 {
+ fmt.Printf("add_int32 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int32_0_ssa(0); got != 0 {
+ fmt.Printf("add_int32 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_0_int32_ssa(1); got != 1 {
+ fmt.Printf("add_int32 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int32_0_ssa(1); got != 1 {
+ fmt.Printf("add_int32 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_0_int32_ssa(2147483647); got != 2147483647 {
+ fmt.Printf("add_int32 0+2147483647 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_int32_0_ssa(2147483647); got != 2147483647 {
+ fmt.Printf("add_int32 2147483647+0 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_1_int32_ssa(-2147483648); got != -2147483647 {
+ fmt.Printf("add_int32 1+-2147483648 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_int32_1_ssa(-2147483648); got != -2147483647 {
+ fmt.Printf("add_int32 -2147483648+1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_1_int32_ssa(-2147483647); got != -2147483646 {
+ fmt.Printf("add_int32 1+-2147483647 = %d, wanted -2147483646\n", got)
+ failed = true
+ }
+
+ if got := add_int32_1_ssa(-2147483647); got != -2147483646 {
+ fmt.Printf("add_int32 -2147483647+1 = %d, wanted -2147483646\n", got)
+ failed = true
+ }
+
+ if got := add_1_int32_ssa(-1); got != 0 {
+ fmt.Printf("add_int32 1+-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int32_1_ssa(-1); got != 0 {
+ fmt.Printf("add_int32 -1+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_1_int32_ssa(0); got != 1 {
+ fmt.Printf("add_int32 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int32_1_ssa(0); got != 1 {
+ fmt.Printf("add_int32 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_1_int32_ssa(1); got != 2 {
+ fmt.Printf("add_int32 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_int32_1_ssa(1); got != 2 {
+ fmt.Printf("add_int32 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_1_int32_ssa(2147483647); got != -2147483648 {
+ fmt.Printf("add_int32 1+2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_int32_1_ssa(2147483647); got != -2147483648 {
+ fmt.Printf("add_int32 2147483647+1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_2147483647_int32_ssa(-2147483648); got != -1 {
+ fmt.Printf("add_int32 2147483647+-2147483648 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int32_2147483647_ssa(-2147483648); got != -1 {
+ fmt.Printf("add_int32 -2147483648+2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_2147483647_int32_ssa(-2147483647); got != 0 {
+ fmt.Printf("add_int32 2147483647+-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int32_2147483647_ssa(-2147483647); got != 0 {
+ fmt.Printf("add_int32 -2147483647+2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_2147483647_int32_ssa(-1); got != 2147483646 {
+ fmt.Printf("add_int32 2147483647+-1 = %d, wanted 2147483646\n", got)
+ failed = true
+ }
+
+ if got := add_int32_2147483647_ssa(-1); got != 2147483646 {
+ fmt.Printf("add_int32 -1+2147483647 = %d, wanted 2147483646\n", got)
+ failed = true
+ }
+
+ if got := add_2147483647_int32_ssa(0); got != 2147483647 {
+ fmt.Printf("add_int32 2147483647+0 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_int32_2147483647_ssa(0); got != 2147483647 {
+ fmt.Printf("add_int32 0+2147483647 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := add_2147483647_int32_ssa(1); got != -2147483648 {
+ fmt.Printf("add_int32 2147483647+1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_int32_2147483647_ssa(1); got != -2147483648 {
+ fmt.Printf("add_int32 1+2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := add_2147483647_int32_ssa(2147483647); got != -2 {
+ fmt.Printf("add_int32 2147483647+2147483647 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int32_2147483647_ssa(2147483647); got != -2 {
+ fmt.Printf("add_int32 2147483647+2147483647 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483648_int32_ssa(-2147483648); got != 0 {
+ fmt.Printf("sub_int32 -2147483648--2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483648_ssa(-2147483648); got != 0 {
+ fmt.Printf("sub_int32 -2147483648--2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483648_int32_ssa(-2147483647); got != -1 {
+ fmt.Printf("sub_int32 -2147483648--2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483648_ssa(-2147483647); got != 1 {
+ fmt.Printf("sub_int32 -2147483647--2147483648 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483648_int32_ssa(-1); got != -2147483647 {
+ fmt.Printf("sub_int32 -2147483648--1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483648_ssa(-1); got != 2147483647 {
+ fmt.Printf("sub_int32 -1--2147483648 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483648_int32_ssa(0); got != -2147483648 {
+ fmt.Printf("sub_int32 -2147483648-0 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483648_ssa(0); got != -2147483648 {
+ fmt.Printf("sub_int32 0--2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483648_int32_ssa(1); got != 2147483647 {
+ fmt.Printf("sub_int32 -2147483648-1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483648_ssa(1); got != -2147483647 {
+ fmt.Printf("sub_int32 1--2147483648 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483648_int32_ssa(2147483647); got != 1 {
+ fmt.Printf("sub_int32 -2147483648-2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483648_ssa(2147483647); got != -1 {
+ fmt.Printf("sub_int32 2147483647--2147483648 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483647_int32_ssa(-2147483648); got != 1 {
+ fmt.Printf("sub_int32 -2147483647--2147483648 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483647_ssa(-2147483648); got != -1 {
+ fmt.Printf("sub_int32 -2147483648--2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483647_int32_ssa(-2147483647); got != 0 {
+ fmt.Printf("sub_int32 -2147483647--2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483647_ssa(-2147483647); got != 0 {
+ fmt.Printf("sub_int32 -2147483647--2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483647_int32_ssa(-1); got != -2147483646 {
+ fmt.Printf("sub_int32 -2147483647--1 = %d, wanted -2147483646\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483647_ssa(-1); got != 2147483646 {
+ fmt.Printf("sub_int32 -1--2147483647 = %d, wanted 2147483646\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483647_int32_ssa(0); got != -2147483647 {
+ fmt.Printf("sub_int32 -2147483647-0 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483647_ssa(0); got != 2147483647 {
+ fmt.Printf("sub_int32 0--2147483647 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483647_int32_ssa(1); got != -2147483648 {
+ fmt.Printf("sub_int32 -2147483647-1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483647_ssa(1); got != -2147483648 {
+ fmt.Printf("sub_int32 1--2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg2147483647_int32_ssa(2147483647); got != 2 {
+ fmt.Printf("sub_int32 -2147483647-2147483647 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg2147483647_ssa(2147483647); got != -2 {
+ fmt.Printf("sub_int32 2147483647--2147483647 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int32_ssa(-2147483648); got != 2147483647 {
+ fmt.Printf("sub_int32 -1--2147483648 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg1_ssa(-2147483648); got != -2147483647 {
+ fmt.Printf("sub_int32 -2147483648--1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int32_ssa(-2147483647); got != 2147483646 {
+ fmt.Printf("sub_int32 -1--2147483647 = %d, wanted 2147483646\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg1_ssa(-2147483647); got != -2147483646 {
+ fmt.Printf("sub_int32 -2147483647--1 = %d, wanted -2147483646\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int32_ssa(-1); got != 0 {
+ fmt.Printf("sub_int32 -1--1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg1_ssa(-1); got != 0 {
+ fmt.Printf("sub_int32 -1--1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int32_ssa(0); got != -1 {
+ fmt.Printf("sub_int32 -1-0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg1_ssa(0); got != 1 {
+ fmt.Printf("sub_int32 0--1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int32_ssa(1); got != -2 {
+ fmt.Printf("sub_int32 -1-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg1_ssa(1); got != 2 {
+ fmt.Printf("sub_int32 1--1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int32_ssa(2147483647); got != -2147483648 {
+ fmt.Printf("sub_int32 -1-2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_Neg1_ssa(2147483647); got != -2147483648 {
+ fmt.Printf("sub_int32 2147483647--1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int32_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("sub_int32 0--2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_0_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("sub_int32 -2147483648-0 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int32_ssa(-2147483647); got != 2147483647 {
+ fmt.Printf("sub_int32 0--2147483647 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_0_ssa(-2147483647); got != -2147483647 {
+ fmt.Printf("sub_int32 -2147483647-0 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int32_ssa(-1); got != 1 {
+ fmt.Printf("sub_int32 0--1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_0_ssa(-1); got != -1 {
+ fmt.Printf("sub_int32 -1-0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int32_ssa(0); got != 0 {
+ fmt.Printf("sub_int32 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_0_ssa(0); got != 0 {
+ fmt.Printf("sub_int32 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int32_ssa(1); got != -1 {
+ fmt.Printf("sub_int32 0-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_0_ssa(1); got != 1 {
+ fmt.Printf("sub_int32 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int32_ssa(2147483647); got != -2147483647 {
+ fmt.Printf("sub_int32 0-2147483647 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_0_ssa(2147483647); got != 2147483647 {
+ fmt.Printf("sub_int32 2147483647-0 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int32_ssa(-2147483648); got != -2147483647 {
+ fmt.Printf("sub_int32 1--2147483648 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_1_ssa(-2147483648); got != 2147483647 {
+ fmt.Printf("sub_int32 -2147483648-1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int32_ssa(-2147483647); got != -2147483648 {
+ fmt.Printf("sub_int32 1--2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_1_ssa(-2147483647); got != -2147483648 {
+ fmt.Printf("sub_int32 -2147483647-1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int32_ssa(-1); got != 2 {
+ fmt.Printf("sub_int32 1--1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_1_ssa(-1); got != -2 {
+ fmt.Printf("sub_int32 -1-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int32_ssa(0); got != 1 {
+ fmt.Printf("sub_int32 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_1_ssa(0); got != -1 {
+ fmt.Printf("sub_int32 0-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int32_ssa(1); got != 0 {
+ fmt.Printf("sub_int32 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_1_ssa(1); got != 0 {
+ fmt.Printf("sub_int32 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int32_ssa(2147483647); got != -2147483646 {
+ fmt.Printf("sub_int32 1-2147483647 = %d, wanted -2147483646\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_1_ssa(2147483647); got != 2147483646 {
+ fmt.Printf("sub_int32 2147483647-1 = %d, wanted 2147483646\n", got)
+ failed = true
+ }
+
+ if got := sub_2147483647_int32_ssa(-2147483648); got != -1 {
+ fmt.Printf("sub_int32 2147483647--2147483648 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_2147483647_ssa(-2147483648); got != 1 {
+ fmt.Printf("sub_int32 -2147483648-2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_2147483647_int32_ssa(-2147483647); got != -2 {
+ fmt.Printf("sub_int32 2147483647--2147483647 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_2147483647_ssa(-2147483647); got != 2 {
+ fmt.Printf("sub_int32 -2147483647-2147483647 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_2147483647_int32_ssa(-1); got != -2147483648 {
+ fmt.Printf("sub_int32 2147483647--1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_2147483647_ssa(-1); got != -2147483648 {
+ fmt.Printf("sub_int32 -1-2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := sub_2147483647_int32_ssa(0); got != 2147483647 {
+ fmt.Printf("sub_int32 2147483647-0 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_2147483647_ssa(0); got != -2147483647 {
+ fmt.Printf("sub_int32 0-2147483647 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := sub_2147483647_int32_ssa(1); got != 2147483646 {
+ fmt.Printf("sub_int32 2147483647-1 = %d, wanted 2147483646\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_2147483647_ssa(1); got != -2147483646 {
+ fmt.Printf("sub_int32 1-2147483647 = %d, wanted -2147483646\n", got)
+ failed = true
+ }
+
+ if got := sub_2147483647_int32_ssa(2147483647); got != 0 {
+ fmt.Printf("sub_int32 2147483647-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int32_2147483647_ssa(2147483647); got != 0 {
+ fmt.Printf("sub_int32 2147483647-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg2147483648_int32_ssa(-2147483648); got != 1 {
+ fmt.Printf("div_int32 -2147483648/-2147483648 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483648_ssa(-2147483648); got != 1 {
+ fmt.Printf("div_int32 -2147483648/-2147483648 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg2147483648_int32_ssa(-2147483647); got != 1 {
+ fmt.Printf("div_int32 -2147483648/-2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483648_ssa(-2147483647); got != 0 {
+ fmt.Printf("div_int32 -2147483647/-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg2147483648_int32_ssa(-1); got != -2147483648 {
+ fmt.Printf("div_int32 -2147483648/-1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483648_ssa(-1); got != 0 {
+ fmt.Printf("div_int32 -1/-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483648_ssa(0); got != 0 {
+ fmt.Printf("div_int32 0/-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg2147483648_int32_ssa(1); got != -2147483648 {
+ fmt.Printf("div_int32 -2147483648/1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483648_ssa(1); got != 0 {
+ fmt.Printf("div_int32 1/-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg2147483648_int32_ssa(2147483647); got != -1 {
+ fmt.Printf("div_int32 -2147483648/2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483648_ssa(2147483647); got != 0 {
+ fmt.Printf("div_int32 2147483647/-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg2147483647_int32_ssa(-2147483648); got != 0 {
+ fmt.Printf("div_int32 -2147483647/-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483647_ssa(-2147483648); got != 1 {
+ fmt.Printf("div_int32 -2147483648/-2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg2147483647_int32_ssa(-2147483647); got != 1 {
+ fmt.Printf("div_int32 -2147483647/-2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483647_ssa(-2147483647); got != 1 {
+ fmt.Printf("div_int32 -2147483647/-2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg2147483647_int32_ssa(-1); got != 2147483647 {
+ fmt.Printf("div_int32 -2147483647/-1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483647_ssa(-1); got != 0 {
+ fmt.Printf("div_int32 -1/-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483647_ssa(0); got != 0 {
+ fmt.Printf("div_int32 0/-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg2147483647_int32_ssa(1); got != -2147483647 {
+ fmt.Printf("div_int32 -2147483647/1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483647_ssa(1); got != 0 {
+ fmt.Printf("div_int32 1/-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg2147483647_int32_ssa(2147483647); got != -1 {
+ fmt.Printf("div_int32 -2147483647/2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg2147483647_ssa(2147483647); got != -1 {
+ fmt.Printf("div_int32 2147483647/-2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int32_ssa(-2147483648); got != 0 {
+ fmt.Printf("div_int32 -1/-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg1_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("div_int32 -2147483648/-1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int32_ssa(-2147483647); got != 0 {
+ fmt.Printf("div_int32 -1/-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg1_ssa(-2147483647); got != 2147483647 {
+ fmt.Printf("div_int32 -2147483647/-1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int32_ssa(-1); got != 1 {
+ fmt.Printf("div_int32 -1/-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg1_ssa(-1); got != 1 {
+ fmt.Printf("div_int32 -1/-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg1_ssa(0); got != 0 {
+ fmt.Printf("div_int32 0/-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int32_ssa(1); got != -1 {
+ fmt.Printf("div_int32 -1/1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg1_ssa(1); got != -1 {
+ fmt.Printf("div_int32 1/-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int32_ssa(2147483647); got != 0 {
+ fmt.Printf("div_int32 -1/2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int32_Neg1_ssa(2147483647); got != -2147483647 {
+ fmt.Printf("div_int32 2147483647/-1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_0_int32_ssa(-2147483648); got != 0 {
+ fmt.Printf("div_int32 0/-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int32_ssa(-2147483647); got != 0 {
+ fmt.Printf("div_int32 0/-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int32_ssa(-1); got != 0 {
+ fmt.Printf("div_int32 0/-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int32_ssa(1); got != 0 {
+ fmt.Printf("div_int32 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int32_ssa(2147483647); got != 0 {
+ fmt.Printf("div_int32 0/2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_int32_ssa(-2147483648); got != 0 {
+ fmt.Printf("div_int32 1/-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int32_1_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("div_int32 -2147483648/1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := div_1_int32_ssa(-2147483647); got != 0 {
+ fmt.Printf("div_int32 1/-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int32_1_ssa(-2147483647); got != -2147483647 {
+ fmt.Printf("div_int32 -2147483647/1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_1_int32_ssa(-1); got != -1 {
+ fmt.Printf("div_int32 1/-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_1_ssa(-1); got != -1 {
+ fmt.Printf("div_int32 -1/1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_1_ssa(0); got != 0 {
+ fmt.Printf("div_int32 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_int32_ssa(1); got != 1 {
+ fmt.Printf("div_int32 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_1_ssa(1); got != 1 {
+ fmt.Printf("div_int32 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_1_int32_ssa(2147483647); got != 0 {
+ fmt.Printf("div_int32 1/2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int32_1_ssa(2147483647); got != 2147483647 {
+ fmt.Printf("div_int32 2147483647/1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_2147483647_int32_ssa(-2147483648); got != 0 {
+ fmt.Printf("div_int32 2147483647/-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int32_2147483647_ssa(-2147483648); got != -1 {
+ fmt.Printf("div_int32 -2147483648/2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_2147483647_int32_ssa(-2147483647); got != -1 {
+ fmt.Printf("div_int32 2147483647/-2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_2147483647_ssa(-2147483647); got != -1 {
+ fmt.Printf("div_int32 -2147483647/2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_2147483647_int32_ssa(-1); got != -2147483647 {
+ fmt.Printf("div_int32 2147483647/-1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_int32_2147483647_ssa(-1); got != 0 {
+ fmt.Printf("div_int32 -1/2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int32_2147483647_ssa(0); got != 0 {
+ fmt.Printf("div_int32 0/2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_2147483647_int32_ssa(1); got != 2147483647 {
+ fmt.Printf("div_int32 2147483647/1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := div_int32_2147483647_ssa(1); got != 0 {
+ fmt.Printf("div_int32 1/2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_2147483647_int32_ssa(2147483647); got != 1 {
+ fmt.Printf("div_int32 2147483647/2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int32_2147483647_ssa(2147483647); got != 1 {
+ fmt.Printf("div_int32 2147483647/2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483648_int32_ssa(-2147483648); got != 0 {
+ fmt.Printf("mul_int32 -2147483648*-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483648_ssa(-2147483648); got != 0 {
+ fmt.Printf("mul_int32 -2147483648*-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483648_int32_ssa(-2147483647); got != -2147483648 {
+ fmt.Printf("mul_int32 -2147483648*-2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483648_ssa(-2147483647); got != -2147483648 {
+ fmt.Printf("mul_int32 -2147483647*-2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483648_int32_ssa(-1); got != -2147483648 {
+ fmt.Printf("mul_int32 -2147483648*-1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483648_ssa(-1); got != -2147483648 {
+ fmt.Printf("mul_int32 -1*-2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483648_int32_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 -2147483648*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483648_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 0*-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483648_int32_ssa(1); got != -2147483648 {
+ fmt.Printf("mul_int32 -2147483648*1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483648_ssa(1); got != -2147483648 {
+ fmt.Printf("mul_int32 1*-2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483648_int32_ssa(2147483647); got != -2147483648 {
+ fmt.Printf("mul_int32 -2147483648*2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483648_ssa(2147483647); got != -2147483648 {
+ fmt.Printf("mul_int32 2147483647*-2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483647_int32_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("mul_int32 -2147483647*-2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483647_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("mul_int32 -2147483648*-2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483647_int32_ssa(-2147483647); got != 1 {
+ fmt.Printf("mul_int32 -2147483647*-2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483647_ssa(-2147483647); got != 1 {
+ fmt.Printf("mul_int32 -2147483647*-2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483647_int32_ssa(-1); got != 2147483647 {
+ fmt.Printf("mul_int32 -2147483647*-1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483647_ssa(-1); got != 2147483647 {
+ fmt.Printf("mul_int32 -1*-2147483647 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483647_int32_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 -2147483647*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483647_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 0*-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483647_int32_ssa(1); got != -2147483647 {
+ fmt.Printf("mul_int32 -2147483647*1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483647_ssa(1); got != -2147483647 {
+ fmt.Printf("mul_int32 1*-2147483647 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg2147483647_int32_ssa(2147483647); got != -1 {
+ fmt.Printf("mul_int32 -2147483647*2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg2147483647_ssa(2147483647); got != -1 {
+ fmt.Printf("mul_int32 2147483647*-2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int32_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("mul_int32 -1*-2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg1_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("mul_int32 -2147483648*-1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int32_ssa(-2147483647); got != 2147483647 {
+ fmt.Printf("mul_int32 -1*-2147483647 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg1_ssa(-2147483647); got != 2147483647 {
+ fmt.Printf("mul_int32 -2147483647*-1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int32_ssa(-1); got != 1 {
+ fmt.Printf("mul_int32 -1*-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg1_ssa(-1); got != 1 {
+ fmt.Printf("mul_int32 -1*-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int32_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 -1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg1_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 0*-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int32_ssa(1); got != -1 {
+ fmt.Printf("mul_int32 -1*1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg1_ssa(1); got != -1 {
+ fmt.Printf("mul_int32 1*-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int32_ssa(2147483647); got != -2147483647 {
+ fmt.Printf("mul_int32 -1*2147483647 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_Neg1_ssa(2147483647); got != -2147483647 {
+ fmt.Printf("mul_int32 2147483647*-1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int32_ssa(-2147483648); got != 0 {
+ fmt.Printf("mul_int32 0*-2147483648 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_0_ssa(-2147483648); got != 0 {
+ fmt.Printf("mul_int32 -2147483648*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int32_ssa(-2147483647); got != 0 {
+ fmt.Printf("mul_int32 0*-2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_0_ssa(-2147483647); got != 0 {
+ fmt.Printf("mul_int32 -2147483647*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int32_ssa(-1); got != 0 {
+ fmt.Printf("mul_int32 0*-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_0_ssa(-1); got != 0 {
+ fmt.Printf("mul_int32 -1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int32_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_0_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int32_ssa(1); got != 0 {
+ fmt.Printf("mul_int32 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_0_ssa(1); got != 0 {
+ fmt.Printf("mul_int32 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int32_ssa(2147483647); got != 0 {
+ fmt.Printf("mul_int32 0*2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_0_ssa(2147483647); got != 0 {
+ fmt.Printf("mul_int32 2147483647*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int32_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("mul_int32 1*-2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_1_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("mul_int32 -2147483648*1 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int32_ssa(-2147483647); got != -2147483647 {
+ fmt.Printf("mul_int32 1*-2147483647 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_1_ssa(-2147483647); got != -2147483647 {
+ fmt.Printf("mul_int32 -2147483647*1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int32_ssa(-1); got != -1 {
+ fmt.Printf("mul_int32 1*-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_1_ssa(-1); got != -1 {
+ fmt.Printf("mul_int32 -1*1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int32_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_1_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int32_ssa(1); got != 1 {
+ fmt.Printf("mul_int32 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_1_ssa(1); got != 1 {
+ fmt.Printf("mul_int32 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int32_ssa(2147483647); got != 2147483647 {
+ fmt.Printf("mul_int32 1*2147483647 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_1_ssa(2147483647); got != 2147483647 {
+ fmt.Printf("mul_int32 2147483647*1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_2147483647_int32_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("mul_int32 2147483647*-2147483648 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_2147483647_ssa(-2147483648); got != -2147483648 {
+ fmt.Printf("mul_int32 -2147483648*2147483647 = %d, wanted -2147483648\n", got)
+ failed = true
+ }
+
+ if got := mul_2147483647_int32_ssa(-2147483647); got != -1 {
+ fmt.Printf("mul_int32 2147483647*-2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_2147483647_ssa(-2147483647); got != -1 {
+ fmt.Printf("mul_int32 -2147483647*2147483647 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_2147483647_int32_ssa(-1); got != -2147483647 {
+ fmt.Printf("mul_int32 2147483647*-1 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_2147483647_ssa(-1); got != -2147483647 {
+ fmt.Printf("mul_int32 -1*2147483647 = %d, wanted -2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_2147483647_int32_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 2147483647*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_2147483647_ssa(0); got != 0 {
+ fmt.Printf("mul_int32 0*2147483647 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_2147483647_int32_ssa(1); got != 2147483647 {
+ fmt.Printf("mul_int32 2147483647*1 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_2147483647_ssa(1); got != 2147483647 {
+ fmt.Printf("mul_int32 1*2147483647 = %d, wanted 2147483647\n", got)
+ failed = true
+ }
+
+ if got := mul_2147483647_int32_ssa(2147483647); got != 1 {
+ fmt.Printf("mul_int32 2147483647*2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int32_2147483647_ssa(2147483647); got != 1 {
+ fmt.Printf("mul_int32 2147483647*2147483647 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint16_ssa(0); got != 0 {
+ fmt.Printf("add_uint16 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint16_0_ssa(0); got != 0 {
+ fmt.Printf("add_uint16 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint16_ssa(1); got != 1 {
+ fmt.Printf("add_uint16 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_uint16_0_ssa(1); got != 1 {
+ fmt.Printf("add_uint16 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint16_ssa(65535); got != 65535 {
+ fmt.Printf("add_uint16 0+65535 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := add_uint16_0_ssa(65535); got != 65535 {
+ fmt.Printf("add_uint16 65535+0 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint16_ssa(0); got != 1 {
+ fmt.Printf("add_uint16 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_uint16_1_ssa(0); got != 1 {
+ fmt.Printf("add_uint16 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint16_ssa(1); got != 2 {
+ fmt.Printf("add_uint16 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_uint16_1_ssa(1); got != 2 {
+ fmt.Printf("add_uint16 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint16_ssa(65535); got != 0 {
+ fmt.Printf("add_uint16 1+65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint16_1_ssa(65535); got != 0 {
+ fmt.Printf("add_uint16 65535+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_65535_uint16_ssa(0); got != 65535 {
+ fmt.Printf("add_uint16 65535+0 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := add_uint16_65535_ssa(0); got != 65535 {
+ fmt.Printf("add_uint16 0+65535 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := add_65535_uint16_ssa(1); got != 0 {
+ fmt.Printf("add_uint16 65535+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint16_65535_ssa(1); got != 0 {
+ fmt.Printf("add_uint16 1+65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_65535_uint16_ssa(65535); got != 65534 {
+ fmt.Printf("add_uint16 65535+65535 = %d, wanted 65534\n", got)
+ failed = true
+ }
+
+ if got := add_uint16_65535_ssa(65535); got != 65534 {
+ fmt.Printf("add_uint16 65535+65535 = %d, wanted 65534\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint16_ssa(0); got != 0 {
+ fmt.Printf("sub_uint16 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint16_0_ssa(0); got != 0 {
+ fmt.Printf("sub_uint16 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint16_ssa(1); got != 65535 {
+ fmt.Printf("sub_uint16 0-1 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := sub_uint16_0_ssa(1); got != 1 {
+ fmt.Printf("sub_uint16 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint16_ssa(65535); got != 1 {
+ fmt.Printf("sub_uint16 0-65535 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_uint16_0_ssa(65535); got != 65535 {
+ fmt.Printf("sub_uint16 65535-0 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint16_ssa(0); got != 1 {
+ fmt.Printf("sub_uint16 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_uint16_1_ssa(0); got != 65535 {
+ fmt.Printf("sub_uint16 0-1 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint16_ssa(1); got != 0 {
+ fmt.Printf("sub_uint16 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint16_1_ssa(1); got != 0 {
+ fmt.Printf("sub_uint16 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint16_ssa(65535); got != 2 {
+ fmt.Printf("sub_uint16 1-65535 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_uint16_1_ssa(65535); got != 65534 {
+ fmt.Printf("sub_uint16 65535-1 = %d, wanted 65534\n", got)
+ failed = true
+ }
+
+ if got := sub_65535_uint16_ssa(0); got != 65535 {
+ fmt.Printf("sub_uint16 65535-0 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := sub_uint16_65535_ssa(0); got != 1 {
+ fmt.Printf("sub_uint16 0-65535 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_65535_uint16_ssa(1); got != 65534 {
+ fmt.Printf("sub_uint16 65535-1 = %d, wanted 65534\n", got)
+ failed = true
+ }
+
+ if got := sub_uint16_65535_ssa(1); got != 2 {
+ fmt.Printf("sub_uint16 1-65535 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_65535_uint16_ssa(65535); got != 0 {
+ fmt.Printf("sub_uint16 65535-65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint16_65535_ssa(65535); got != 0 {
+ fmt.Printf("sub_uint16 65535-65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_uint16_ssa(1); got != 0 {
+ fmt.Printf("div_uint16 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_uint16_ssa(65535); got != 0 {
+ fmt.Printf("div_uint16 0/65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_uint16_1_ssa(0); got != 0 {
+ fmt.Printf("div_uint16 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_uint16_ssa(1); got != 1 {
+ fmt.Printf("div_uint16 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_uint16_1_ssa(1); got != 1 {
+ fmt.Printf("div_uint16 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_1_uint16_ssa(65535); got != 0 {
+ fmt.Printf("div_uint16 1/65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_uint16_1_ssa(65535); got != 65535 {
+ fmt.Printf("div_uint16 65535/1 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := div_uint16_65535_ssa(0); got != 0 {
+ fmt.Printf("div_uint16 0/65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_65535_uint16_ssa(1); got != 65535 {
+ fmt.Printf("div_uint16 65535/1 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := div_uint16_65535_ssa(1); got != 0 {
+ fmt.Printf("div_uint16 1/65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_65535_uint16_ssa(65535); got != 1 {
+ fmt.Printf("div_uint16 65535/65535 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_uint16_65535_ssa(65535); got != 1 {
+ fmt.Printf("div_uint16 65535/65535 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint16_ssa(0); got != 0 {
+ fmt.Printf("mul_uint16 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint16_0_ssa(0); got != 0 {
+ fmt.Printf("mul_uint16 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint16_ssa(1); got != 0 {
+ fmt.Printf("mul_uint16 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint16_0_ssa(1); got != 0 {
+ fmt.Printf("mul_uint16 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint16_ssa(65535); got != 0 {
+ fmt.Printf("mul_uint16 0*65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint16_0_ssa(65535); got != 0 {
+ fmt.Printf("mul_uint16 65535*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint16_ssa(0); got != 0 {
+ fmt.Printf("mul_uint16 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint16_1_ssa(0); got != 0 {
+ fmt.Printf("mul_uint16 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint16_ssa(1); got != 1 {
+ fmt.Printf("mul_uint16 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_uint16_1_ssa(1); got != 1 {
+ fmt.Printf("mul_uint16 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint16_ssa(65535); got != 65535 {
+ fmt.Printf("mul_uint16 1*65535 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := mul_uint16_1_ssa(65535); got != 65535 {
+ fmt.Printf("mul_uint16 65535*1 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := mul_65535_uint16_ssa(0); got != 0 {
+ fmt.Printf("mul_uint16 65535*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint16_65535_ssa(0); got != 0 {
+ fmt.Printf("mul_uint16 0*65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_65535_uint16_ssa(1); got != 65535 {
+ fmt.Printf("mul_uint16 65535*1 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := mul_uint16_65535_ssa(1); got != 65535 {
+ fmt.Printf("mul_uint16 1*65535 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := mul_65535_uint16_ssa(65535); got != 1 {
+ fmt.Printf("mul_uint16 65535*65535 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_uint16_65535_ssa(65535); got != 1 {
+ fmt.Printf("mul_uint16 65535*65535 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint16_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint16 0<<0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint16_0_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint16 0<<0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint16_ssa(1); got != 0 {
+ fmt.Printf("lsh_uint16 0<<1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint16_0_ssa(1); got != 1 {
+ fmt.Printf("lsh_uint16 1<<0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint16_ssa(65535); got != 0 {
+ fmt.Printf("lsh_uint16 0<<65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint16_0_ssa(65535); got != 65535 {
+ fmt.Printf("lsh_uint16 65535<<0 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint16_ssa(0); got != 1 {
+ fmt.Printf("lsh_uint16 1<<0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint16_1_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint16 0<<1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint16_ssa(1); got != 2 {
+ fmt.Printf("lsh_uint16 1<<1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint16_1_ssa(1); got != 2 {
+ fmt.Printf("lsh_uint16 1<<1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint16_ssa(65535); got != 0 {
+ fmt.Printf("lsh_uint16 1<<65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint16_1_ssa(65535); got != 65534 {
+ fmt.Printf("lsh_uint16 65535<<1 = %d, wanted 65534\n", got)
+ failed = true
+ }
+
+ if got := lsh_65535_uint16_ssa(0); got != 65535 {
+ fmt.Printf("lsh_uint16 65535<<0 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint16_65535_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint16 0<<65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_65535_uint16_ssa(1); got != 65534 {
+ fmt.Printf("lsh_uint16 65535<<1 = %d, wanted 65534\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint16_65535_ssa(1); got != 0 {
+ fmt.Printf("lsh_uint16 1<<65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_65535_uint16_ssa(65535); got != 0 {
+ fmt.Printf("lsh_uint16 65535<<65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint16_65535_ssa(65535); got != 0 {
+ fmt.Printf("lsh_uint16 65535<<65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint16_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint16 0>>0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint16_0_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint16 0>>0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint16_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint16 0>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint16_0_ssa(1); got != 1 {
+ fmt.Printf("rsh_uint16 1>>0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint16_ssa(65535); got != 0 {
+ fmt.Printf("rsh_uint16 0>>65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint16_0_ssa(65535); got != 65535 {
+ fmt.Printf("rsh_uint16 65535>>0 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint16_ssa(0); got != 1 {
+ fmt.Printf("rsh_uint16 1>>0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint16_1_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint16 0>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint16_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint16 1>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint16_1_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint16 1>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint16_ssa(65535); got != 0 {
+ fmt.Printf("rsh_uint16 1>>65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint16_1_ssa(65535); got != 32767 {
+ fmt.Printf("rsh_uint16 65535>>1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := rsh_65535_uint16_ssa(0); got != 65535 {
+ fmt.Printf("rsh_uint16 65535>>0 = %d, wanted 65535\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint16_65535_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint16 0>>65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_65535_uint16_ssa(1); got != 32767 {
+ fmt.Printf("rsh_uint16 65535>>1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint16_65535_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint16 1>>65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_65535_uint16_ssa(65535); got != 0 {
+ fmt.Printf("rsh_uint16 65535>>65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint16_65535_ssa(65535); got != 0 {
+ fmt.Printf("rsh_uint16 65535>>65535 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32768_int16_ssa(-32768); got != 0 {
+ fmt.Printf("add_int16 -32768+-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32768_ssa(-32768); got != 0 {
+ fmt.Printf("add_int16 -32768+-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32768_int16_ssa(-32767); got != 1 {
+ fmt.Printf("add_int16 -32768+-32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32768_ssa(-32767); got != 1 {
+ fmt.Printf("add_int16 -32767+-32768 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32768_int16_ssa(-1); got != 32767 {
+ fmt.Printf("add_int16 -32768+-1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32768_ssa(-1); got != 32767 {
+ fmt.Printf("add_int16 -1+-32768 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32768_int16_ssa(0); got != -32768 {
+ fmt.Printf("add_int16 -32768+0 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32768_ssa(0); got != -32768 {
+ fmt.Printf("add_int16 0+-32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32768_int16_ssa(1); got != -32767 {
+ fmt.Printf("add_int16 -32768+1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32768_ssa(1); got != -32767 {
+ fmt.Printf("add_int16 1+-32768 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32768_int16_ssa(32766); got != -2 {
+ fmt.Printf("add_int16 -32768+32766 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32768_ssa(32766); got != -2 {
+ fmt.Printf("add_int16 32766+-32768 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32768_int16_ssa(32767); got != -1 {
+ fmt.Printf("add_int16 -32768+32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32768_ssa(32767); got != -1 {
+ fmt.Printf("add_int16 32767+-32768 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32767_int16_ssa(-32768); got != 1 {
+ fmt.Printf("add_int16 -32767+-32768 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32767_ssa(-32768); got != 1 {
+ fmt.Printf("add_int16 -32768+-32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32767_int16_ssa(-32767); got != 2 {
+ fmt.Printf("add_int16 -32767+-32767 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32767_ssa(-32767); got != 2 {
+ fmt.Printf("add_int16 -32767+-32767 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32767_int16_ssa(-1); got != -32768 {
+ fmt.Printf("add_int16 -32767+-1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32767_ssa(-1); got != -32768 {
+ fmt.Printf("add_int16 -1+-32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32767_int16_ssa(0); got != -32767 {
+ fmt.Printf("add_int16 -32767+0 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32767_ssa(0); got != -32767 {
+ fmt.Printf("add_int16 0+-32767 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32767_int16_ssa(1); got != -32766 {
+ fmt.Printf("add_int16 -32767+1 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32767_ssa(1); got != -32766 {
+ fmt.Printf("add_int16 1+-32767 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32767_int16_ssa(32766); got != -1 {
+ fmt.Printf("add_int16 -32767+32766 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32767_ssa(32766); got != -1 {
+ fmt.Printf("add_int16 32766+-32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg32767_int16_ssa(32767); got != 0 {
+ fmt.Printf("add_int16 -32767+32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg32767_ssa(32767); got != 0 {
+ fmt.Printf("add_int16 32767+-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int16_ssa(-32768); got != 32767 {
+ fmt.Printf("add_int16 -1+-32768 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg1_ssa(-32768); got != 32767 {
+ fmt.Printf("add_int16 -32768+-1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int16_ssa(-32767); got != -32768 {
+ fmt.Printf("add_int16 -1+-32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg1_ssa(-32767); got != -32768 {
+ fmt.Printf("add_int16 -32767+-1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int16_ssa(-1); got != -2 {
+ fmt.Printf("add_int16 -1+-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg1_ssa(-1); got != -2 {
+ fmt.Printf("add_int16 -1+-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int16_ssa(0); got != -1 {
+ fmt.Printf("add_int16 -1+0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg1_ssa(0); got != -1 {
+ fmt.Printf("add_int16 0+-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int16_ssa(1); got != 0 {
+ fmt.Printf("add_int16 -1+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg1_ssa(1); got != 0 {
+ fmt.Printf("add_int16 1+-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int16_ssa(32766); got != 32765 {
+ fmt.Printf("add_int16 -1+32766 = %d, wanted 32765\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg1_ssa(32766); got != 32765 {
+ fmt.Printf("add_int16 32766+-1 = %d, wanted 32765\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int16_ssa(32767); got != 32766 {
+ fmt.Printf("add_int16 -1+32767 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := add_int16_Neg1_ssa(32767); got != 32766 {
+ fmt.Printf("add_int16 32767+-1 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := add_0_int16_ssa(-32768); got != -32768 {
+ fmt.Printf("add_int16 0+-32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_int16_0_ssa(-32768); got != -32768 {
+ fmt.Printf("add_int16 -32768+0 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_0_int16_ssa(-32767); got != -32767 {
+ fmt.Printf("add_int16 0+-32767 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := add_int16_0_ssa(-32767); got != -32767 {
+ fmt.Printf("add_int16 -32767+0 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := add_0_int16_ssa(-1); got != -1 {
+ fmt.Printf("add_int16 0+-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int16_0_ssa(-1); got != -1 {
+ fmt.Printf("add_int16 -1+0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_0_int16_ssa(0); got != 0 {
+ fmt.Printf("add_int16 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int16_0_ssa(0); got != 0 {
+ fmt.Printf("add_int16 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_0_int16_ssa(1); got != 1 {
+ fmt.Printf("add_int16 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int16_0_ssa(1); got != 1 {
+ fmt.Printf("add_int16 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_0_int16_ssa(32766); got != 32766 {
+ fmt.Printf("add_int16 0+32766 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := add_int16_0_ssa(32766); got != 32766 {
+ fmt.Printf("add_int16 32766+0 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := add_0_int16_ssa(32767); got != 32767 {
+ fmt.Printf("add_int16 0+32767 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_int16_0_ssa(32767); got != 32767 {
+ fmt.Printf("add_int16 32767+0 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_1_int16_ssa(-32768); got != -32767 {
+ fmt.Printf("add_int16 1+-32768 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := add_int16_1_ssa(-32768); got != -32767 {
+ fmt.Printf("add_int16 -32768+1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := add_1_int16_ssa(-32767); got != -32766 {
+ fmt.Printf("add_int16 1+-32767 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := add_int16_1_ssa(-32767); got != -32766 {
+ fmt.Printf("add_int16 -32767+1 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := add_1_int16_ssa(-1); got != 0 {
+ fmt.Printf("add_int16 1+-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int16_1_ssa(-1); got != 0 {
+ fmt.Printf("add_int16 -1+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_1_int16_ssa(0); got != 1 {
+ fmt.Printf("add_int16 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int16_1_ssa(0); got != 1 {
+ fmt.Printf("add_int16 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_1_int16_ssa(1); got != 2 {
+ fmt.Printf("add_int16 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_int16_1_ssa(1); got != 2 {
+ fmt.Printf("add_int16 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_1_int16_ssa(32766); got != 32767 {
+ fmt.Printf("add_int16 1+32766 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_int16_1_ssa(32766); got != 32767 {
+ fmt.Printf("add_int16 32766+1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_1_int16_ssa(32767); got != -32768 {
+ fmt.Printf("add_int16 1+32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_int16_1_ssa(32767); got != -32768 {
+ fmt.Printf("add_int16 32767+1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_32766_int16_ssa(-32768); got != -2 {
+ fmt.Printf("add_int16 32766+-32768 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32766_ssa(-32768); got != -2 {
+ fmt.Printf("add_int16 -32768+32766 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_32766_int16_ssa(-32767); got != -1 {
+ fmt.Printf("add_int16 32766+-32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32766_ssa(-32767); got != -1 {
+ fmt.Printf("add_int16 -32767+32766 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_32766_int16_ssa(-1); got != 32765 {
+ fmt.Printf("add_int16 32766+-1 = %d, wanted 32765\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32766_ssa(-1); got != 32765 {
+ fmt.Printf("add_int16 -1+32766 = %d, wanted 32765\n", got)
+ failed = true
+ }
+
+ if got := add_32766_int16_ssa(0); got != 32766 {
+ fmt.Printf("add_int16 32766+0 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32766_ssa(0); got != 32766 {
+ fmt.Printf("add_int16 0+32766 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := add_32766_int16_ssa(1); got != 32767 {
+ fmt.Printf("add_int16 32766+1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32766_ssa(1); got != 32767 {
+ fmt.Printf("add_int16 1+32766 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_32766_int16_ssa(32766); got != -4 {
+ fmt.Printf("add_int16 32766+32766 = %d, wanted -4\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32766_ssa(32766); got != -4 {
+ fmt.Printf("add_int16 32766+32766 = %d, wanted -4\n", got)
+ failed = true
+ }
+
+ if got := add_32766_int16_ssa(32767); got != -3 {
+ fmt.Printf("add_int16 32766+32767 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32766_ssa(32767); got != -3 {
+ fmt.Printf("add_int16 32767+32766 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_32767_int16_ssa(-32768); got != -1 {
+ fmt.Printf("add_int16 32767+-32768 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32767_ssa(-32768); got != -1 {
+ fmt.Printf("add_int16 -32768+32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_32767_int16_ssa(-32767); got != 0 {
+ fmt.Printf("add_int16 32767+-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32767_ssa(-32767); got != 0 {
+ fmt.Printf("add_int16 -32767+32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_32767_int16_ssa(-1); got != 32766 {
+ fmt.Printf("add_int16 32767+-1 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32767_ssa(-1); got != 32766 {
+ fmt.Printf("add_int16 -1+32767 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := add_32767_int16_ssa(0); got != 32767 {
+ fmt.Printf("add_int16 32767+0 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32767_ssa(0); got != 32767 {
+ fmt.Printf("add_int16 0+32767 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := add_32767_int16_ssa(1); got != -32768 {
+ fmt.Printf("add_int16 32767+1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32767_ssa(1); got != -32768 {
+ fmt.Printf("add_int16 1+32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := add_32767_int16_ssa(32766); got != -3 {
+ fmt.Printf("add_int16 32767+32766 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32767_ssa(32766); got != -3 {
+ fmt.Printf("add_int16 32766+32767 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_32767_int16_ssa(32767); got != -2 {
+ fmt.Printf("add_int16 32767+32767 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int16_32767_ssa(32767); got != -2 {
+ fmt.Printf("add_int16 32767+32767 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32768_int16_ssa(-32768); got != 0 {
+ fmt.Printf("sub_int16 -32768--32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32768_ssa(-32768); got != 0 {
+ fmt.Printf("sub_int16 -32768--32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32768_int16_ssa(-32767); got != -1 {
+ fmt.Printf("sub_int16 -32768--32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32768_ssa(-32767); got != 1 {
+ fmt.Printf("sub_int16 -32767--32768 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32768_int16_ssa(-1); got != -32767 {
+ fmt.Printf("sub_int16 -32768--1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32768_ssa(-1); got != 32767 {
+ fmt.Printf("sub_int16 -1--32768 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32768_int16_ssa(0); got != -32768 {
+ fmt.Printf("sub_int16 -32768-0 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32768_ssa(0); got != -32768 {
+ fmt.Printf("sub_int16 0--32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32768_int16_ssa(1); got != 32767 {
+ fmt.Printf("sub_int16 -32768-1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32768_ssa(1); got != -32767 {
+ fmt.Printf("sub_int16 1--32768 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32768_int16_ssa(32766); got != 2 {
+ fmt.Printf("sub_int16 -32768-32766 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32768_ssa(32766); got != -2 {
+ fmt.Printf("sub_int16 32766--32768 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32768_int16_ssa(32767); got != 1 {
+ fmt.Printf("sub_int16 -32768-32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32768_ssa(32767); got != -1 {
+ fmt.Printf("sub_int16 32767--32768 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32767_int16_ssa(-32768); got != 1 {
+ fmt.Printf("sub_int16 -32767--32768 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32767_ssa(-32768); got != -1 {
+ fmt.Printf("sub_int16 -32768--32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32767_int16_ssa(-32767); got != 0 {
+ fmt.Printf("sub_int16 -32767--32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32767_ssa(-32767); got != 0 {
+ fmt.Printf("sub_int16 -32767--32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32767_int16_ssa(-1); got != -32766 {
+ fmt.Printf("sub_int16 -32767--1 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32767_ssa(-1); got != 32766 {
+ fmt.Printf("sub_int16 -1--32767 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32767_int16_ssa(0); got != -32767 {
+ fmt.Printf("sub_int16 -32767-0 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32767_ssa(0); got != 32767 {
+ fmt.Printf("sub_int16 0--32767 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32767_int16_ssa(1); got != -32768 {
+ fmt.Printf("sub_int16 -32767-1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32767_ssa(1); got != -32768 {
+ fmt.Printf("sub_int16 1--32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32767_int16_ssa(32766); got != 3 {
+ fmt.Printf("sub_int16 -32767-32766 = %d, wanted 3\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32767_ssa(32766); got != -3 {
+ fmt.Printf("sub_int16 32766--32767 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg32767_int16_ssa(32767); got != 2 {
+ fmt.Printf("sub_int16 -32767-32767 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg32767_ssa(32767); got != -2 {
+ fmt.Printf("sub_int16 32767--32767 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int16_ssa(-32768); got != 32767 {
+ fmt.Printf("sub_int16 -1--32768 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg1_ssa(-32768); got != -32767 {
+ fmt.Printf("sub_int16 -32768--1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int16_ssa(-32767); got != 32766 {
+ fmt.Printf("sub_int16 -1--32767 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg1_ssa(-32767); got != -32766 {
+ fmt.Printf("sub_int16 -32767--1 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int16_ssa(-1); got != 0 {
+ fmt.Printf("sub_int16 -1--1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg1_ssa(-1); got != 0 {
+ fmt.Printf("sub_int16 -1--1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int16_ssa(0); got != -1 {
+ fmt.Printf("sub_int16 -1-0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg1_ssa(0); got != 1 {
+ fmt.Printf("sub_int16 0--1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int16_ssa(1); got != -2 {
+ fmt.Printf("sub_int16 -1-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg1_ssa(1); got != 2 {
+ fmt.Printf("sub_int16 1--1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int16_ssa(32766); got != -32767 {
+ fmt.Printf("sub_int16 -1-32766 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg1_ssa(32766); got != 32767 {
+ fmt.Printf("sub_int16 32766--1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int16_ssa(32767); got != -32768 {
+ fmt.Printf("sub_int16 -1-32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_Neg1_ssa(32767); got != -32768 {
+ fmt.Printf("sub_int16 32767--1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int16_ssa(-32768); got != -32768 {
+ fmt.Printf("sub_int16 0--32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_0_ssa(-32768); got != -32768 {
+ fmt.Printf("sub_int16 -32768-0 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int16_ssa(-32767); got != 32767 {
+ fmt.Printf("sub_int16 0--32767 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_0_ssa(-32767); got != -32767 {
+ fmt.Printf("sub_int16 -32767-0 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int16_ssa(-1); got != 1 {
+ fmt.Printf("sub_int16 0--1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_0_ssa(-1); got != -1 {
+ fmt.Printf("sub_int16 -1-0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int16_ssa(0); got != 0 {
+ fmt.Printf("sub_int16 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_0_ssa(0); got != 0 {
+ fmt.Printf("sub_int16 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int16_ssa(1); got != -1 {
+ fmt.Printf("sub_int16 0-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_0_ssa(1); got != 1 {
+ fmt.Printf("sub_int16 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int16_ssa(32766); got != -32766 {
+ fmt.Printf("sub_int16 0-32766 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_0_ssa(32766); got != 32766 {
+ fmt.Printf("sub_int16 32766-0 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int16_ssa(32767); got != -32767 {
+ fmt.Printf("sub_int16 0-32767 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_0_ssa(32767); got != 32767 {
+ fmt.Printf("sub_int16 32767-0 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int16_ssa(-32768); got != -32767 {
+ fmt.Printf("sub_int16 1--32768 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_1_ssa(-32768); got != 32767 {
+ fmt.Printf("sub_int16 -32768-1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int16_ssa(-32767); got != -32768 {
+ fmt.Printf("sub_int16 1--32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_1_ssa(-32767); got != -32768 {
+ fmt.Printf("sub_int16 -32767-1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int16_ssa(-1); got != 2 {
+ fmt.Printf("sub_int16 1--1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_1_ssa(-1); got != -2 {
+ fmt.Printf("sub_int16 -1-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int16_ssa(0); got != 1 {
+ fmt.Printf("sub_int16 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_1_ssa(0); got != -1 {
+ fmt.Printf("sub_int16 0-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int16_ssa(1); got != 0 {
+ fmt.Printf("sub_int16 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_1_ssa(1); got != 0 {
+ fmt.Printf("sub_int16 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int16_ssa(32766); got != -32765 {
+ fmt.Printf("sub_int16 1-32766 = %d, wanted -32765\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_1_ssa(32766); got != 32765 {
+ fmt.Printf("sub_int16 32766-1 = %d, wanted 32765\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int16_ssa(32767); got != -32766 {
+ fmt.Printf("sub_int16 1-32767 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_1_ssa(32767); got != 32766 {
+ fmt.Printf("sub_int16 32767-1 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := sub_32766_int16_ssa(-32768); got != -2 {
+ fmt.Printf("sub_int16 32766--32768 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32766_ssa(-32768); got != 2 {
+ fmt.Printf("sub_int16 -32768-32766 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_32766_int16_ssa(-32767); got != -3 {
+ fmt.Printf("sub_int16 32766--32767 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32766_ssa(-32767); got != 3 {
+ fmt.Printf("sub_int16 -32767-32766 = %d, wanted 3\n", got)
+ failed = true
+ }
+
+ if got := sub_32766_int16_ssa(-1); got != 32767 {
+ fmt.Printf("sub_int16 32766--1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32766_ssa(-1); got != -32767 {
+ fmt.Printf("sub_int16 -1-32766 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := sub_32766_int16_ssa(0); got != 32766 {
+ fmt.Printf("sub_int16 32766-0 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32766_ssa(0); got != -32766 {
+ fmt.Printf("sub_int16 0-32766 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := sub_32766_int16_ssa(1); got != 32765 {
+ fmt.Printf("sub_int16 32766-1 = %d, wanted 32765\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32766_ssa(1); got != -32765 {
+ fmt.Printf("sub_int16 1-32766 = %d, wanted -32765\n", got)
+ failed = true
+ }
+
+ if got := sub_32766_int16_ssa(32766); got != 0 {
+ fmt.Printf("sub_int16 32766-32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32766_ssa(32766); got != 0 {
+ fmt.Printf("sub_int16 32766-32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_32766_int16_ssa(32767); got != -1 {
+ fmt.Printf("sub_int16 32766-32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32766_ssa(32767); got != 1 {
+ fmt.Printf("sub_int16 32767-32766 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_32767_int16_ssa(-32768); got != -1 {
+ fmt.Printf("sub_int16 32767--32768 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32767_ssa(-32768); got != 1 {
+ fmt.Printf("sub_int16 -32768-32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_32767_int16_ssa(-32767); got != -2 {
+ fmt.Printf("sub_int16 32767--32767 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32767_ssa(-32767); got != 2 {
+ fmt.Printf("sub_int16 -32767-32767 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_32767_int16_ssa(-1); got != -32768 {
+ fmt.Printf("sub_int16 32767--1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32767_ssa(-1); got != -32768 {
+ fmt.Printf("sub_int16 -1-32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := sub_32767_int16_ssa(0); got != 32767 {
+ fmt.Printf("sub_int16 32767-0 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32767_ssa(0); got != -32767 {
+ fmt.Printf("sub_int16 0-32767 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := sub_32767_int16_ssa(1); got != 32766 {
+ fmt.Printf("sub_int16 32767-1 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32767_ssa(1); got != -32766 {
+ fmt.Printf("sub_int16 1-32767 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := sub_32767_int16_ssa(32766); got != 1 {
+ fmt.Printf("sub_int16 32767-32766 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32767_ssa(32766); got != -1 {
+ fmt.Printf("sub_int16 32766-32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_32767_int16_ssa(32767); got != 0 {
+ fmt.Printf("sub_int16 32767-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int16_32767_ssa(32767); got != 0 {
+ fmt.Printf("sub_int16 32767-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32768_int16_ssa(-32768); got != 1 {
+ fmt.Printf("div_int16 -32768/-32768 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32768_ssa(-32768); got != 1 {
+ fmt.Printf("div_int16 -32768/-32768 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32768_int16_ssa(-32767); got != 1 {
+ fmt.Printf("div_int16 -32768/-32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32768_ssa(-32767); got != 0 {
+ fmt.Printf("div_int16 -32767/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32768_int16_ssa(-1); got != -32768 {
+ fmt.Printf("div_int16 -32768/-1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32768_ssa(-1); got != 0 {
+ fmt.Printf("div_int16 -1/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32768_ssa(0); got != 0 {
+ fmt.Printf("div_int16 0/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32768_int16_ssa(1); got != -32768 {
+ fmt.Printf("div_int16 -32768/1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32768_ssa(1); got != 0 {
+ fmt.Printf("div_int16 1/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32768_int16_ssa(32766); got != -1 {
+ fmt.Printf("div_int16 -32768/32766 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32768_ssa(32766); got != 0 {
+ fmt.Printf("div_int16 32766/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32768_int16_ssa(32767); got != -1 {
+ fmt.Printf("div_int16 -32768/32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32768_ssa(32767); got != 0 {
+ fmt.Printf("div_int16 32767/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32767_int16_ssa(-32768); got != 0 {
+ fmt.Printf("div_int16 -32767/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32767_ssa(-32768); got != 1 {
+ fmt.Printf("div_int16 -32768/-32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32767_int16_ssa(-32767); got != 1 {
+ fmt.Printf("div_int16 -32767/-32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32767_ssa(-32767); got != 1 {
+ fmt.Printf("div_int16 -32767/-32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32767_int16_ssa(-1); got != 32767 {
+ fmt.Printf("div_int16 -32767/-1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32767_ssa(-1); got != 0 {
+ fmt.Printf("div_int16 -1/-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32767_ssa(0); got != 0 {
+ fmt.Printf("div_int16 0/-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32767_int16_ssa(1); got != -32767 {
+ fmt.Printf("div_int16 -32767/1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32767_ssa(1); got != 0 {
+ fmt.Printf("div_int16 1/-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32767_int16_ssa(32766); got != -1 {
+ fmt.Printf("div_int16 -32767/32766 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32767_ssa(32766); got != 0 {
+ fmt.Printf("div_int16 32766/-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg32767_int16_ssa(32767); got != -1 {
+ fmt.Printf("div_int16 -32767/32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg32767_ssa(32767); got != -1 {
+ fmt.Printf("div_int16 32767/-32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int16_ssa(-32768); got != 0 {
+ fmt.Printf("div_int16 -1/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg1_ssa(-32768); got != -32768 {
+ fmt.Printf("div_int16 -32768/-1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int16_ssa(-32767); got != 0 {
+ fmt.Printf("div_int16 -1/-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg1_ssa(-32767); got != 32767 {
+ fmt.Printf("div_int16 -32767/-1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int16_ssa(-1); got != 1 {
+ fmt.Printf("div_int16 -1/-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg1_ssa(-1); got != 1 {
+ fmt.Printf("div_int16 -1/-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg1_ssa(0); got != 0 {
+ fmt.Printf("div_int16 0/-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int16_ssa(1); got != -1 {
+ fmt.Printf("div_int16 -1/1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg1_ssa(1); got != -1 {
+ fmt.Printf("div_int16 1/-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int16_ssa(32766); got != 0 {
+ fmt.Printf("div_int16 -1/32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg1_ssa(32766); got != -32766 {
+ fmt.Printf("div_int16 32766/-1 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int16_ssa(32767); got != 0 {
+ fmt.Printf("div_int16 -1/32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_Neg1_ssa(32767); got != -32767 {
+ fmt.Printf("div_int16 32767/-1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := div_0_int16_ssa(-32768); got != 0 {
+ fmt.Printf("div_int16 0/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int16_ssa(-32767); got != 0 {
+ fmt.Printf("div_int16 0/-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int16_ssa(-1); got != 0 {
+ fmt.Printf("div_int16 0/-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int16_ssa(1); got != 0 {
+ fmt.Printf("div_int16 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int16_ssa(32766); got != 0 {
+ fmt.Printf("div_int16 0/32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int16_ssa(32767); got != 0 {
+ fmt.Printf("div_int16 0/32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_int16_ssa(-32768); got != 0 {
+ fmt.Printf("div_int16 1/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_1_ssa(-32768); got != -32768 {
+ fmt.Printf("div_int16 -32768/1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := div_1_int16_ssa(-32767); got != 0 {
+ fmt.Printf("div_int16 1/-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_1_ssa(-32767); got != -32767 {
+ fmt.Printf("div_int16 -32767/1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := div_1_int16_ssa(-1); got != -1 {
+ fmt.Printf("div_int16 1/-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_1_ssa(-1); got != -1 {
+ fmt.Printf("div_int16 -1/1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_1_ssa(0); got != 0 {
+ fmt.Printf("div_int16 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_int16_ssa(1); got != 1 {
+ fmt.Printf("div_int16 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_1_ssa(1); got != 1 {
+ fmt.Printf("div_int16 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_1_int16_ssa(32766); got != 0 {
+ fmt.Printf("div_int16 1/32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_1_ssa(32766); got != 32766 {
+ fmt.Printf("div_int16 32766/1 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := div_1_int16_ssa(32767); got != 0 {
+ fmt.Printf("div_int16 1/32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_1_ssa(32767); got != 32767 {
+ fmt.Printf("div_int16 32767/1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := div_32766_int16_ssa(-32768); got != 0 {
+ fmt.Printf("div_int16 32766/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32766_ssa(-32768); got != -1 {
+ fmt.Printf("div_int16 -32768/32766 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_32766_int16_ssa(-32767); got != 0 {
+ fmt.Printf("div_int16 32766/-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32766_ssa(-32767); got != -1 {
+ fmt.Printf("div_int16 -32767/32766 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_32766_int16_ssa(-1); got != -32766 {
+ fmt.Printf("div_int16 32766/-1 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32766_ssa(-1); got != 0 {
+ fmt.Printf("div_int16 -1/32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32766_ssa(0); got != 0 {
+ fmt.Printf("div_int16 0/32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_32766_int16_ssa(1); got != 32766 {
+ fmt.Printf("div_int16 32766/1 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32766_ssa(1); got != 0 {
+ fmt.Printf("div_int16 1/32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_32766_int16_ssa(32766); got != 1 {
+ fmt.Printf("div_int16 32766/32766 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32766_ssa(32766); got != 1 {
+ fmt.Printf("div_int16 32766/32766 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_32766_int16_ssa(32767); got != 0 {
+ fmt.Printf("div_int16 32766/32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32766_ssa(32767); got != 1 {
+ fmt.Printf("div_int16 32767/32766 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_32767_int16_ssa(-32768); got != 0 {
+ fmt.Printf("div_int16 32767/-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32767_ssa(-32768); got != -1 {
+ fmt.Printf("div_int16 -32768/32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_32767_int16_ssa(-32767); got != -1 {
+ fmt.Printf("div_int16 32767/-32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32767_ssa(-32767); got != -1 {
+ fmt.Printf("div_int16 -32767/32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_32767_int16_ssa(-1); got != -32767 {
+ fmt.Printf("div_int16 32767/-1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32767_ssa(-1); got != 0 {
+ fmt.Printf("div_int16 -1/32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32767_ssa(0); got != 0 {
+ fmt.Printf("div_int16 0/32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_32767_int16_ssa(1); got != 32767 {
+ fmt.Printf("div_int16 32767/1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32767_ssa(1); got != 0 {
+ fmt.Printf("div_int16 1/32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_32767_int16_ssa(32766); got != 1 {
+ fmt.Printf("div_int16 32767/32766 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32767_ssa(32766); got != 0 {
+ fmt.Printf("div_int16 32766/32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_32767_int16_ssa(32767); got != 1 {
+ fmt.Printf("div_int16 32767/32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int16_32767_ssa(32767); got != 1 {
+ fmt.Printf("div_int16 32767/32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32768_int16_ssa(-32768); got != 0 {
+ fmt.Printf("mul_int16 -32768*-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32768_ssa(-32768); got != 0 {
+ fmt.Printf("mul_int16 -32768*-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32768_int16_ssa(-32767); got != -32768 {
+ fmt.Printf("mul_int16 -32768*-32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32768_ssa(-32767); got != -32768 {
+ fmt.Printf("mul_int16 -32767*-32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32768_int16_ssa(-1); got != -32768 {
+ fmt.Printf("mul_int16 -32768*-1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32768_ssa(-1); got != -32768 {
+ fmt.Printf("mul_int16 -1*-32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32768_int16_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 -32768*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32768_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 0*-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32768_int16_ssa(1); got != -32768 {
+ fmt.Printf("mul_int16 -32768*1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32768_ssa(1); got != -32768 {
+ fmt.Printf("mul_int16 1*-32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32768_int16_ssa(32766); got != 0 {
+ fmt.Printf("mul_int16 -32768*32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32768_ssa(32766); got != 0 {
+ fmt.Printf("mul_int16 32766*-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32768_int16_ssa(32767); got != -32768 {
+ fmt.Printf("mul_int16 -32768*32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32768_ssa(32767); got != -32768 {
+ fmt.Printf("mul_int16 32767*-32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32767_int16_ssa(-32768); got != -32768 {
+ fmt.Printf("mul_int16 -32767*-32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32767_ssa(-32768); got != -32768 {
+ fmt.Printf("mul_int16 -32768*-32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32767_int16_ssa(-32767); got != 1 {
+ fmt.Printf("mul_int16 -32767*-32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32767_ssa(-32767); got != 1 {
+ fmt.Printf("mul_int16 -32767*-32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32767_int16_ssa(-1); got != 32767 {
+ fmt.Printf("mul_int16 -32767*-1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32767_ssa(-1); got != 32767 {
+ fmt.Printf("mul_int16 -1*-32767 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32767_int16_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 -32767*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32767_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 0*-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32767_int16_ssa(1); got != -32767 {
+ fmt.Printf("mul_int16 -32767*1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32767_ssa(1); got != -32767 {
+ fmt.Printf("mul_int16 1*-32767 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32767_int16_ssa(32766); got != 32766 {
+ fmt.Printf("mul_int16 -32767*32766 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32767_ssa(32766); got != 32766 {
+ fmt.Printf("mul_int16 32766*-32767 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg32767_int16_ssa(32767); got != -1 {
+ fmt.Printf("mul_int16 -32767*32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg32767_ssa(32767); got != -1 {
+ fmt.Printf("mul_int16 32767*-32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int16_ssa(-32768); got != -32768 {
+ fmt.Printf("mul_int16 -1*-32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg1_ssa(-32768); got != -32768 {
+ fmt.Printf("mul_int16 -32768*-1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int16_ssa(-32767); got != 32767 {
+ fmt.Printf("mul_int16 -1*-32767 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg1_ssa(-32767); got != 32767 {
+ fmt.Printf("mul_int16 -32767*-1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int16_ssa(-1); got != 1 {
+ fmt.Printf("mul_int16 -1*-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg1_ssa(-1); got != 1 {
+ fmt.Printf("mul_int16 -1*-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int16_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 -1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg1_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 0*-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int16_ssa(1); got != -1 {
+ fmt.Printf("mul_int16 -1*1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg1_ssa(1); got != -1 {
+ fmt.Printf("mul_int16 1*-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int16_ssa(32766); got != -32766 {
+ fmt.Printf("mul_int16 -1*32766 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg1_ssa(32766); got != -32766 {
+ fmt.Printf("mul_int16 32766*-1 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int16_ssa(32767); got != -32767 {
+ fmt.Printf("mul_int16 -1*32767 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_Neg1_ssa(32767); got != -32767 {
+ fmt.Printf("mul_int16 32767*-1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int16_ssa(-32768); got != 0 {
+ fmt.Printf("mul_int16 0*-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_0_ssa(-32768); got != 0 {
+ fmt.Printf("mul_int16 -32768*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int16_ssa(-32767); got != 0 {
+ fmt.Printf("mul_int16 0*-32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_0_ssa(-32767); got != 0 {
+ fmt.Printf("mul_int16 -32767*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int16_ssa(-1); got != 0 {
+ fmt.Printf("mul_int16 0*-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_0_ssa(-1); got != 0 {
+ fmt.Printf("mul_int16 -1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int16_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_0_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int16_ssa(1); got != 0 {
+ fmt.Printf("mul_int16 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_0_ssa(1); got != 0 {
+ fmt.Printf("mul_int16 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int16_ssa(32766); got != 0 {
+ fmt.Printf("mul_int16 0*32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_0_ssa(32766); got != 0 {
+ fmt.Printf("mul_int16 32766*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int16_ssa(32767); got != 0 {
+ fmt.Printf("mul_int16 0*32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_0_ssa(32767); got != 0 {
+ fmt.Printf("mul_int16 32767*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int16_ssa(-32768); got != -32768 {
+ fmt.Printf("mul_int16 1*-32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_1_ssa(-32768); got != -32768 {
+ fmt.Printf("mul_int16 -32768*1 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int16_ssa(-32767); got != -32767 {
+ fmt.Printf("mul_int16 1*-32767 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_1_ssa(-32767); got != -32767 {
+ fmt.Printf("mul_int16 -32767*1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int16_ssa(-1); got != -1 {
+ fmt.Printf("mul_int16 1*-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_1_ssa(-1); got != -1 {
+ fmt.Printf("mul_int16 -1*1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int16_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_1_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int16_ssa(1); got != 1 {
+ fmt.Printf("mul_int16 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_1_ssa(1); got != 1 {
+ fmt.Printf("mul_int16 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int16_ssa(32766); got != 32766 {
+ fmt.Printf("mul_int16 1*32766 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_1_ssa(32766); got != 32766 {
+ fmt.Printf("mul_int16 32766*1 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int16_ssa(32767); got != 32767 {
+ fmt.Printf("mul_int16 1*32767 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_1_ssa(32767); got != 32767 {
+ fmt.Printf("mul_int16 32767*1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := mul_32766_int16_ssa(-32768); got != 0 {
+ fmt.Printf("mul_int16 32766*-32768 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32766_ssa(-32768); got != 0 {
+ fmt.Printf("mul_int16 -32768*32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_32766_int16_ssa(-32767); got != 32766 {
+ fmt.Printf("mul_int16 32766*-32767 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32766_ssa(-32767); got != 32766 {
+ fmt.Printf("mul_int16 -32767*32766 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := mul_32766_int16_ssa(-1); got != -32766 {
+ fmt.Printf("mul_int16 32766*-1 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32766_ssa(-1); got != -32766 {
+ fmt.Printf("mul_int16 -1*32766 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := mul_32766_int16_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 32766*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32766_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 0*32766 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_32766_int16_ssa(1); got != 32766 {
+ fmt.Printf("mul_int16 32766*1 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32766_ssa(1); got != 32766 {
+ fmt.Printf("mul_int16 1*32766 = %d, wanted 32766\n", got)
+ failed = true
+ }
+
+ if got := mul_32766_int16_ssa(32766); got != 4 {
+ fmt.Printf("mul_int16 32766*32766 = %d, wanted 4\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32766_ssa(32766); got != 4 {
+ fmt.Printf("mul_int16 32766*32766 = %d, wanted 4\n", got)
+ failed = true
+ }
+
+ if got := mul_32766_int16_ssa(32767); got != -32766 {
+ fmt.Printf("mul_int16 32766*32767 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32766_ssa(32767); got != -32766 {
+ fmt.Printf("mul_int16 32767*32766 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := mul_32767_int16_ssa(-32768); got != -32768 {
+ fmt.Printf("mul_int16 32767*-32768 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32767_ssa(-32768); got != -32768 {
+ fmt.Printf("mul_int16 -32768*32767 = %d, wanted -32768\n", got)
+ failed = true
+ }
+
+ if got := mul_32767_int16_ssa(-32767); got != -1 {
+ fmt.Printf("mul_int16 32767*-32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32767_ssa(-32767); got != -1 {
+ fmt.Printf("mul_int16 -32767*32767 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_32767_int16_ssa(-1); got != -32767 {
+ fmt.Printf("mul_int16 32767*-1 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32767_ssa(-1); got != -32767 {
+ fmt.Printf("mul_int16 -1*32767 = %d, wanted -32767\n", got)
+ failed = true
+ }
+
+ if got := mul_32767_int16_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 32767*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32767_ssa(0); got != 0 {
+ fmt.Printf("mul_int16 0*32767 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_32767_int16_ssa(1); got != 32767 {
+ fmt.Printf("mul_int16 32767*1 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32767_ssa(1); got != 32767 {
+ fmt.Printf("mul_int16 1*32767 = %d, wanted 32767\n", got)
+ failed = true
+ }
+
+ if got := mul_32767_int16_ssa(32766); got != -32766 {
+ fmt.Printf("mul_int16 32767*32766 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32767_ssa(32766); got != -32766 {
+ fmt.Printf("mul_int16 32766*32767 = %d, wanted -32766\n", got)
+ failed = true
+ }
+
+ if got := mul_32767_int16_ssa(32767); got != 1 {
+ fmt.Printf("mul_int16 32767*32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int16_32767_ssa(32767); got != 1 {
+ fmt.Printf("mul_int16 32767*32767 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint8_ssa(0); got != 0 {
+ fmt.Printf("add_uint8 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint8_0_ssa(0); got != 0 {
+ fmt.Printf("add_uint8 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint8_ssa(1); got != 1 {
+ fmt.Printf("add_uint8 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_uint8_0_ssa(1); got != 1 {
+ fmt.Printf("add_uint8 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_0_uint8_ssa(255); got != 255 {
+ fmt.Printf("add_uint8 0+255 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := add_uint8_0_ssa(255); got != 255 {
+ fmt.Printf("add_uint8 255+0 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint8_ssa(0); got != 1 {
+ fmt.Printf("add_uint8 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_uint8_1_ssa(0); got != 1 {
+ fmt.Printf("add_uint8 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint8_ssa(1); got != 2 {
+ fmt.Printf("add_uint8 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_uint8_1_ssa(1); got != 2 {
+ fmt.Printf("add_uint8 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_1_uint8_ssa(255); got != 0 {
+ fmt.Printf("add_uint8 1+255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint8_1_ssa(255); got != 0 {
+ fmt.Printf("add_uint8 255+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_255_uint8_ssa(0); got != 255 {
+ fmt.Printf("add_uint8 255+0 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := add_uint8_255_ssa(0); got != 255 {
+ fmt.Printf("add_uint8 0+255 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := add_255_uint8_ssa(1); got != 0 {
+ fmt.Printf("add_uint8 255+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_uint8_255_ssa(1); got != 0 {
+ fmt.Printf("add_uint8 1+255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_255_uint8_ssa(255); got != 254 {
+ fmt.Printf("add_uint8 255+255 = %d, wanted 254\n", got)
+ failed = true
+ }
+
+ if got := add_uint8_255_ssa(255); got != 254 {
+ fmt.Printf("add_uint8 255+255 = %d, wanted 254\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint8_ssa(0); got != 0 {
+ fmt.Printf("sub_uint8 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint8_0_ssa(0); got != 0 {
+ fmt.Printf("sub_uint8 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint8_ssa(1); got != 255 {
+ fmt.Printf("sub_uint8 0-1 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := sub_uint8_0_ssa(1); got != 1 {
+ fmt.Printf("sub_uint8 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_uint8_ssa(255); got != 1 {
+ fmt.Printf("sub_uint8 0-255 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_uint8_0_ssa(255); got != 255 {
+ fmt.Printf("sub_uint8 255-0 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint8_ssa(0); got != 1 {
+ fmt.Printf("sub_uint8 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_uint8_1_ssa(0); got != 255 {
+ fmt.Printf("sub_uint8 0-1 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint8_ssa(1); got != 0 {
+ fmt.Printf("sub_uint8 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint8_1_ssa(1); got != 0 {
+ fmt.Printf("sub_uint8 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_1_uint8_ssa(255); got != 2 {
+ fmt.Printf("sub_uint8 1-255 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_uint8_1_ssa(255); got != 254 {
+ fmt.Printf("sub_uint8 255-1 = %d, wanted 254\n", got)
+ failed = true
+ }
+
+ if got := sub_255_uint8_ssa(0); got != 255 {
+ fmt.Printf("sub_uint8 255-0 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := sub_uint8_255_ssa(0); got != 1 {
+ fmt.Printf("sub_uint8 0-255 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_255_uint8_ssa(1); got != 254 {
+ fmt.Printf("sub_uint8 255-1 = %d, wanted 254\n", got)
+ failed = true
+ }
+
+ if got := sub_uint8_255_ssa(1); got != 2 {
+ fmt.Printf("sub_uint8 1-255 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_255_uint8_ssa(255); got != 0 {
+ fmt.Printf("sub_uint8 255-255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_uint8_255_ssa(255); got != 0 {
+ fmt.Printf("sub_uint8 255-255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_uint8_ssa(1); got != 0 {
+ fmt.Printf("div_uint8 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_uint8_ssa(255); got != 0 {
+ fmt.Printf("div_uint8 0/255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_uint8_1_ssa(0); got != 0 {
+ fmt.Printf("div_uint8 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_uint8_ssa(1); got != 1 {
+ fmt.Printf("div_uint8 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_uint8_1_ssa(1); got != 1 {
+ fmt.Printf("div_uint8 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_1_uint8_ssa(255); got != 0 {
+ fmt.Printf("div_uint8 1/255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_uint8_1_ssa(255); got != 255 {
+ fmt.Printf("div_uint8 255/1 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := div_uint8_255_ssa(0); got != 0 {
+ fmt.Printf("div_uint8 0/255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_255_uint8_ssa(1); got != 255 {
+ fmt.Printf("div_uint8 255/1 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := div_uint8_255_ssa(1); got != 0 {
+ fmt.Printf("div_uint8 1/255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_255_uint8_ssa(255); got != 1 {
+ fmt.Printf("div_uint8 255/255 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_uint8_255_ssa(255); got != 1 {
+ fmt.Printf("div_uint8 255/255 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint8_ssa(0); got != 0 {
+ fmt.Printf("mul_uint8 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint8_0_ssa(0); got != 0 {
+ fmt.Printf("mul_uint8 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint8_ssa(1); got != 0 {
+ fmt.Printf("mul_uint8 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint8_0_ssa(1); got != 0 {
+ fmt.Printf("mul_uint8 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_uint8_ssa(255); got != 0 {
+ fmt.Printf("mul_uint8 0*255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint8_0_ssa(255); got != 0 {
+ fmt.Printf("mul_uint8 255*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint8_ssa(0); got != 0 {
+ fmt.Printf("mul_uint8 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint8_1_ssa(0); got != 0 {
+ fmt.Printf("mul_uint8 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint8_ssa(1); got != 1 {
+ fmt.Printf("mul_uint8 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_uint8_1_ssa(1); got != 1 {
+ fmt.Printf("mul_uint8 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_uint8_ssa(255); got != 255 {
+ fmt.Printf("mul_uint8 1*255 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := mul_uint8_1_ssa(255); got != 255 {
+ fmt.Printf("mul_uint8 255*1 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := mul_255_uint8_ssa(0); got != 0 {
+ fmt.Printf("mul_uint8 255*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_uint8_255_ssa(0); got != 0 {
+ fmt.Printf("mul_uint8 0*255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_255_uint8_ssa(1); got != 255 {
+ fmt.Printf("mul_uint8 255*1 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := mul_uint8_255_ssa(1); got != 255 {
+ fmt.Printf("mul_uint8 1*255 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := mul_255_uint8_ssa(255); got != 1 {
+ fmt.Printf("mul_uint8 255*255 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_uint8_255_ssa(255); got != 1 {
+ fmt.Printf("mul_uint8 255*255 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint8_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint8 0<<0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint8_0_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint8 0<<0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint8_ssa(1); got != 0 {
+ fmt.Printf("lsh_uint8 0<<1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint8_0_ssa(1); got != 1 {
+ fmt.Printf("lsh_uint8 1<<0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_0_uint8_ssa(255); got != 0 {
+ fmt.Printf("lsh_uint8 0<<255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint8_0_ssa(255); got != 255 {
+ fmt.Printf("lsh_uint8 255<<0 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint8_ssa(0); got != 1 {
+ fmt.Printf("lsh_uint8 1<<0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint8_1_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint8 0<<1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint8_ssa(1); got != 2 {
+ fmt.Printf("lsh_uint8 1<<1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint8_1_ssa(1); got != 2 {
+ fmt.Printf("lsh_uint8 1<<1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := lsh_1_uint8_ssa(255); got != 0 {
+ fmt.Printf("lsh_uint8 1<<255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint8_1_ssa(255); got != 254 {
+ fmt.Printf("lsh_uint8 255<<1 = %d, wanted 254\n", got)
+ failed = true
+ }
+
+ if got := lsh_255_uint8_ssa(0); got != 255 {
+ fmt.Printf("lsh_uint8 255<<0 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint8_255_ssa(0); got != 0 {
+ fmt.Printf("lsh_uint8 0<<255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_255_uint8_ssa(1); got != 254 {
+ fmt.Printf("lsh_uint8 255<<1 = %d, wanted 254\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint8_255_ssa(1); got != 0 {
+ fmt.Printf("lsh_uint8 1<<255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_255_uint8_ssa(255); got != 0 {
+ fmt.Printf("lsh_uint8 255<<255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := lsh_uint8_255_ssa(255); got != 0 {
+ fmt.Printf("lsh_uint8 255<<255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint8_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint8 0>>0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint8_0_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint8 0>>0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint8_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint8 0>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint8_0_ssa(1); got != 1 {
+ fmt.Printf("rsh_uint8 1>>0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := rsh_0_uint8_ssa(255); got != 0 {
+ fmt.Printf("rsh_uint8 0>>255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint8_0_ssa(255); got != 255 {
+ fmt.Printf("rsh_uint8 255>>0 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint8_ssa(0); got != 1 {
+ fmt.Printf("rsh_uint8 1>>0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint8_1_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint8 0>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint8_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint8 1>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint8_1_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint8 1>>1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_1_uint8_ssa(255); got != 0 {
+ fmt.Printf("rsh_uint8 1>>255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint8_1_ssa(255); got != 127 {
+ fmt.Printf("rsh_uint8 255>>1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := rsh_255_uint8_ssa(0); got != 255 {
+ fmt.Printf("rsh_uint8 255>>0 = %d, wanted 255\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint8_255_ssa(0); got != 0 {
+ fmt.Printf("rsh_uint8 0>>255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_255_uint8_ssa(1); got != 127 {
+ fmt.Printf("rsh_uint8 255>>1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint8_255_ssa(1); got != 0 {
+ fmt.Printf("rsh_uint8 1>>255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_255_uint8_ssa(255); got != 0 {
+ fmt.Printf("rsh_uint8 255>>255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := rsh_uint8_255_ssa(255); got != 0 {
+ fmt.Printf("rsh_uint8 255>>255 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg128_int8_ssa(-128); got != 0 {
+ fmt.Printf("add_int8 -128+-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg128_ssa(-128); got != 0 {
+ fmt.Printf("add_int8 -128+-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg128_int8_ssa(-127); got != 1 {
+ fmt.Printf("add_int8 -128+-127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg128_ssa(-127); got != 1 {
+ fmt.Printf("add_int8 -127+-128 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg128_int8_ssa(-1); got != 127 {
+ fmt.Printf("add_int8 -128+-1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg128_ssa(-1); got != 127 {
+ fmt.Printf("add_int8 -1+-128 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_Neg128_int8_ssa(0); got != -128 {
+ fmt.Printf("add_int8 -128+0 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg128_ssa(0); got != -128 {
+ fmt.Printf("add_int8 0+-128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_Neg128_int8_ssa(1); got != -127 {
+ fmt.Printf("add_int8 -128+1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg128_ssa(1); got != -127 {
+ fmt.Printf("add_int8 1+-128 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := add_Neg128_int8_ssa(126); got != -2 {
+ fmt.Printf("add_int8 -128+126 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg128_ssa(126); got != -2 {
+ fmt.Printf("add_int8 126+-128 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_Neg128_int8_ssa(127); got != -1 {
+ fmt.Printf("add_int8 -128+127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg128_ssa(127); got != -1 {
+ fmt.Printf("add_int8 127+-128 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg127_int8_ssa(-128); got != 1 {
+ fmt.Printf("add_int8 -127+-128 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg127_ssa(-128); got != 1 {
+ fmt.Printf("add_int8 -128+-127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg127_int8_ssa(-127); got != 2 {
+ fmt.Printf("add_int8 -127+-127 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg127_ssa(-127); got != 2 {
+ fmt.Printf("add_int8 -127+-127 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_Neg127_int8_ssa(-1); got != -128 {
+ fmt.Printf("add_int8 -127+-1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg127_ssa(-1); got != -128 {
+ fmt.Printf("add_int8 -1+-127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_Neg127_int8_ssa(0); got != -127 {
+ fmt.Printf("add_int8 -127+0 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg127_ssa(0); got != -127 {
+ fmt.Printf("add_int8 0+-127 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := add_Neg127_int8_ssa(1); got != -126 {
+ fmt.Printf("add_int8 -127+1 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg127_ssa(1); got != -126 {
+ fmt.Printf("add_int8 1+-127 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := add_Neg127_int8_ssa(126); got != -1 {
+ fmt.Printf("add_int8 -127+126 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg127_ssa(126); got != -1 {
+ fmt.Printf("add_int8 126+-127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg127_int8_ssa(127); got != 0 {
+ fmt.Printf("add_int8 -127+127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg127_ssa(127); got != 0 {
+ fmt.Printf("add_int8 127+-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int8_ssa(-128); got != 127 {
+ fmt.Printf("add_int8 -1+-128 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg1_ssa(-128); got != 127 {
+ fmt.Printf("add_int8 -128+-1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int8_ssa(-127); got != -128 {
+ fmt.Printf("add_int8 -1+-127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg1_ssa(-127); got != -128 {
+ fmt.Printf("add_int8 -127+-1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int8_ssa(-1); got != -2 {
+ fmt.Printf("add_int8 -1+-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg1_ssa(-1); got != -2 {
+ fmt.Printf("add_int8 -1+-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int8_ssa(0); got != -1 {
+ fmt.Printf("add_int8 -1+0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg1_ssa(0); got != -1 {
+ fmt.Printf("add_int8 0+-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int8_ssa(1); got != 0 {
+ fmt.Printf("add_int8 -1+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg1_ssa(1); got != 0 {
+ fmt.Printf("add_int8 1+-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int8_ssa(126); got != 125 {
+ fmt.Printf("add_int8 -1+126 = %d, wanted 125\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg1_ssa(126); got != 125 {
+ fmt.Printf("add_int8 126+-1 = %d, wanted 125\n", got)
+ failed = true
+ }
+
+ if got := add_Neg1_int8_ssa(127); got != 126 {
+ fmt.Printf("add_int8 -1+127 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := add_int8_Neg1_ssa(127); got != 126 {
+ fmt.Printf("add_int8 127+-1 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := add_0_int8_ssa(-128); got != -128 {
+ fmt.Printf("add_int8 0+-128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_int8_0_ssa(-128); got != -128 {
+ fmt.Printf("add_int8 -128+0 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_0_int8_ssa(-127); got != -127 {
+ fmt.Printf("add_int8 0+-127 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := add_int8_0_ssa(-127); got != -127 {
+ fmt.Printf("add_int8 -127+0 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := add_0_int8_ssa(-1); got != -1 {
+ fmt.Printf("add_int8 0+-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int8_0_ssa(-1); got != -1 {
+ fmt.Printf("add_int8 -1+0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_0_int8_ssa(0); got != 0 {
+ fmt.Printf("add_int8 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int8_0_ssa(0); got != 0 {
+ fmt.Printf("add_int8 0+0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_0_int8_ssa(1); got != 1 {
+ fmt.Printf("add_int8 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int8_0_ssa(1); got != 1 {
+ fmt.Printf("add_int8 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_0_int8_ssa(126); got != 126 {
+ fmt.Printf("add_int8 0+126 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := add_int8_0_ssa(126); got != 126 {
+ fmt.Printf("add_int8 126+0 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := add_0_int8_ssa(127); got != 127 {
+ fmt.Printf("add_int8 0+127 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_int8_0_ssa(127); got != 127 {
+ fmt.Printf("add_int8 127+0 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_1_int8_ssa(-128); got != -127 {
+ fmt.Printf("add_int8 1+-128 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := add_int8_1_ssa(-128); got != -127 {
+ fmt.Printf("add_int8 -128+1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := add_1_int8_ssa(-127); got != -126 {
+ fmt.Printf("add_int8 1+-127 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := add_int8_1_ssa(-127); got != -126 {
+ fmt.Printf("add_int8 -127+1 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := add_1_int8_ssa(-1); got != 0 {
+ fmt.Printf("add_int8 1+-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int8_1_ssa(-1); got != 0 {
+ fmt.Printf("add_int8 -1+1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_1_int8_ssa(0); got != 1 {
+ fmt.Printf("add_int8 1+0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_int8_1_ssa(0); got != 1 {
+ fmt.Printf("add_int8 0+1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := add_1_int8_ssa(1); got != 2 {
+ fmt.Printf("add_int8 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_int8_1_ssa(1); got != 2 {
+ fmt.Printf("add_int8 1+1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := add_1_int8_ssa(126); got != 127 {
+ fmt.Printf("add_int8 1+126 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_int8_1_ssa(126); got != 127 {
+ fmt.Printf("add_int8 126+1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_1_int8_ssa(127); got != -128 {
+ fmt.Printf("add_int8 1+127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_int8_1_ssa(127); got != -128 {
+ fmt.Printf("add_int8 127+1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_126_int8_ssa(-128); got != -2 {
+ fmt.Printf("add_int8 126+-128 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int8_126_ssa(-128); got != -2 {
+ fmt.Printf("add_int8 -128+126 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_126_int8_ssa(-127); got != -1 {
+ fmt.Printf("add_int8 126+-127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int8_126_ssa(-127); got != -1 {
+ fmt.Printf("add_int8 -127+126 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_126_int8_ssa(-1); got != 125 {
+ fmt.Printf("add_int8 126+-1 = %d, wanted 125\n", got)
+ failed = true
+ }
+
+ if got := add_int8_126_ssa(-1); got != 125 {
+ fmt.Printf("add_int8 -1+126 = %d, wanted 125\n", got)
+ failed = true
+ }
+
+ if got := add_126_int8_ssa(0); got != 126 {
+ fmt.Printf("add_int8 126+0 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := add_int8_126_ssa(0); got != 126 {
+ fmt.Printf("add_int8 0+126 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := add_126_int8_ssa(1); got != 127 {
+ fmt.Printf("add_int8 126+1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_int8_126_ssa(1); got != 127 {
+ fmt.Printf("add_int8 1+126 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_126_int8_ssa(126); got != -4 {
+ fmt.Printf("add_int8 126+126 = %d, wanted -4\n", got)
+ failed = true
+ }
+
+ if got := add_int8_126_ssa(126); got != -4 {
+ fmt.Printf("add_int8 126+126 = %d, wanted -4\n", got)
+ failed = true
+ }
+
+ if got := add_126_int8_ssa(127); got != -3 {
+ fmt.Printf("add_int8 126+127 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_int8_126_ssa(127); got != -3 {
+ fmt.Printf("add_int8 127+126 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_127_int8_ssa(-128); got != -1 {
+ fmt.Printf("add_int8 127+-128 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_int8_127_ssa(-128); got != -1 {
+ fmt.Printf("add_int8 -128+127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := add_127_int8_ssa(-127); got != 0 {
+ fmt.Printf("add_int8 127+-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_int8_127_ssa(-127); got != 0 {
+ fmt.Printf("add_int8 -127+127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := add_127_int8_ssa(-1); got != 126 {
+ fmt.Printf("add_int8 127+-1 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := add_int8_127_ssa(-1); got != 126 {
+ fmt.Printf("add_int8 -1+127 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := add_127_int8_ssa(0); got != 127 {
+ fmt.Printf("add_int8 127+0 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_int8_127_ssa(0); got != 127 {
+ fmt.Printf("add_int8 0+127 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := add_127_int8_ssa(1); got != -128 {
+ fmt.Printf("add_int8 127+1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_int8_127_ssa(1); got != -128 {
+ fmt.Printf("add_int8 1+127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := add_127_int8_ssa(126); got != -3 {
+ fmt.Printf("add_int8 127+126 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_int8_127_ssa(126); got != -3 {
+ fmt.Printf("add_int8 126+127 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := add_127_int8_ssa(127); got != -2 {
+ fmt.Printf("add_int8 127+127 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := add_int8_127_ssa(127); got != -2 {
+ fmt.Printf("add_int8 127+127 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg128_int8_ssa(-128); got != 0 {
+ fmt.Printf("sub_int8 -128--128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg128_ssa(-128); got != 0 {
+ fmt.Printf("sub_int8 -128--128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg128_int8_ssa(-127); got != -1 {
+ fmt.Printf("sub_int8 -128--127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg128_ssa(-127); got != 1 {
+ fmt.Printf("sub_int8 -127--128 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg128_int8_ssa(-1); got != -127 {
+ fmt.Printf("sub_int8 -128--1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg128_ssa(-1); got != 127 {
+ fmt.Printf("sub_int8 -1--128 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg128_int8_ssa(0); got != -128 {
+ fmt.Printf("sub_int8 -128-0 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg128_ssa(0); got != -128 {
+ fmt.Printf("sub_int8 0--128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg128_int8_ssa(1); got != 127 {
+ fmt.Printf("sub_int8 -128-1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg128_ssa(1); got != -127 {
+ fmt.Printf("sub_int8 1--128 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg128_int8_ssa(126); got != 2 {
+ fmt.Printf("sub_int8 -128-126 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg128_ssa(126); got != -2 {
+ fmt.Printf("sub_int8 126--128 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg128_int8_ssa(127); got != 1 {
+ fmt.Printf("sub_int8 -128-127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg128_ssa(127); got != -1 {
+ fmt.Printf("sub_int8 127--128 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg127_int8_ssa(-128); got != 1 {
+ fmt.Printf("sub_int8 -127--128 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg127_ssa(-128); got != -1 {
+ fmt.Printf("sub_int8 -128--127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg127_int8_ssa(-127); got != 0 {
+ fmt.Printf("sub_int8 -127--127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg127_ssa(-127); got != 0 {
+ fmt.Printf("sub_int8 -127--127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg127_int8_ssa(-1); got != -126 {
+ fmt.Printf("sub_int8 -127--1 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg127_ssa(-1); got != 126 {
+ fmt.Printf("sub_int8 -1--127 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg127_int8_ssa(0); got != -127 {
+ fmt.Printf("sub_int8 -127-0 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg127_ssa(0); got != 127 {
+ fmt.Printf("sub_int8 0--127 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg127_int8_ssa(1); got != -128 {
+ fmt.Printf("sub_int8 -127-1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg127_ssa(1); got != -128 {
+ fmt.Printf("sub_int8 1--127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg127_int8_ssa(126); got != 3 {
+ fmt.Printf("sub_int8 -127-126 = %d, wanted 3\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg127_ssa(126); got != -3 {
+ fmt.Printf("sub_int8 126--127 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg127_int8_ssa(127); got != 2 {
+ fmt.Printf("sub_int8 -127-127 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg127_ssa(127); got != -2 {
+ fmt.Printf("sub_int8 127--127 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int8_ssa(-128); got != 127 {
+ fmt.Printf("sub_int8 -1--128 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg1_ssa(-128); got != -127 {
+ fmt.Printf("sub_int8 -128--1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int8_ssa(-127); got != 126 {
+ fmt.Printf("sub_int8 -1--127 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg1_ssa(-127); got != -126 {
+ fmt.Printf("sub_int8 -127--1 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int8_ssa(-1); got != 0 {
+ fmt.Printf("sub_int8 -1--1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg1_ssa(-1); got != 0 {
+ fmt.Printf("sub_int8 -1--1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int8_ssa(0); got != -1 {
+ fmt.Printf("sub_int8 -1-0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg1_ssa(0); got != 1 {
+ fmt.Printf("sub_int8 0--1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int8_ssa(1); got != -2 {
+ fmt.Printf("sub_int8 -1-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg1_ssa(1); got != 2 {
+ fmt.Printf("sub_int8 1--1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int8_ssa(126); got != -127 {
+ fmt.Printf("sub_int8 -1-126 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg1_ssa(126); got != 127 {
+ fmt.Printf("sub_int8 126--1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := sub_Neg1_int8_ssa(127); got != -128 {
+ fmt.Printf("sub_int8 -1-127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_Neg1_ssa(127); got != -128 {
+ fmt.Printf("sub_int8 127--1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int8_ssa(-128); got != -128 {
+ fmt.Printf("sub_int8 0--128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_0_ssa(-128); got != -128 {
+ fmt.Printf("sub_int8 -128-0 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int8_ssa(-127); got != 127 {
+ fmt.Printf("sub_int8 0--127 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_0_ssa(-127); got != -127 {
+ fmt.Printf("sub_int8 -127-0 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int8_ssa(-1); got != 1 {
+ fmt.Printf("sub_int8 0--1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_0_ssa(-1); got != -1 {
+ fmt.Printf("sub_int8 -1-0 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int8_ssa(0); got != 0 {
+ fmt.Printf("sub_int8 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_0_ssa(0); got != 0 {
+ fmt.Printf("sub_int8 0-0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int8_ssa(1); got != -1 {
+ fmt.Printf("sub_int8 0-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_0_ssa(1); got != 1 {
+ fmt.Printf("sub_int8 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int8_ssa(126); got != -126 {
+ fmt.Printf("sub_int8 0-126 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_0_ssa(126); got != 126 {
+ fmt.Printf("sub_int8 126-0 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := sub_0_int8_ssa(127); got != -127 {
+ fmt.Printf("sub_int8 0-127 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_0_ssa(127); got != 127 {
+ fmt.Printf("sub_int8 127-0 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int8_ssa(-128); got != -127 {
+ fmt.Printf("sub_int8 1--128 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_1_ssa(-128); got != 127 {
+ fmt.Printf("sub_int8 -128-1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int8_ssa(-127); got != -128 {
+ fmt.Printf("sub_int8 1--127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_1_ssa(-127); got != -128 {
+ fmt.Printf("sub_int8 -127-1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int8_ssa(-1); got != 2 {
+ fmt.Printf("sub_int8 1--1 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_1_ssa(-1); got != -2 {
+ fmt.Printf("sub_int8 -1-1 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int8_ssa(0); got != 1 {
+ fmt.Printf("sub_int8 1-0 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_1_ssa(0); got != -1 {
+ fmt.Printf("sub_int8 0-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int8_ssa(1); got != 0 {
+ fmt.Printf("sub_int8 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_1_ssa(1); got != 0 {
+ fmt.Printf("sub_int8 1-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int8_ssa(126); got != -125 {
+ fmt.Printf("sub_int8 1-126 = %d, wanted -125\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_1_ssa(126); got != 125 {
+ fmt.Printf("sub_int8 126-1 = %d, wanted 125\n", got)
+ failed = true
+ }
+
+ if got := sub_1_int8_ssa(127); got != -126 {
+ fmt.Printf("sub_int8 1-127 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_1_ssa(127); got != 126 {
+ fmt.Printf("sub_int8 127-1 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := sub_126_int8_ssa(-128); got != -2 {
+ fmt.Printf("sub_int8 126--128 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_126_ssa(-128); got != 2 {
+ fmt.Printf("sub_int8 -128-126 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_126_int8_ssa(-127); got != -3 {
+ fmt.Printf("sub_int8 126--127 = %d, wanted -3\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_126_ssa(-127); got != 3 {
+ fmt.Printf("sub_int8 -127-126 = %d, wanted 3\n", got)
+ failed = true
+ }
+
+ if got := sub_126_int8_ssa(-1); got != 127 {
+ fmt.Printf("sub_int8 126--1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_126_ssa(-1); got != -127 {
+ fmt.Printf("sub_int8 -1-126 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := sub_126_int8_ssa(0); got != 126 {
+ fmt.Printf("sub_int8 126-0 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_126_ssa(0); got != -126 {
+ fmt.Printf("sub_int8 0-126 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := sub_126_int8_ssa(1); got != 125 {
+ fmt.Printf("sub_int8 126-1 = %d, wanted 125\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_126_ssa(1); got != -125 {
+ fmt.Printf("sub_int8 1-126 = %d, wanted -125\n", got)
+ failed = true
+ }
+
+ if got := sub_126_int8_ssa(126); got != 0 {
+ fmt.Printf("sub_int8 126-126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_126_ssa(126); got != 0 {
+ fmt.Printf("sub_int8 126-126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_126_int8_ssa(127); got != -1 {
+ fmt.Printf("sub_int8 126-127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_126_ssa(127); got != 1 {
+ fmt.Printf("sub_int8 127-126 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_127_int8_ssa(-128); got != -1 {
+ fmt.Printf("sub_int8 127--128 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_127_ssa(-128); got != 1 {
+ fmt.Printf("sub_int8 -128-127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_127_int8_ssa(-127); got != -2 {
+ fmt.Printf("sub_int8 127--127 = %d, wanted -2\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_127_ssa(-127); got != 2 {
+ fmt.Printf("sub_int8 -127-127 = %d, wanted 2\n", got)
+ failed = true
+ }
+
+ if got := sub_127_int8_ssa(-1); got != -128 {
+ fmt.Printf("sub_int8 127--1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_127_ssa(-1); got != -128 {
+ fmt.Printf("sub_int8 -1-127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := sub_127_int8_ssa(0); got != 127 {
+ fmt.Printf("sub_int8 127-0 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_127_ssa(0); got != -127 {
+ fmt.Printf("sub_int8 0-127 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := sub_127_int8_ssa(1); got != 126 {
+ fmt.Printf("sub_int8 127-1 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_127_ssa(1); got != -126 {
+ fmt.Printf("sub_int8 1-127 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := sub_127_int8_ssa(126); got != 1 {
+ fmt.Printf("sub_int8 127-126 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_127_ssa(126); got != -1 {
+ fmt.Printf("sub_int8 126-127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := sub_127_int8_ssa(127); got != 0 {
+ fmt.Printf("sub_int8 127-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := sub_int8_127_ssa(127); got != 0 {
+ fmt.Printf("sub_int8 127-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg128_int8_ssa(-128); got != 1 {
+ fmt.Printf("div_int8 -128/-128 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg128_ssa(-128); got != 1 {
+ fmt.Printf("div_int8 -128/-128 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg128_int8_ssa(-127); got != 1 {
+ fmt.Printf("div_int8 -128/-127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg128_ssa(-127); got != 0 {
+ fmt.Printf("div_int8 -127/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg128_int8_ssa(-1); got != -128 {
+ fmt.Printf("div_int8 -128/-1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg128_ssa(-1); got != 0 {
+ fmt.Printf("div_int8 -1/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg128_ssa(0); got != 0 {
+ fmt.Printf("div_int8 0/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg128_int8_ssa(1); got != -128 {
+ fmt.Printf("div_int8 -128/1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg128_ssa(1); got != 0 {
+ fmt.Printf("div_int8 1/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg128_int8_ssa(126); got != -1 {
+ fmt.Printf("div_int8 -128/126 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg128_ssa(126); got != 0 {
+ fmt.Printf("div_int8 126/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg128_int8_ssa(127); got != -1 {
+ fmt.Printf("div_int8 -128/127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg128_ssa(127); got != 0 {
+ fmt.Printf("div_int8 127/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg127_int8_ssa(-128); got != 0 {
+ fmt.Printf("div_int8 -127/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg127_ssa(-128); got != 1 {
+ fmt.Printf("div_int8 -128/-127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg127_int8_ssa(-127); got != 1 {
+ fmt.Printf("div_int8 -127/-127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg127_ssa(-127); got != 1 {
+ fmt.Printf("div_int8 -127/-127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg127_int8_ssa(-1); got != 127 {
+ fmt.Printf("div_int8 -127/-1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg127_ssa(-1); got != 0 {
+ fmt.Printf("div_int8 -1/-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg127_ssa(0); got != 0 {
+ fmt.Printf("div_int8 0/-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg127_int8_ssa(1); got != -127 {
+ fmt.Printf("div_int8 -127/1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg127_ssa(1); got != 0 {
+ fmt.Printf("div_int8 1/-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg127_int8_ssa(126); got != -1 {
+ fmt.Printf("div_int8 -127/126 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg127_ssa(126); got != 0 {
+ fmt.Printf("div_int8 126/-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg127_int8_ssa(127); got != -1 {
+ fmt.Printf("div_int8 -127/127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg127_ssa(127); got != -1 {
+ fmt.Printf("div_int8 127/-127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int8_ssa(-128); got != 0 {
+ fmt.Printf("div_int8 -1/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg1_ssa(-128); got != -128 {
+ fmt.Printf("div_int8 -128/-1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int8_ssa(-127); got != 0 {
+ fmt.Printf("div_int8 -1/-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg1_ssa(-127); got != 127 {
+ fmt.Printf("div_int8 -127/-1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int8_ssa(-1); got != 1 {
+ fmt.Printf("div_int8 -1/-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg1_ssa(-1); got != 1 {
+ fmt.Printf("div_int8 -1/-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg1_ssa(0); got != 0 {
+ fmt.Printf("div_int8 0/-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int8_ssa(1); got != -1 {
+ fmt.Printf("div_int8 -1/1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg1_ssa(1); got != -1 {
+ fmt.Printf("div_int8 1/-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int8_ssa(126); got != 0 {
+ fmt.Printf("div_int8 -1/126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg1_ssa(126); got != -126 {
+ fmt.Printf("div_int8 126/-1 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := div_Neg1_int8_ssa(127); got != 0 {
+ fmt.Printf("div_int8 -1/127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_Neg1_ssa(127); got != -127 {
+ fmt.Printf("div_int8 127/-1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := div_0_int8_ssa(-128); got != 0 {
+ fmt.Printf("div_int8 0/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int8_ssa(-127); got != 0 {
+ fmt.Printf("div_int8 0/-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int8_ssa(-1); got != 0 {
+ fmt.Printf("div_int8 0/-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int8_ssa(1); got != 0 {
+ fmt.Printf("div_int8 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int8_ssa(126); got != 0 {
+ fmt.Printf("div_int8 0/126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_0_int8_ssa(127); got != 0 {
+ fmt.Printf("div_int8 0/127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_int8_ssa(-128); got != 0 {
+ fmt.Printf("div_int8 1/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_1_ssa(-128); got != -128 {
+ fmt.Printf("div_int8 -128/1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := div_1_int8_ssa(-127); got != 0 {
+ fmt.Printf("div_int8 1/-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_1_ssa(-127); got != -127 {
+ fmt.Printf("div_int8 -127/1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := div_1_int8_ssa(-1); got != -1 {
+ fmt.Printf("div_int8 1/-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_1_ssa(-1); got != -1 {
+ fmt.Printf("div_int8 -1/1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_1_ssa(0); got != 0 {
+ fmt.Printf("div_int8 0/1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_1_int8_ssa(1); got != 1 {
+ fmt.Printf("div_int8 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_1_ssa(1); got != 1 {
+ fmt.Printf("div_int8 1/1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_1_int8_ssa(126); got != 0 {
+ fmt.Printf("div_int8 1/126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_1_ssa(126); got != 126 {
+ fmt.Printf("div_int8 126/1 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := div_1_int8_ssa(127); got != 0 {
+ fmt.Printf("div_int8 1/127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_1_ssa(127); got != 127 {
+ fmt.Printf("div_int8 127/1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := div_126_int8_ssa(-128); got != 0 {
+ fmt.Printf("div_int8 126/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_126_ssa(-128); got != -1 {
+ fmt.Printf("div_int8 -128/126 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_126_int8_ssa(-127); got != 0 {
+ fmt.Printf("div_int8 126/-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_126_ssa(-127); got != -1 {
+ fmt.Printf("div_int8 -127/126 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_126_int8_ssa(-1); got != -126 {
+ fmt.Printf("div_int8 126/-1 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := div_int8_126_ssa(-1); got != 0 {
+ fmt.Printf("div_int8 -1/126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_126_ssa(0); got != 0 {
+ fmt.Printf("div_int8 0/126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_126_int8_ssa(1); got != 126 {
+ fmt.Printf("div_int8 126/1 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := div_int8_126_ssa(1); got != 0 {
+ fmt.Printf("div_int8 1/126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_126_int8_ssa(126); got != 1 {
+ fmt.Printf("div_int8 126/126 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_126_ssa(126); got != 1 {
+ fmt.Printf("div_int8 126/126 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_126_int8_ssa(127); got != 0 {
+ fmt.Printf("div_int8 126/127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_126_ssa(127); got != 1 {
+ fmt.Printf("div_int8 127/126 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_127_int8_ssa(-128); got != 0 {
+ fmt.Printf("div_int8 127/-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_127_ssa(-128); got != -1 {
+ fmt.Printf("div_int8 -128/127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_127_int8_ssa(-127); got != -1 {
+ fmt.Printf("div_int8 127/-127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_127_ssa(-127); got != -1 {
+ fmt.Printf("div_int8 -127/127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := div_127_int8_ssa(-1); got != -127 {
+ fmt.Printf("div_int8 127/-1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := div_int8_127_ssa(-1); got != 0 {
+ fmt.Printf("div_int8 -1/127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_int8_127_ssa(0); got != 0 {
+ fmt.Printf("div_int8 0/127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_127_int8_ssa(1); got != 127 {
+ fmt.Printf("div_int8 127/1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := div_int8_127_ssa(1); got != 0 {
+ fmt.Printf("div_int8 1/127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_127_int8_ssa(126); got != 1 {
+ fmt.Printf("div_int8 127/126 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_127_ssa(126); got != 0 {
+ fmt.Printf("div_int8 126/127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := div_127_int8_ssa(127); got != 1 {
+ fmt.Printf("div_int8 127/127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := div_int8_127_ssa(127); got != 1 {
+ fmt.Printf("div_int8 127/127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg128_int8_ssa(-128); got != 0 {
+ fmt.Printf("mul_int8 -128*-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg128_ssa(-128); got != 0 {
+ fmt.Printf("mul_int8 -128*-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg128_int8_ssa(-127); got != -128 {
+ fmt.Printf("mul_int8 -128*-127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg128_ssa(-127); got != -128 {
+ fmt.Printf("mul_int8 -127*-128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg128_int8_ssa(-1); got != -128 {
+ fmt.Printf("mul_int8 -128*-1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg128_ssa(-1); got != -128 {
+ fmt.Printf("mul_int8 -1*-128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg128_int8_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 -128*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg128_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 0*-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg128_int8_ssa(1); got != -128 {
+ fmt.Printf("mul_int8 -128*1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg128_ssa(1); got != -128 {
+ fmt.Printf("mul_int8 1*-128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg128_int8_ssa(126); got != 0 {
+ fmt.Printf("mul_int8 -128*126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg128_ssa(126); got != 0 {
+ fmt.Printf("mul_int8 126*-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg128_int8_ssa(127); got != -128 {
+ fmt.Printf("mul_int8 -128*127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg128_ssa(127); got != -128 {
+ fmt.Printf("mul_int8 127*-128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg127_int8_ssa(-128); got != -128 {
+ fmt.Printf("mul_int8 -127*-128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg127_ssa(-128); got != -128 {
+ fmt.Printf("mul_int8 -128*-127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg127_int8_ssa(-127); got != 1 {
+ fmt.Printf("mul_int8 -127*-127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg127_ssa(-127); got != 1 {
+ fmt.Printf("mul_int8 -127*-127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg127_int8_ssa(-1); got != 127 {
+ fmt.Printf("mul_int8 -127*-1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg127_ssa(-1); got != 127 {
+ fmt.Printf("mul_int8 -1*-127 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg127_int8_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 -127*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg127_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 0*-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg127_int8_ssa(1); got != -127 {
+ fmt.Printf("mul_int8 -127*1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg127_ssa(1); got != -127 {
+ fmt.Printf("mul_int8 1*-127 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg127_int8_ssa(126); got != 126 {
+ fmt.Printf("mul_int8 -127*126 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg127_ssa(126); got != 126 {
+ fmt.Printf("mul_int8 126*-127 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg127_int8_ssa(127); got != -1 {
+ fmt.Printf("mul_int8 -127*127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg127_ssa(127); got != -1 {
+ fmt.Printf("mul_int8 127*-127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int8_ssa(-128); got != -128 {
+ fmt.Printf("mul_int8 -1*-128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg1_ssa(-128); got != -128 {
+ fmt.Printf("mul_int8 -128*-1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int8_ssa(-127); got != 127 {
+ fmt.Printf("mul_int8 -1*-127 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg1_ssa(-127); got != 127 {
+ fmt.Printf("mul_int8 -127*-1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int8_ssa(-1); got != 1 {
+ fmt.Printf("mul_int8 -1*-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg1_ssa(-1); got != 1 {
+ fmt.Printf("mul_int8 -1*-1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int8_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 -1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg1_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 0*-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int8_ssa(1); got != -1 {
+ fmt.Printf("mul_int8 -1*1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg1_ssa(1); got != -1 {
+ fmt.Printf("mul_int8 1*-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int8_ssa(126); got != -126 {
+ fmt.Printf("mul_int8 -1*126 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg1_ssa(126); got != -126 {
+ fmt.Printf("mul_int8 126*-1 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := mul_Neg1_int8_ssa(127); got != -127 {
+ fmt.Printf("mul_int8 -1*127 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_Neg1_ssa(127); got != -127 {
+ fmt.Printf("mul_int8 127*-1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int8_ssa(-128); got != 0 {
+ fmt.Printf("mul_int8 0*-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_0_ssa(-128); got != 0 {
+ fmt.Printf("mul_int8 -128*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int8_ssa(-127); got != 0 {
+ fmt.Printf("mul_int8 0*-127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_0_ssa(-127); got != 0 {
+ fmt.Printf("mul_int8 -127*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int8_ssa(-1); got != 0 {
+ fmt.Printf("mul_int8 0*-1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_0_ssa(-1); got != 0 {
+ fmt.Printf("mul_int8 -1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int8_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_0_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 0*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int8_ssa(1); got != 0 {
+ fmt.Printf("mul_int8 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_0_ssa(1); got != 0 {
+ fmt.Printf("mul_int8 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int8_ssa(126); got != 0 {
+ fmt.Printf("mul_int8 0*126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_0_ssa(126); got != 0 {
+ fmt.Printf("mul_int8 126*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_0_int8_ssa(127); got != 0 {
+ fmt.Printf("mul_int8 0*127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_0_ssa(127); got != 0 {
+ fmt.Printf("mul_int8 127*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int8_ssa(-128); got != -128 {
+ fmt.Printf("mul_int8 1*-128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_1_ssa(-128); got != -128 {
+ fmt.Printf("mul_int8 -128*1 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int8_ssa(-127); got != -127 {
+ fmt.Printf("mul_int8 1*-127 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_1_ssa(-127); got != -127 {
+ fmt.Printf("mul_int8 -127*1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int8_ssa(-1); got != -1 {
+ fmt.Printf("mul_int8 1*-1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_1_ssa(-1); got != -1 {
+ fmt.Printf("mul_int8 -1*1 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int8_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 1*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_1_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 0*1 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int8_ssa(1); got != 1 {
+ fmt.Printf("mul_int8 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_1_ssa(1); got != 1 {
+ fmt.Printf("mul_int8 1*1 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int8_ssa(126); got != 126 {
+ fmt.Printf("mul_int8 1*126 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_1_ssa(126); got != 126 {
+ fmt.Printf("mul_int8 126*1 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := mul_1_int8_ssa(127); got != 127 {
+ fmt.Printf("mul_int8 1*127 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_1_ssa(127); got != 127 {
+ fmt.Printf("mul_int8 127*1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := mul_126_int8_ssa(-128); got != 0 {
+ fmt.Printf("mul_int8 126*-128 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_126_ssa(-128); got != 0 {
+ fmt.Printf("mul_int8 -128*126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_126_int8_ssa(-127); got != 126 {
+ fmt.Printf("mul_int8 126*-127 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_126_ssa(-127); got != 126 {
+ fmt.Printf("mul_int8 -127*126 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := mul_126_int8_ssa(-1); got != -126 {
+ fmt.Printf("mul_int8 126*-1 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_126_ssa(-1); got != -126 {
+ fmt.Printf("mul_int8 -1*126 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := mul_126_int8_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 126*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_126_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 0*126 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_126_int8_ssa(1); got != 126 {
+ fmt.Printf("mul_int8 126*1 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_126_ssa(1); got != 126 {
+ fmt.Printf("mul_int8 1*126 = %d, wanted 126\n", got)
+ failed = true
+ }
+
+ if got := mul_126_int8_ssa(126); got != 4 {
+ fmt.Printf("mul_int8 126*126 = %d, wanted 4\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_126_ssa(126); got != 4 {
+ fmt.Printf("mul_int8 126*126 = %d, wanted 4\n", got)
+ failed = true
+ }
+
+ if got := mul_126_int8_ssa(127); got != -126 {
+ fmt.Printf("mul_int8 126*127 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_126_ssa(127); got != -126 {
+ fmt.Printf("mul_int8 127*126 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := mul_127_int8_ssa(-128); got != -128 {
+ fmt.Printf("mul_int8 127*-128 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_127_ssa(-128); got != -128 {
+ fmt.Printf("mul_int8 -128*127 = %d, wanted -128\n", got)
+ failed = true
+ }
+
+ if got := mul_127_int8_ssa(-127); got != -1 {
+ fmt.Printf("mul_int8 127*-127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_127_ssa(-127); got != -1 {
+ fmt.Printf("mul_int8 -127*127 = %d, wanted -1\n", got)
+ failed = true
+ }
+
+ if got := mul_127_int8_ssa(-1); got != -127 {
+ fmt.Printf("mul_int8 127*-1 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_127_ssa(-1); got != -127 {
+ fmt.Printf("mul_int8 -1*127 = %d, wanted -127\n", got)
+ failed = true
+ }
+
+ if got := mul_127_int8_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 127*0 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_127_ssa(0); got != 0 {
+ fmt.Printf("mul_int8 0*127 = %d, wanted 0\n", got)
+ failed = true
+ }
+
+ if got := mul_127_int8_ssa(1); got != 127 {
+ fmt.Printf("mul_int8 127*1 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_127_ssa(1); got != 127 {
+ fmt.Printf("mul_int8 1*127 = %d, wanted 127\n", got)
+ failed = true
+ }
+
+ if got := mul_127_int8_ssa(126); got != -126 {
+ fmt.Printf("mul_int8 127*126 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_127_ssa(126); got != -126 {
+ fmt.Printf("mul_int8 126*127 = %d, wanted -126\n", got)
+ failed = true
+ }
+
+ if got := mul_127_int8_ssa(127); got != 1 {
+ fmt.Printf("mul_int8 127*127 = %d, wanted 1\n", got)
+ failed = true
+ }
+
+ if got := mul_int8_127_ssa(127); got != 1 {
+ fmt.Printf("mul_int8 127*127 = %d, wanted 1\n", got)
+ failed = true
+ }
+ if failed {
+ panic("tests failed")
+ }
+}
--- /dev/null
+// run
+
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Tests arithmetic expressions
+
+package main
+
+import "fmt"
+
+const (
+ y = 0x0fffFFFF
+)
+
+//go:noinline
+func invalidAdd_ssa(x uint32) uint32 {
+ return x + y + y + y + y + y + y + y + y + y + y + y + y + y + y + y + y + y
+}
+
+//go:noinline
+func invalidSub_ssa(x uint32) uint32 {
+ return x - y - y - y - y - y - y - y - y - y - y - y - y - y - y - y - y - y
+}
+
+//go:noinline
+func invalidMul_ssa(x uint32) uint32 {
+ return x * y * y * y * y * y * y * y * y * y * y * y * y * y * y * y * y * y
+}
+
+// testLargeConst tests a situation where larger than 32 bit consts were passed to ADDL
+// causing an invalid instruction error.
+func testLargeConst() {
+ if want, got := uint32(268435440), invalidAdd_ssa(1); want != got {
+ println("testLargeConst add failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := uint32(4026531858), invalidSub_ssa(1); want != got {
+ println("testLargeConst sub failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := uint32(268435455), invalidMul_ssa(1); want != got {
+ println("testLargeConst mul failed, wanted", want, "got", got)
+ failed = true
+ }
+}
+
+// testArithRshConst ensures that "const >> const" right shifts correctly perform
+// sign extension on the lhs constant.
+func testArithRshConst() {
+ wantu := uint64(0x4000000000000000)
+ if got := arithRshuConst_ssa(); got != wantu {
+ println("arithRshuConst failed, wanted", wantu, "got", got)
+ failed = true
+ }
+
+ wants := int64(-0x4000000000000000)
+ if got := arithRshConst_ssa(); got != wants {
+		println("arithRshConst failed, wanted", wants, "got", got)
+ failed = true
+ }
+}
+
+//go:noinline
+func arithRshuConst_ssa() uint64 {
+ y := uint64(0x8000000000000001)
+ z := uint64(1)
+ return uint64(y >> z)
+}
+
+//go:noinline
+func arithRshConst_ssa() int64 {
+ y := int64(-0x8000000000000000)
+ z := uint64(1)
+ return int64(y >> z)
+}
+
+//go:noinline
+func arithConstShift_ssa(x int64) int64 {
+ return x >> 100
+}
+
+// testArithConstShift tests that right shifts by large constants preserve
+// the sign of the input.
+func testArithConstShift() {
+ want := int64(-1)
+ if got := arithConstShift_ssa(-1); want != got {
+ println("arithConstShift_ssa(-1) failed, wanted", want, "got", got)
+ failed = true
+ }
+ want = 0
+ if got := arithConstShift_ssa(1); want != got {
+ println("arithConstShift_ssa(1) failed, wanted", want, "got", got)
+ failed = true
+ }
+}
+
+// overflowConstShift_ssa verifies that constant folding for shift
+// doesn't wrap (i.e. x << MAX_INT << 1 doesn't get folded to x << 0).
+//go:noinline
+func overflowConstShift64_ssa(x int64) int64 {
+ return x << uint64(0xffffffffffffffff) << uint64(1)
+}
+
+//go:noinline
+func overflowConstShift32_ssa(x int64) int32 {
+ return int32(x) << uint32(0xffffffff) << uint32(1)
+}
+
+//go:noinline
+func overflowConstShift16_ssa(x int64) int16 {
+ return int16(x) << uint16(0xffff) << uint16(1)
+}
+
+//go:noinline
+func overflowConstShift8_ssa(x int64) int8 {
+ return int8(x) << uint8(0xff) << uint8(1)
+}
+
+func testOverflowConstShift() {
+	want := int64(0)
+	for x := int64(-127); x < int64(127); x++ {
+		got := overflowConstShift64_ssa(x)
+		if want != got {
+			fmt.Printf("overflowShift64 failed, wanted %d got %d\n", want, got)
+			failed = true
+		}
+		got = int64(overflowConstShift32_ssa(x))
+		if want != got {
+			fmt.Printf("overflowShift32 failed, wanted %d got %d\n", want, got)
+			failed = true
+		}
+		got = int64(overflowConstShift16_ssa(x))
+		if want != got {
+			fmt.Printf("overflowShift16 failed, wanted %d got %d\n", want, got)
+			failed = true
+		}
+		got = int64(overflowConstShift8_ssa(x))
+		if want != got {
+			fmt.Printf("overflowShift8 failed, wanted %d got %d\n", want, got)
+			failed = true
+		}
+	}
+}
+
+// test64BitConstMult tests that rewrite rules don't fold 64 bit constants
+// into multiply instructions.
+func test64BitConstMult() {
+ want := int64(103079215109)
+ if got := test64BitConstMult_ssa(1, 2); want != got {
+ println("test64BitConstMult failed, wanted", want, "got", got)
+ failed = true
+ }
+}
+
+//go:noinline
+func test64BitConstMult_ssa(a, b int64) int64 {
+ return 34359738369*a + b*34359738370
+}
+
+// test64BitConstAdd tests that rewrite rules don't fold 64 bit constants
+// into add instructions.
+func test64BitConstAdd() {
+ want := int64(3567671782835376650)
+ if got := test64BitConstAdd_ssa(1, 2); want != got {
+ println("test64BitConstAdd failed, wanted", want, "got", got)
+ failed = true
+ }
+}
+
+//go:noinline
+func test64BitConstAdd_ssa(a, b int64) int64 {
+ return a + 575815584948629622 + b + 2991856197886747025
+}
+
+// testRegallocCVSpill tests that regalloc spills a value whose last use is the
+// current value.
+func testRegallocCVSpill() {
+ want := int8(-9)
+ if got := testRegallocCVSpill_ssa(1, 2, 3, 4); want != got {
+ println("testRegallocCVSpill failed, wanted", want, "got", got)
+ failed = true
+ }
+}
+
+//go:noinline
+func testRegallocCVSpill_ssa(a, b, c, d int8) int8 {
+ return a + -32 + b + 63*c*-87*d
+}
+
+func testBitwiseLogic() {
+ a, b := uint32(57623283), uint32(1314713839)
+ if want, got := uint32(38551779), testBitwiseAnd_ssa(a, b); want != got {
+ println("testBitwiseAnd failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := uint32(1333785343), testBitwiseOr_ssa(a, b); want != got {
+ println("testBitwiseOr failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := uint32(1295233564), testBitwiseXor_ssa(a, b); want != got {
+ println("testBitwiseXor failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := int32(832), testBitwiseLsh_ssa(13, 4, 2); want != got {
+ println("testBitwiseLsh failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := int32(0), testBitwiseLsh_ssa(13, 25, 15); want != got {
+ println("testBitwiseLsh failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := int32(0), testBitwiseLsh_ssa(-13, 25, 15); want != got {
+ println("testBitwiseLsh failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := int32(-13), testBitwiseRsh_ssa(-832, 4, 2); want != got {
+ println("testBitwiseRsh failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := int32(0), testBitwiseRsh_ssa(13, 25, 15); want != got {
+ println("testBitwiseRsh failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := int32(-1), testBitwiseRsh_ssa(-13, 25, 15); want != got {
+ println("testBitwiseRsh failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := uint32(0x3ffffff), testBitwiseRshU_ssa(0xffffffff, 4, 2); want != got {
+ println("testBitwiseRshU failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := uint32(0), testBitwiseRshU_ssa(13, 25, 15); want != got {
+ println("testBitwiseRshU failed, wanted", want, "got", got)
+ failed = true
+ }
+ if want, got := uint32(0), testBitwiseRshU_ssa(0x8aaaaaaa, 25, 15); want != got {
+ println("testBitwiseRshU failed, wanted", want, "got", got)
+ failed = true
+ }
+}
+
+//go:noinline
+func testBitwiseAnd_ssa(a, b uint32) uint32 {
+ return a & b
+}
+
+//go:noinline
+func testBitwiseOr_ssa(a, b uint32) uint32 {
+ return a | b
+}
+
+//go:noinline
+func testBitwiseXor_ssa(a, b uint32) uint32 {
+ return a ^ b
+}
+
+//go:noinline
+func testBitwiseLsh_ssa(a int32, b, c uint32) int32 {
+ return a << b << c
+}
+
+//go:noinline
+func testBitwiseRsh_ssa(a int32, b, c uint32) int32 {
+ return a >> b >> c
+}
+
+//go:noinline
+func testBitwiseRshU_ssa(a uint32, b, c uint32) uint32 {
+ return a >> b >> c
+}
+
+//go:noinline
+func testShiftCX_ssa() int {
+ v1 := uint8(3)
+ v4 := (v1 * v1) ^ v1 | v1 - v1 - v1&v1 ^ uint8(3+2) + v1*1>>0 - v1 | 1 | v1<<(2*3|0-0*0^1)
+ v5 := v4>>(3-0-uint(3)) | v1 | v1 + v1 ^ v4<<(0+1|3&1)<<(uint64(1)<<0*2*0<<0) ^ v1
+ v6 := v5 ^ (v1+v1)*v1 | v1 | v1*v1>>(v1&v1)>>(uint(1)<<0*uint(3)>>1)*v1<<2*v1<<v1 - v1>>2 | (v4 - v1) ^ v1 + v1 ^ v1>>1 | v1 + v1 - v1 ^ v1
+ v7 := v6 & v5 << 0
+ v1++
+ v11 := 2&1 ^ 0 + 3 | int(0^0)<<1>>(1*0*3) ^ 0*0 ^ 3&0*3&3 ^ 3*3 ^ 1 ^ int(2)<<(2*3) + 2 | 2 | 2 ^ 2 + 1 | 3 | 0 ^ int(1)>>1 ^ 2 // int
+ v7--
+ return int(uint64(2*1)<<(3-2)<<uint(3>>v7)-2)&v11 | v11 - int(2)<<0>>(2-1)*(v11*0&v11<<1<<(uint8(2)+v4))
+}
+
+func testShiftCX() {
+ want := 141
+ if got := testShiftCX_ssa(); want != got {
+ println("testShiftCX failed, wanted", want, "got", got)
+ failed = true
+ }
+}
+
+// testSubqToNegq ensures that the SUBQ -> NEGQ translation works correctly.
+func testSubqToNegq() {
+ want := int64(-318294940372190156)
+ if got := testSubqToNegq_ssa(1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2); want != got {
+ println("testSubqToNegq failed, wanted", want, "got", got)
+ failed = true
+ }
+}
+
+//go:noinline
+func testSubqToNegq_ssa(a, b, c, d, e, f, g, h, i, j, k int64) int64 {
+ return a + 8207351403619448057 - b - 1779494519303207690 + c*8810076340510052032*d - 4465874067674546219 - e*4361839741470334295 - f + 8688847565426072650*g*8065564729145417479
+}
+
+func testOcom() {
+ want1, want2 := int32(0x55555555), int32(-0x55555556)
+ if got1, got2 := testOcom_ssa(0x55555555, 0x55555555); want1 != got1 || want2 != got2 {
+		println("testOcom failed, wanted", want1, "and", want2,
+ "got", got1, "and", got2)
+ failed = true
+ }
+}
+
+//go:noinline
+func testOcom_ssa(a, b int32) (int32, int32) {
+ return ^^^^a, ^^^^^b
+}
+
+func lrot1_ssa(w uint8, x uint16, y uint32, z uint64) (a uint8, b uint16, c uint32, d uint64) {
+ a = (w << 5) | (w >> 3)
+ b = (x << 13) | (x >> 3)
+ c = (y << 29) | (y >> 3)
+ d = (z << 61) | (z >> 3)
+ return
+}
+
+//go:noinline
+func lrot2_ssa(w, n uint32) uint32 {
+ // Want to be sure that a "rotate by 32" which
+ // is really 0 | (w >> 0) == w
+ // is correctly compiled.
+ return (w << n) | (w >> (32 - n))
+}
+
+//go:noinline
+func lrot3_ssa(w uint32) uint32 {
+ // Want to be sure that a "rotate by 32" which
+ // is really 0 | (w >> 0) == w
+ // is correctly compiled.
+ return (w << 32) | (w >> (32 - 32))
+}
+
+func testLrot() {
+ wantA, wantB, wantC, wantD := uint8(0xe1), uint16(0xe001),
+ uint32(0xe0000001), uint64(0xe000000000000001)
+ a, b, c, d := lrot1_ssa(0xf, 0xf, 0xf, 0xf)
+ if a != wantA || b != wantB || c != wantC || d != wantD {
+ println("lrot1_ssa(0xf, 0xf, 0xf, 0xf)=",
+ wantA, wantB, wantC, wantD, ", got", a, b, c, d)
+ failed = true
+ }
+ x := lrot2_ssa(0xb0000001, 32)
+ wantX := uint32(0xb0000001)
+ if x != wantX {
+ println("lrot2_ssa(0xb0000001, 32)=",
+ wantX, ", got", x)
+ failed = true
+ }
+ x = lrot3_ssa(0xb0000001)
+ if x != wantX {
+ println("lrot3_ssa(0xb0000001)=",
+ wantX, ", got", x)
+ failed = true
+ }
+
+}
+
+//go:noinline
+func sub1_ssa() uint64 {
+ v1 := uint64(3) // uint64
+ return v1*v1 - (v1&v1)&v1
+}
+func sub2_ssa() uint8 {
+ switch {
+ }
+ v1 := uint8(0)
+ v3 := v1 + v1 + v1 ^ v1 | 3 + v1 ^ v1 | v1 ^ v1
+ v1-- // dev.ssa doesn't see this one
+ return v1 ^ v1*v1 - v3
+}
+
+func testSubConst() {
+ x1 := sub1_ssa()
+ want1 := uint64(6)
+ if x1 != want1 {
+ println("sub1_ssa()=", want1, ", got", x1)
+ failed = true
+ }
+ x2 := sub2_ssa()
+ want2 := uint8(251)
+ if x2 != want2 {
+ println("sub2_ssa()=", want2, ", got", x2)
+ failed = true
+ }
+}
+
+//go:noinline
+func orPhi_ssa(a bool, x int) int {
+ v := 0
+ if a {
+ v = -1
+ } else {
+ v = -1
+ }
+ return x | v
+}
+
+func testOrPhi() {
+	if want, got := -1, orPhi_ssa(true, 4); got != want {
+		println("orPhi_ssa(true, 4)=", got, " want ", want)
+		failed = true
+	}
+	if want, got := -1, orPhi_ssa(false, 0); got != want {
+		println("orPhi_ssa(false, 0)=", got, " want ", want)
+		failed = true
+	}
+}
+
+var failed = false
+
+func main() {
+
+ test64BitConstMult()
+ test64BitConstAdd()
+ testRegallocCVSpill()
+ testSubqToNegq()
+ testBitwiseLogic()
+ testOcom()
+ testLrot()
+ testShiftCX()
+ testSubConst()
+ testOverflowConstShift()
+ testArithConstShift()
+ testArithRshConst()
+ testLargeConst()
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+package main
+
+var failed = false
+
+//go:noinline
+func testSliceLenCap12_ssa(a [10]int, i, j int) (int, int) {
+ b := a[i:j]
+ return len(b), cap(b)
+}
+
+//go:noinline
+func testSliceLenCap1_ssa(a [10]int, i, j int) (int, int) {
+ b := a[i:]
+ return len(b), cap(b)
+}
+
+//go:noinline
+func testSliceLenCap2_ssa(a [10]int, i, j int) (int, int) {
+ b := a[:j]
+ return len(b), cap(b)
+}
+
+func testSliceLenCap() {
+ a := [10]int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
+ tests := [...]struct {
+ fn func(a [10]int, i, j int) (int, int)
+ i, j int // slice range
+ l, c int // len, cap
+ }{
+ // -1 means the value is not used.
+ {testSliceLenCap12_ssa, 0, 0, 0, 10},
+ {testSliceLenCap12_ssa, 0, 1, 1, 10},
+ {testSliceLenCap12_ssa, 0, 10, 10, 10},
+ {testSliceLenCap12_ssa, 10, 10, 0, 0},
+ {testSliceLenCap12_ssa, 0, 5, 5, 10},
+ {testSliceLenCap12_ssa, 5, 5, 0, 5},
+ {testSliceLenCap12_ssa, 5, 10, 5, 5},
+ {testSliceLenCap1_ssa, 0, -1, 0, 10},
+ {testSliceLenCap1_ssa, 5, -1, 5, 5},
+ {testSliceLenCap1_ssa, 10, -1, 0, 0},
+ {testSliceLenCap2_ssa, -1, 0, 0, 10},
+ {testSliceLenCap2_ssa, -1, 5, 5, 10},
+ {testSliceLenCap2_ssa, -1, 10, 10, 10},
+ }
+
+ for i, t := range tests {
+		if l, c := t.fn(a, t.i, t.j); l != t.l || c != t.c {
+ println("#", i, " len(a[", t.i, ":", t.j, "]), cap(a[", t.i, ":", t.j, "]) =", l, c,
+ ", want", t.l, t.c)
+ failed = true
+ }
+ }
+}
+
+//go:noinline
+func testSliceGetElement_ssa(a [10]int, i, j, p int) int {
+ return a[i:j][p]
+}
+
+func testSliceGetElement() {
+ a := [10]int{0, 10, 20, 30, 40, 50, 60, 70, 80, 90}
+ tests := [...]struct {
+ i, j, p int
+ want int // a[i:j][p]
+ }{
+ {0, 10, 2, 20},
+ {0, 5, 4, 40},
+ {5, 10, 3, 80},
+ {1, 9, 7, 80},
+ }
+
+ for i, t := range tests {
+ if got := testSliceGetElement_ssa(a, t.i, t.j, t.p); got != t.want {
+ println("#", i, " a[", t.i, ":", t.j, "][", t.p, "] = ", got, " wanted ", t.want)
+ failed = true
+ }
+ }
+}
+
+//go:noinline
+func testSliceSetElement_ssa(a *[10]int, i, j, p, x int) {
+ (*a)[i:j][p] = x
+}
+
+func testSliceSetElement() {
+ a := [10]int{0, 10, 20, 30, 40, 50, 60, 70, 80, 90}
+ tests := [...]struct {
+ i, j, p int
+ want int // a[i:j][p]
+ }{
+ {0, 10, 2, 17},
+ {0, 5, 4, 11},
+ {5, 10, 3, 28},
+ {1, 9, 7, 99},
+ }
+
+ for i, t := range tests {
+ testSliceSetElement_ssa(&a, t.i, t.j, t.p, t.want)
+ if got := a[t.i+t.p]; got != t.want {
+ println("#", i, " a[", t.i, ":", t.j, "][", t.p, "] = ", got, " wanted ", t.want)
+ failed = true
+ }
+ }
+}
+
+func testSlicePanic1() {
+ defer func() {
+ if r := recover(); r != nil {
+			println("panicked as expected")
+ }
+ }()
+
+ a := [10]int{0, 10, 20, 30, 40, 50, 60, 70, 80, 90}
+ testSliceLenCap12_ssa(a, 3, 12)
+ println("expected to panic, but didn't")
+ failed = true
+}
+
+func testSlicePanic2() {
+ defer func() {
+ if r := recover(); r != nil {
+			println("panicked as expected")
+ }
+ }()
+
+ a := [10]int{0, 10, 20, 30, 40, 50, 60, 70, 80, 90}
+ testSliceGetElement_ssa(a, 3, 7, 4)
+ println("expected to panic, but didn't")
+ failed = true
+}
+
+func main() {
+ testSliceLenCap()
+ testSliceGetElement()
+ testSliceSetElement()
+ testSlicePanic1()
+ testSlicePanic2()
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// run
+
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Tests type assertion expressions and statements
+
+package main
+
+import (
+ "fmt"
+ "runtime"
+)
+
+type (
+ S struct{}
+ T struct{}
+
+ I interface {
+ F()
+ }
+)
+
+var (
+ s *S
+ t *T
+)
+
+func (s *S) F() {}
+func (t *T) F() {}
+
+func e2t_ssa(e interface{}) *T {
+ return e.(*T)
+}
+
+func i2t_ssa(i I) *T {
+ return i.(*T)
+}
+
+func testAssertE2TOk() {
+ if got := e2t_ssa(t); got != t {
+ fmt.Printf("e2t_ssa(t)=%v want %v", got, t)
+ failed = true
+ }
+}
+
+func testAssertE2TPanic() {
+ var got *T
+ defer func() {
+ if got != nil {
+ fmt.Printf("e2t_ssa(s)=%v want nil", got)
+ failed = true
+ }
+ e := recover()
+ err, ok := e.(*runtime.TypeAssertionError)
+		if !ok {
+			fmt.Printf("e2t_ssa(s) panic type %T", e)
+			failed = true
+			return
+		}
+ want := "interface conversion: interface {} is *main.S, not *main.T"
+ if err.Error() != want {
+ fmt.Printf("e2t_ssa(s) wrong error, want '%s', got '%s'\n", want, err.Error())
+ failed = true
+ }
+ }()
+ got = e2t_ssa(s)
+ fmt.Printf("e2t_ssa(s) should panic")
+ failed = true
+}
+
+func testAssertI2TOk() {
+ if got := i2t_ssa(t); got != t {
+ fmt.Printf("i2t_ssa(t)=%v want %v", got, t)
+ failed = true
+ }
+}
+
+func testAssertI2TPanic() {
+ var got *T
+ defer func() {
+ if got != nil {
+ fmt.Printf("i2t_ssa(s)=%v want nil", got)
+ failed = true
+ }
+ e := recover()
+ err, ok := e.(*runtime.TypeAssertionError)
+		if !ok {
+			fmt.Printf("i2t_ssa(s) panic type %T", e)
+			failed = true
+			return
+		}
+ want := "interface conversion: main.I is *main.S, not *main.T"
+ if err.Error() != want {
+ fmt.Printf("i2t_ssa(s) wrong error, want '%s', got '%s'\n", want, err.Error())
+ failed = true
+ }
+ }()
+ got = i2t_ssa(s)
+ fmt.Printf("i2t_ssa(s) should panic")
+ failed = true
+}
+
+func e2t2_ssa(e interface{}) (*T, bool) {
+ t, ok := e.(*T)
+ return t, ok
+}
+
+func i2t2_ssa(i I) (*T, bool) {
+ t, ok := i.(*T)
+ return t, ok
+}
+
+func testAssertE2T2() {
+ if got, ok := e2t2_ssa(t); !ok || got != t {
+ fmt.Printf("e2t2_ssa(t)=(%v, %v) want (%v, %v)", got, ok, t, true)
+ failed = true
+ }
+ if got, ok := e2t2_ssa(s); ok || got != nil {
+ fmt.Printf("e2t2_ssa(s)=(%v, %v) want (%v, %v)", got, ok, nil, false)
+ failed = true
+ }
+}
+
+func testAssertI2T2() {
+ if got, ok := i2t2_ssa(t); !ok || got != t {
+ fmt.Printf("i2t2_ssa(t)=(%v, %v) want (%v, %v)", got, ok, t, true)
+ failed = true
+ }
+ if got, ok := i2t2_ssa(s); ok || got != nil {
+ fmt.Printf("i2t2_ssa(s)=(%v, %v) want (%v, %v)", got, ok, nil, false)
+ failed = true
+ }
+}
+
+var failed = false
+
+func main() {
+ testAssertE2TOk()
+ testAssertE2TPanic()
+ testAssertI2TOk()
+ testAssertI2TPanic()
+ testAssertE2T2()
+ testAssertI2T2()
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// run
+
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Tests continue and break.
+
+package main
+
+func continuePlain_ssa() int {
+ var n int
+ for i := 0; i < 10; i++ {
+ if i == 6 {
+ continue
+ }
+ n = i
+ }
+ return n
+}
+
+func continueLabeled_ssa() int {
+ var n int
+Next:
+ for i := 0; i < 10; i++ {
+ if i == 6 {
+ continue Next
+ }
+ n = i
+ }
+ return n
+}
+
+func continuePlainInner_ssa() int {
+ var n int
+ for j := 0; j < 30; j += 10 {
+ for i := 0; i < 10; i++ {
+ if i == 6 {
+ continue
+ }
+ n = i
+ }
+ n += j
+ }
+ return n
+}
+
+func continueLabeledInner_ssa() int {
+ var n int
+ for j := 0; j < 30; j += 10 {
+ Next:
+ for i := 0; i < 10; i++ {
+ if i == 6 {
+ continue Next
+ }
+ n = i
+ }
+ n += j
+ }
+ return n
+}
+
+func continueLabeledOuter_ssa() int {
+ var n int
+Next:
+ for j := 0; j < 30; j += 10 {
+ for i := 0; i < 10; i++ {
+ if i == 6 {
+ continue Next
+ }
+ n = i
+ }
+ n += j
+ }
+ return n
+}
+
+func breakPlain_ssa() int {
+ var n int
+ for i := 0; i < 10; i++ {
+ if i == 6 {
+ break
+ }
+ n = i
+ }
+ return n
+}
+
+func breakLabeled_ssa() int {
+ var n int
+Next:
+ for i := 0; i < 10; i++ {
+ if i == 6 {
+ break Next
+ }
+ n = i
+ }
+ return n
+}
+
+func breakPlainInner_ssa() int {
+ var n int
+ for j := 0; j < 30; j += 10 {
+ for i := 0; i < 10; i++ {
+ if i == 6 {
+ break
+ }
+ n = i
+ }
+ n += j
+ }
+ return n
+}
+
+func breakLabeledInner_ssa() int {
+ var n int
+ for j := 0; j < 30; j += 10 {
+ Next:
+ for i := 0; i < 10; i++ {
+ if i == 6 {
+ break Next
+ }
+ n = i
+ }
+ n += j
+ }
+ return n
+}
+
+func breakLabeledOuter_ssa() int {
+ var n int
+Next:
+ for j := 0; j < 30; j += 10 {
+ for i := 0; i < 10; i++ {
+ if i == 6 {
+ break Next
+ }
+ n = i
+ }
+ n += j
+ }
+ return n
+}
+
+var g, h int // globals to ensure optimizations don't collapse our switch statements
+
+func switchPlain_ssa() int {
+ var n int
+ switch g {
+ case 0:
+ n = 1
+ break
+ n = 2
+ }
+ return n
+}
+
+func switchLabeled_ssa() int {
+ var n int
+Done:
+ switch g {
+ case 0:
+ n = 1
+ break Done
+ n = 2
+ }
+ return n
+}
+
+func switchPlainInner_ssa() int {
+ var n int
+ switch g {
+ case 0:
+ n = 1
+ switch h {
+ case 0:
+ n += 10
+ break
+ }
+ n = 2
+ }
+ return n
+}
+
+func switchLabeledInner_ssa() int {
+ var n int
+ switch g {
+ case 0:
+ n = 1
+ Done:
+ switch h {
+ case 0:
+ n += 10
+ break Done
+ }
+ n = 2
+ }
+ return n
+}
+
+func switchLabeledOuter_ssa() int {
+ var n int
+Done:
+ switch g {
+ case 0:
+ n = 1
+ switch h {
+ case 0:
+ n += 10
+ break Done
+ }
+ n = 2
+ }
+ return n
+}
+
+func main() {
+ tests := [...]struct {
+ name string
+ fn func() int
+ want int
+ }{
+ {"continuePlain_ssa", continuePlain_ssa, 9},
+ {"continueLabeled_ssa", continueLabeled_ssa, 9},
+ {"continuePlainInner_ssa", continuePlainInner_ssa, 29},
+ {"continueLabeledInner_ssa", continueLabeledInner_ssa, 29},
+ {"continueLabeledOuter_ssa", continueLabeledOuter_ssa, 5},
+
+ {"breakPlain_ssa", breakPlain_ssa, 5},
+ {"breakLabeled_ssa", breakLabeled_ssa, 5},
+ {"breakPlainInner_ssa", breakPlainInner_ssa, 25},
+ {"breakLabeledInner_ssa", breakLabeledInner_ssa, 25},
+ {"breakLabeledOuter_ssa", breakLabeledOuter_ssa, 5},
+
+ {"switchPlain_ssa", switchPlain_ssa, 1},
+ {"switchLabeled_ssa", switchLabeled_ssa, 1},
+ {"switchPlainInner_ssa", switchPlainInner_ssa, 2},
+ {"switchLabeledInner_ssa", switchLabeledInner_ssa, 2},
+ {"switchLabeledOuter_ssa", switchLabeledOuter_ssa, 11},
+
+ // no select tests; they're identical to switch
+ }
+
+ var failed bool
+ for _, test := range tests {
+		if got := test.fn(); got != test.want {
+ print(test.name, "()=", got, ", want ", test.want, "\n")
+ failed = true
+ }
+ }
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// chan_ssa.go tests chan operations.
+package main
+
+import "fmt"
+
+var failed = false
+
+//go:noinline
+func lenChan_ssa(v chan int) int {
+ return len(v)
+}
+
+//go:noinline
+func capChan_ssa(v chan int) int {
+ return cap(v)
+}
+
+func testLenChan() {
+
+ v := make(chan int, 10)
+ v <- 1
+ v <- 1
+ v <- 1
+
+ if want, got := 3, lenChan_ssa(v); got != want {
+ fmt.Printf("expected len(chan) = %d, got %d", want, got)
+ failed = true
+ }
+}
+
+func testLenNilChan() {
+
+ var v chan int
+ if want, got := 0, lenChan_ssa(v); got != want {
+ fmt.Printf("expected len(nil) = %d, got %d", want, got)
+ failed = true
+ }
+}
+
+func testCapChan() {
+
+ v := make(chan int, 25)
+
+ if want, got := 25, capChan_ssa(v); got != want {
+ fmt.Printf("expected cap(chan) = %d, got %d", want, got)
+ failed = true
+ }
+}
+
+func testCapNilChan() {
+
+ var v chan int
+ if want, got := 0, capChan_ssa(v); got != want {
+ fmt.Printf("expected cap(nil) = %d, got %d", want, got)
+ failed = true
+ }
+}
+
+func main() {
+ testLenChan()
+ testLenNilChan()
+
+ testCapChan()
+ testCapNilChan()
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// closure_ssa.go tests closure operations.
+package main
+
+import "fmt"
+
+var failed = false
+
+//go:noinline
+func testCFunc_ssa() int {
+ a := 0
+ b := func() {
+ switch {
+ }
+ a++
+ }
+ b()
+ b()
+ return a
+}
+
+func testCFunc() {
+ if want, got := 2, testCFunc_ssa(); got != want {
+ fmt.Printf("expected %d, got %d", want, got)
+ failed = true
+ }
+}
+
+func main() {
+ testCFunc()
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// cmp_ssa.go tests compare simplification operations.
+package main
+
+import "fmt"
+
+var failed = false
+
+//go:noinline
+func eq_ssa(a int64) bool {
+ return 4+a == 10
+}
+
+//go:noinline
+func neq_ssa(a int64) bool {
+ return 10 != a+4
+}
+
+func testCmp() {
+ if wanted, got := true, eq_ssa(6); wanted != got {
+ fmt.Printf("eq_ssa: expected %v, got %v\n", wanted, got)
+ failed = true
+ }
+ if wanted, got := false, eq_ssa(7); wanted != got {
+ fmt.Printf("eq_ssa: expected %v, got %v\n", wanted, got)
+ failed = true
+ }
+
+ if wanted, got := false, neq_ssa(6); wanted != got {
+ fmt.Printf("neq_ssa: expected %v, got %v\n", wanted, got)
+ failed = true
+ }
+ if wanted, got := true, neq_ssa(7); wanted != got {
+ fmt.Printf("neq_ssa: expected %v, got %v\n", wanted, got)
+ failed = true
+ }
+}
+
+func main() {
+ testCmp()
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// run
+
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Test compound objects
+
+package main
+
+import "fmt"
+
+func string_ssa(a, b string, x bool) string {
+ s := ""
+ if x {
+ s = a
+ } else {
+ s = b
+ }
+ return s
+}
+
+func testString() {
+ a := "foo"
+ b := "barz"
+ if want, got := a, string_ssa(a, b, true); got != want {
+ fmt.Printf("string_ssa(%v, %v, true) = %v, want %v\n", a, b, got, want)
+ failed = true
+ }
+ if want, got := b, string_ssa(a, b, false); got != want {
+ fmt.Printf("string_ssa(%v, %v, false) = %v, want %v\n", a, b, got, want)
+ failed = true
+ }
+}
+
+func complex64_ssa(a, b complex64, x bool) complex64 {
+ switch {
+ }
+ var c complex64
+ if x {
+ c = a
+ } else {
+ c = b
+ }
+ return c
+}
+
+func complex128_ssa(a, b complex128, x bool) complex128 {
+ switch {
+ }
+ var c complex128
+ if x {
+ c = a
+ } else {
+ c = b
+ }
+ return c
+}
+
+func testComplex64() {
+ var a complex64 = 1 + 2i
+ var b complex64 = 3 + 4i
+
+ if want, got := a, complex64_ssa(a, b, true); got != want {
+ fmt.Printf("complex64_ssa(%v, %v, true) = %v, want %v\n", a, b, got, want)
+ failed = true
+ }
+ if want, got := b, complex64_ssa(a, b, false); got != want {
+ fmt.Printf("complex64_ssa(%v, %v, true) = %v, want %v\n", a, b, got, want)
+ failed = true
+ }
+}
+
+func testComplex128() {
+ var a complex128 = 1 + 2i
+ var b complex128 = 3 + 4i
+
+ if want, got := a, complex128_ssa(a, b, true); got != want {
+ fmt.Printf("complex128_ssa(%v, %v, true) = %v, want %v\n", a, b, got, want)
+ failed = true
+ }
+ if want, got := b, complex128_ssa(a, b, false); got != want {
+ fmt.Printf("complex128_ssa(%v, %v, true) = %v, want %v\n", a, b, got, want)
+ failed = true
+ }
+}
+
+func slice_ssa(a, b []byte, x bool) []byte {
+ var s []byte
+ if x {
+ s = a
+ } else {
+ s = b
+ }
+ return s
+}
+
+func testSlice() {
+ a := []byte{3, 4, 5}
+ b := []byte{7, 8, 9}
+ if want, got := byte(3), slice_ssa(a, b, true)[0]; got != want {
+ fmt.Printf("slice_ssa(%v, %v, true) = %v, want %v\n", a, b, got, want)
+ failed = true
+ }
+ if want, got := byte(7), slice_ssa(a, b, false)[0]; got != want {
+ fmt.Printf("slice_ssa(%v, %v, false) = %v, want %v\n", a, b, got, want)
+ failed = true
+ }
+}
+
+func interface_ssa(a, b interface{}, x bool) interface{} {
+ var s interface{}
+ if x {
+ s = a
+ } else {
+ s = b
+ }
+ return s
+}
+
+func testInterface() {
+ a := interface{}(3)
+ b := interface{}(4)
+ if want, got := 3, interface_ssa(a, b, true).(int); got != want {
+ fmt.Printf("interface_ssa(%v, %v, true) = %v, want %v\n", a, b, got, want)
+ failed = true
+ }
+ if want, got := 4, interface_ssa(a, b, false).(int); got != want {
+ fmt.Printf("interface_ssa(%v, %v, false) = %v, want %v\n", a, b, got, want)
+ failed = true
+ }
+}
+
+var failed = false
+
+func main() {
+ testString()
+ testSlice()
+ testInterface()
+ testComplex64()
+ testComplex128()
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// run
+// autogenerated from gen/copyGen.go - do not edit!
+package main
+
+import "fmt"
+
+type T1 struct {
+ pre [8]byte
+ mid [1]byte
+ post [8]byte
+}
+
+func t1copy_ssa(y, x *[1]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy1() {
+ a := T1{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1]byte{0}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [1]byte{100}
+ t1copy_ssa(&a.mid, &x)
+ want := T1{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1]byte{100}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t1copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T2 struct {
+ pre [8]byte
+ mid [2]byte
+ post [8]byte
+}
+
+func t2copy_ssa(y, x *[2]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy2() {
+ a := T2{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [2]byte{0, 1}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [2]byte{100, 101}
+ t2copy_ssa(&a.mid, &x)
+ want := T2{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [2]byte{100, 101}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t2copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T3 struct {
+ pre [8]byte
+ mid [3]byte
+ post [8]byte
+}
+
+func t3copy_ssa(y, x *[3]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy3() {
+ a := T3{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [3]byte{0, 1, 2}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [3]byte{100, 101, 102}
+ t3copy_ssa(&a.mid, &x)
+ want := T3{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [3]byte{100, 101, 102}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t3copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T4 struct {
+ pre [8]byte
+ mid [4]byte
+ post [8]byte
+}
+
+func t4copy_ssa(y, x *[4]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy4() {
+ a := T4{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [4]byte{0, 1, 2, 3}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [4]byte{100, 101, 102, 103}
+ t4copy_ssa(&a.mid, &x)
+ want := T4{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [4]byte{100, 101, 102, 103}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t4copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T5 struct {
+ pre [8]byte
+ mid [5]byte
+ post [8]byte
+}
+
+func t5copy_ssa(y, x *[5]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy5() {
+ a := T5{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [5]byte{0, 1, 2, 3, 4}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [5]byte{100, 101, 102, 103, 104}
+ t5copy_ssa(&a.mid, &x)
+ want := T5{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [5]byte{100, 101, 102, 103, 104}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t5copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T6 struct {
+ pre [8]byte
+ mid [6]byte
+ post [8]byte
+}
+
+func t6copy_ssa(y, x *[6]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy6() {
+ a := T6{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [6]byte{0, 1, 2, 3, 4, 5}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [6]byte{100, 101, 102, 103, 104, 105}
+ t6copy_ssa(&a.mid, &x)
+ want := T6{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [6]byte{100, 101, 102, 103, 104, 105}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t6copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T7 struct {
+ pre [8]byte
+ mid [7]byte
+ post [8]byte
+}
+
+func t7copy_ssa(y, x *[7]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy7() {
+ a := T7{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [7]byte{0, 1, 2, 3, 4, 5, 6}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [7]byte{100, 101, 102, 103, 104, 105, 106}
+ t7copy_ssa(&a.mid, &x)
+ want := T7{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [7]byte{100, 101, 102, 103, 104, 105, 106}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t7copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T8 struct {
+ pre [8]byte
+ mid [8]byte
+ post [8]byte
+}
+
+func t8copy_ssa(y, x *[8]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy8() {
+ a := T8{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [8]byte{0, 1, 2, 3, 4, 5, 6, 7}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [8]byte{100, 101, 102, 103, 104, 105, 106, 107}
+ t8copy_ssa(&a.mid, &x)
+ want := T8{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [8]byte{100, 101, 102, 103, 104, 105, 106, 107}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t8copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T9 struct {
+ pre [8]byte
+ mid [9]byte
+ post [8]byte
+}
+
+func t9copy_ssa(y, x *[9]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy9() {
+ a := T9{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [9]byte{0, 1, 2, 3, 4, 5, 6, 7, 8}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [9]byte{100, 101, 102, 103, 104, 105, 106, 107, 108}
+ t9copy_ssa(&a.mid, &x)
+ want := T9{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [9]byte{100, 101, 102, 103, 104, 105, 106, 107, 108}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t9copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T10 struct {
+ pre [8]byte
+ mid [10]byte
+ post [8]byte
+}
+
+func t10copy_ssa(y, x *[10]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy10() {
+ a := T10{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [10]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [10]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109}
+ t10copy_ssa(&a.mid, &x)
+ want := T10{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [10]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t10copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T15 struct {
+ pre [8]byte
+ mid [15]byte
+ post [8]byte
+}
+
+func t15copy_ssa(y, x *[15]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy15() {
+ a := T15{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [15]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [15]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114}
+ t15copy_ssa(&a.mid, &x)
+ want := T15{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [15]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t15copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T16 struct {
+ pre [8]byte
+ mid [16]byte
+ post [8]byte
+}
+
+func t16copy_ssa(y, x *[16]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy16() {
+ a := T16{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [16]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [16]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115}
+ t16copy_ssa(&a.mid, &x)
+ want := T16{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [16]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t16copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T17 struct {
+ pre [8]byte
+ mid [17]byte
+ post [8]byte
+}
+
+func t17copy_ssa(y, x *[17]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy17() {
+ a := T17{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [17]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [17]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116}
+ t17copy_ssa(&a.mid, &x)
+ want := T17{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [17]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t17copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T23 struct {
+ pre [8]byte
+ mid [23]byte
+ post [8]byte
+}
+
+func t23copy_ssa(y, x *[23]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy23() {
+ a := T23{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [23]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [23]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122}
+ t23copy_ssa(&a.mid, &x)
+ want := T23{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [23]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t23copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T24 struct {
+ pre [8]byte
+ mid [24]byte
+ post [8]byte
+}
+
+func t24copy_ssa(y, x *[24]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy24() {
+ a := T24{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [24]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [24]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123}
+ t24copy_ssa(&a.mid, &x)
+ want := T24{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [24]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t24copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T25 struct {
+ pre [8]byte
+ mid [25]byte
+ post [8]byte
+}
+
+func t25copy_ssa(y, x *[25]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy25() {
+ a := T25{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [25]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [25]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124}
+ t25copy_ssa(&a.mid, &x)
+ want := T25{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [25]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t25copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T31 struct {
+ pre [8]byte
+ mid [31]byte
+ post [8]byte
+}
+
+func t31copy_ssa(y, x *[31]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy31() {
+ a := T31{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [31]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [31]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130}
+ t31copy_ssa(&a.mid, &x)
+ want := T31{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [31]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t31copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T32 struct {
+ pre [8]byte
+ mid [32]byte
+ post [8]byte
+}
+
+func t32copy_ssa(y, x *[32]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy32() {
+ a := T32{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [32]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [32]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131}
+ t32copy_ssa(&a.mid, &x)
+ want := T32{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [32]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t32copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T33 struct {
+ pre [8]byte
+ mid [33]byte
+ post [8]byte
+}
+
+func t33copy_ssa(y, x *[33]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy33() {
+ a := T33{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [33]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [33]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132}
+ t33copy_ssa(&a.mid, &x)
+ want := T33{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [33]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t33copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T63 struct {
+ pre [8]byte
+ mid [63]byte
+ post [8]byte
+}
+
+func t63copy_ssa(y, x *[63]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy63() {
+ a := T63{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [63]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [63]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162}
+ t63copy_ssa(&a.mid, &x)
+ want := T63{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [63]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t63copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T64 struct {
+ pre [8]byte
+ mid [64]byte
+ post [8]byte
+}
+
+func t64copy_ssa(y, x *[64]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy64() {
+ a := T64{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [64]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [64]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163}
+ t64copy_ssa(&a.mid, &x)
+ want := T64{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [64]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t64copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T65 struct {
+ pre [8]byte
+ mid [65]byte
+ post [8]byte
+}
+
+func t65copy_ssa(y, x *[65]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy65() {
+ a := T65{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [65]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [65]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164}
+ t65copy_ssa(&a.mid, &x)
+ want := T65{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [65]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t65copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1023 struct {
+ pre [8]byte
+ mid [1023]byte
+ post [8]byte
+}
+
+func t1023copy_ssa(y, x *[1023]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy1023() {
+ a := T1023{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1023]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 
94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 
8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [1023]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122}
+ t1023copy_ssa(&a.mid, &x)
+ want := T1023{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1023]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t1023copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1024 struct {
+ pre [8]byte
+ mid [1024]byte
+ post [8]byte
+}
+
+func t1024copy_ssa(y, x *[1024]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy1024() {
+ a := T1024{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1024]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [1024]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123}
+ t1024copy_ssa(&a.mid, &x)
+ want := T1024{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1024]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t1024copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1025 struct {
+ pre [8]byte
+ mid [1025]byte
+ post [8]byte
+}
+
+func t1025copy_ssa(y, x *[1025]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy1025() {
+ a := T1025{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1025]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [1025]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124}
+ t1025copy_ssa(&a.mid, &x)
+ want := T1025{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1025]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t1025copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1031 struct {
+ pre [8]byte
+ mid [1031]byte
+ post [8]byte
+}
+
+func t1031copy_ssa(y, x *[1031]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy1031() {
+ a := T1031{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1031]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 
94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 
8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [1031]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130}
+ t1031copy_ssa(&a.mid, &x)
+ want := T1031{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1031]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t1031copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1032 struct {
+ pre [8]byte
+ mid [1032]byte
+ post [8]byte
+}
+
+func t1032copy_ssa(y, x *[1032]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy1032() {
+ a := T1032{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1032]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 
94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 
8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [1032]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131}
+ t1032copy_ssa(&a.mid, &x)
+ want := T1032{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1032]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t1032copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1033 struct {
+ pre [8]byte
+ mid [1033]byte
+ post [8]byte
+}
+
+func t1033copy_ssa(y, x *[1033]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy1033() {
+ a := T1033{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1033]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 
94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 
8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [1033]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132}
+ t1033copy_ssa(&a.mid, &x)
+ want := T1033{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1033]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t1033copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1039 struct {
+ pre [8]byte
+ mid [1039]byte
+ post [8]byte
+}
+
+func t1039copy_ssa(y, x *[1039]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy1039() {
+ a := T1039{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1039]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 
94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 
8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [1039]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138}
+ t1039copy_ssa(&a.mid, &x)
+ want := T1039{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1039]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t1039copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1040 struct {
+ pre [8]byte
+ mid [1040]byte
+ post [8]byte
+}
+
+func t1040copy_ssa(y, x *[1040]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy1040() {
+ a := T1040{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1040]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 
94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 
8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [1040]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139}
+ t1040copy_ssa(&a.mid, &x)
+ want := T1040{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1040]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t1040copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1041 struct {
+ pre [8]byte
+ mid [1041]byte
+ post [8]byte
+}
+
+func t1041copy_ssa(y, x *[1041]byte) {
+ switch {
+ }
+ *y = *x
+}
+func testCopy1041() {
+ a := T1041{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1041]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 
94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 
8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ x := [1041]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 
196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140}
+ t1041copy_ssa(&a.mid, &x)
+ want := T1041{[8]byte{201, 202, 203, 204, 205, 206, 207, 208}, [1041]byte{100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 
184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140}, [8]byte{211, 212, 213, 214, 215, 216, 217, 218}}
+ if a != want {
+ fmt.Printf("t1041copy got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+var failed bool
+
+func main() {
+ testCopy1()
+ testCopy2()
+ testCopy3()
+ testCopy4()
+ testCopy5()
+ testCopy6()
+ testCopy7()
+ testCopy8()
+ testCopy9()
+ testCopy10()
+ testCopy15()
+ testCopy16()
+ testCopy17()
+ testCopy23()
+ testCopy24()
+ testCopy25()
+ testCopy31()
+ testCopy32()
+ testCopy33()
+ testCopy63()
+ testCopy64()
+ testCopy65()
+ testCopy1023()
+ testCopy1024()
+ testCopy1025()
+ testCopy1031()
+ testCopy1032()
+ testCopy1033()
+ testCopy1039()
+ testCopy1040()
+ testCopy1041()
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// run
+
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Test control flow
+
+package main
+
+// nor_ssa calculates NOR(a, b).
+// It is implemented in a way that generates
+// phi control values.
+func nor_ssa(a, b bool) bool {
+ var c bool
+ if a {
+ c = true
+ }
+ if b {
+ c = true
+ }
+ if c {
+ return false
+ }
+ return true
+}
+
+func testPhiControl() {
+ tests := [...][3]bool{ // a, b, want
+ {false, false, true},
+ {true, false, false},
+ {false, true, false},
+ {true, true, false},
+ }
+ for _, test := range tests {
+ a, b := test[0], test[1]
+ got := nor_ssa(a, b)
+ want := test[2]
+ if want != got {
+ print("nor(", a, ", ", b, ")=", want, " got ", got, "\n")
+ failed = true
+ }
+ }
+}
+
+func emptyRange_ssa(b []byte) bool {
+ for _, x := range b {
+ _ = x
+ }
+ return true
+}
+
+func testEmptyRange() {
+ if !emptyRange_ssa([]byte{}) {
+ println("emptyRange_ssa([]byte{})=false, want true")
+ failed = true
+ }
+}
+
+func switch_ssa(a int) int {
+ ret := 0
+ switch a {
+ case 5:
+ ret += 5
+ case 4:
+ ret += 4
+ case 3:
+ ret += 3
+ case 2:
+ ret += 2
+ case 1:
+ ret += 1
+ }
+ return ret
+
+}
+
+func fallthrough_ssa(a int) int {
+ ret := 0
+ switch a {
+ case 5:
+ ret++
+ fallthrough
+ case 4:
+ ret++
+ fallthrough
+ case 3:
+ ret++
+ fallthrough
+ case 2:
+ ret++
+ fallthrough
+ case 1:
+ ret++
+ }
+ return ret
+
+}
+
+func testFallthrough() {
+ for i := 0; i < 6; i++ {
+ if got := fallthrough_ssa(i); got != i {
+ println("fallthrough_ssa(i) =", got, "wanted", i)
+ failed = true
+ }
+ }
+}
+
+func testSwitch() {
+ for i := 0; i < 6; i++ {
+ if got := switch_ssa(i); got != i {
+ println("switch_ssa(i) =", got, "wanted", i)
+ failed = true
+ }
+ }
+}
+
+type junk struct {
+ step int
+}
+
+// flagOverwrite_ssa is intended to reproduce an issue seen where an XOR
+// was scheduled between a compare and branch, clearing flags.
+func flagOverwrite_ssa(s *junk, c int) int {
+ switch {
+ }
+ if '0' <= c && c <= '9' {
+ s.step = 0
+ return 1
+ }
+ if c == 'e' || c == 'E' {
+ s.step = 0
+ return 2
+ }
+ s.step = 0
+ return 3
+}
+
+func testFlagOverwrite() {
+ j := junk{}
+ if got := flagOverwrite_ssa(&j, ' '); got != 3 {
+ println("flagOverwrite_ssa =", got, "wanted 3")
+ failed = true
+ }
+}
+
+var failed = false
+
+func main() {
+ testPhiControl()
+ testEmptyRange()
+
+ testSwitch()
+ testFallthrough()
+
+ testFlagOverwrite()
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// compile
+
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Test that a defer in a function with no return
+// statement will compile correctly.
+
+package foo
+
+func deferNoReturn_ssa() {
+ defer func() { println("returned") }()
+ for {
+ println("loop")
+ }
+}
--- /dev/null
+// run
+
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Tests floating point arithmetic expressions
+
+package main
+
+import "fmt"
+
+// manysub_ssa is designed to tickle bugs that depend on register
+// pressure or unfriendly operand ordering in registers (and at
+// least once it succeeded in this).
+func manysub_ssa(a, b, c, d float64) (aa, ab, ac, ad, ba, bb, bc, bd, ca, cb, cc, cd, da, db, dc, dd float64) {
+ switch {
+ }
+ aa = a + 11.0 - a
+ ab = a - b
+ ac = a - c
+ ad = a - d
+ ba = b - a
+ bb = b + 22.0 - b
+ bc = b - c
+ bd = b - d
+ ca = c - a
+ cb = c - b
+ cc = c + 33.0 - c
+ cd = c - d
+ da = d - a
+ db = d - b
+ dc = d - c
+ dd = d + 44.0 - d
+ return
+}
+
+// fpspill_ssa attempts to trigger a bug where phis with floating point values
+// were stored in non-fp registers causing an error in doasm.
+func fpspill_ssa(a int) float64 {
+ switch {
+ }
+
+ ret := -1.0
+ switch a {
+ case 0:
+ ret = 1.0
+ case 1:
+ ret = 1.1
+ case 2:
+ ret = 1.2
+ case 3:
+ ret = 1.3
+ case 4:
+ ret = 1.4
+ case 5:
+ ret = 1.5
+ case 6:
+ ret = 1.6
+ case 7:
+ ret = 1.7
+ case 8:
+ ret = 1.8
+ case 9:
+ ret = 1.9
+ case 10:
+ ret = 1.10
+ case 11:
+ ret = 1.11
+ case 12:
+ ret = 1.12
+ case 13:
+ ret = 1.13
+ case 14:
+ ret = 1.14
+ case 15:
+ ret = 1.15
+ case 16:
+ ret = 1.16
+ }
+ return ret
+}
+
+func add64_ssa(a, b float64) float64 {
+ switch {
+ }
+ return a + b
+}
+
+func mul64_ssa(a, b float64) float64 {
+ switch {
+ }
+ return a * b
+}
+
+func sub64_ssa(a, b float64) float64 {
+ switch {
+ }
+ return a - b
+}
+
+func div64_ssa(a, b float64) float64 {
+ switch {
+ }
+ return a / b
+}
+
+func neg64_ssa(a, b float64) float64 {
+ switch {
+ }
+ return -a + -1*b
+}
+
+func add32_ssa(a, b float32) float32 {
+ switch {
+ }
+ return a + b
+}
+
+func mul32_ssa(a, b float32) float32 {
+ switch {
+ }
+ return a * b
+}
+
+func sub32_ssa(a, b float32) float32 {
+ switch {
+ }
+ return a - b
+}
+
+func div32_ssa(a, b float32) float32 {
+ switch {
+ }
+ return a / b
+}
+
+func neg32_ssa(a, b float32) float32 {
+ switch {
+ }
+ return -a + -1*b
+}
+
+func conv2Float64_ssa(a int8, b uint8, c int16, d uint16,
+ e int32, f uint32, g int64, h uint64, i float32) (aa, bb, cc, dd, ee, ff, gg, hh, ii float64) {
+ switch {
+ }
+ aa = float64(a)
+ bb = float64(b)
+ cc = float64(c)
+ hh = float64(h)
+ dd = float64(d)
+ ee = float64(e)
+ ff = float64(f)
+ gg = float64(g)
+ ii = float64(i)
+ return
+}
+
+func conv2Float32_ssa(a int8, b uint8, c int16, d uint16,
+ e int32, f uint32, g int64, h uint64, i float64) (aa, bb, cc, dd, ee, ff, gg, hh, ii float32) {
+ switch {
+ }
+ aa = float32(a)
+ bb = float32(b)
+ cc = float32(c)
+ dd = float32(d)
+ ee = float32(e)
+ ff = float32(f)
+ gg = float32(g)
+ hh = float32(h)
+ ii = float32(i)
+ return
+}
+
+func integer2floatConversions() int {
+ fails := 0
+ {
+ a, b, c, d, e, f, g, h, i := conv2Float64_ssa(0, 0, 0, 0, 0, 0, 0, 0, 0)
+ fails += expectAll64("zero64", 0, a, b, c, d, e, f, g, h, i)
+ }
+ {
+ a, b, c, d, e, f, g, h, i := conv2Float64_ssa(1, 1, 1, 1, 1, 1, 1, 1, 1)
+ fails += expectAll64("one64", 1, a, b, c, d, e, f, g, h, i)
+ }
+ {
+ a, b, c, d, e, f, g, h, i := conv2Float32_ssa(0, 0, 0, 0, 0, 0, 0, 0, 0)
+ fails += expectAll32("zero32", 0, a, b, c, d, e, f, g, h, i)
+ }
+ {
+ a, b, c, d, e, f, g, h, i := conv2Float32_ssa(1, 1, 1, 1, 1, 1, 1, 1, 1)
+ fails += expectAll32("one32", 1, a, b, c, d, e, f, g, h, i)
+ }
+ {
+ // Check maximum values
+ a, b, c, d, e, f, g, h, i := conv2Float64_ssa(127, 255, 32767, 65535, 0x7fffffff, 0xffffffff, 0x7fffFFFFffffFFFF, 0xffffFFFFffffFFFF, 3.402823E38)
+ fails += expect64("a", a, 127)
+ fails += expect64("b", b, 255)
+ fails += expect64("c", c, 32767)
+ fails += expect64("d", d, 65535)
+ fails += expect64("e", e, float64(int32(0x7fffffff)))
+ fails += expect64("f", f, float64(uint32(0xffffffff)))
+ fails += expect64("g", g, float64(int64(0x7fffffffffffffff)))
+ fails += expect64("h", h, float64(uint64(0xffffffffffffffff)))
+ fails += expect64("i", i, float64(float32(3.402823E38)))
+ }
+ {
+ // Check minimum values (and tweaks for unsigned)
+ a, b, c, d, e, f, g, h, i := conv2Float64_ssa(-128, 254, -32768, 65534, ^0x7fffffff, 0xfffffffe, ^0x7fffFFFFffffFFFF, 0xffffFFFFffffF401, 1.5E-45)
+ fails += expect64("a", a, -128)
+ fails += expect64("b", b, 254)
+ fails += expect64("c", c, -32768)
+ fails += expect64("d", d, 65534)
+ fails += expect64("e", e, float64(^int32(0x7fffffff)))
+ fails += expect64("f", f, float64(uint32(0xfffffffe)))
+ fails += expect64("g", g, float64(^int64(0x7fffffffffffffff)))
+ fails += expect64("h", h, float64(uint64(0xfffffffffffff401)))
+ fails += expect64("i", i, float64(float32(1.5E-45)))
+ }
+ {
+ // Check maximum values
+ a, b, c, d, e, f, g, h, i := conv2Float32_ssa(127, 255, 32767, 65535, 0x7fffffff, 0xffffffff, 0x7fffFFFFffffFFFF, 0xffffFFFFffffFFFF, 3.402823E38)
+ fails += expect32("a", a, 127)
+ fails += expect32("b", b, 255)
+ fails += expect32("c", c, 32767)
+ fails += expect32("d", d, 65535)
+ fails += expect32("e", e, float32(int32(0x7fffffff)))
+ fails += expect32("f", f, float32(uint32(0xffffffff)))
+ fails += expect32("g", g, float32(int64(0x7fffffffffffffff)))
+ fails += expect32("h", h, float32(uint64(0xffffffffffffffff)))
+ fails += expect32("i", i, float32(float64(3.402823E38)))
+ }
+ {
+ // Check minimum values (and tweaks for unsigned)
+ a, b, c, d, e, f, g, h, i := conv2Float32_ssa(-128, 254, -32768, 65534, ^0x7fffffff, 0xfffffffe, ^0x7fffFFFFffffFFFF, 0xffffFFFFffffF401, 1.5E-45)
+ fails += expect32("a", a, -128)
+ fails += expect32("b", b, 254)
+ fails += expect32("c", c, -32768)
+ fails += expect32("d", d, 65534)
+ fails += expect32("e", e, float32(^int32(0x7fffffff)))
+ fails += expect32("f", f, float32(uint32(0xfffffffe)))
+ fails += expect32("g", g, float32(^int64(0x7fffffffffffffff)))
+ fails += expect32("h", h, float32(uint64(0xfffffffffffff401)))
+ fails += expect32("i", i, float32(float64(1.5E-45)))
+ }
+ return fails
+}
+
+const (
+ aa = 0x1000000000000000
+ ab = 0x100000000000000
+ ac = 0x10000000000000
+ ad = 0x1000000000000
+ ba = 0x100000000000
+ bb = 0x10000000000
+ bc = 0x1000000000
+ bd = 0x100000000
+ ca = 0x10000000
+ cb = 0x1000000
+ cc = 0x100000
+ cd = 0x10000
+ da = 0x1000
+ db = 0x100
+ dc = 0x10
+ dd = 0x1
+)
+
+func compares64_ssa(a, b, c, d float64) (lt, le, eq, ne, ge, gt uint64) {
+
+ switch {
+ }
+
+ if a < a {
+ lt += aa
+ }
+ if a < b {
+ lt += ab
+ }
+ if a < c {
+ lt += ac
+ }
+ if a < d {
+ lt += ad
+ }
+
+ if b < a {
+ lt += ba
+ }
+ if b < b {
+ lt += bb
+ }
+ if b < c {
+ lt += bc
+ }
+ if b < d {
+ lt += bd
+ }
+
+ if c < a {
+ lt += ca
+ }
+ if c < b {
+ lt += cb
+ }
+ if c < c {
+ lt += cc
+ }
+ if c < d {
+ lt += cd
+ }
+
+ if d < a {
+ lt += da
+ }
+ if d < b {
+ lt += db
+ }
+ if d < c {
+ lt += dc
+ }
+ if d < d {
+ lt += dd
+ }
+
+ if a <= a {
+ le += aa
+ }
+ if a <= b {
+ le += ab
+ }
+ if a <= c {
+ le += ac
+ }
+ if a <= d {
+ le += ad
+ }
+
+ if b <= a {
+ le += ba
+ }
+ if b <= b {
+ le += bb
+ }
+ if b <= c {
+ le += bc
+ }
+ if b <= d {
+ le += bd
+ }
+
+ if c <= a {
+ le += ca
+ }
+ if c <= b {
+ le += cb
+ }
+ if c <= c {
+ le += cc
+ }
+ if c <= d {
+ le += cd
+ }
+
+ if d <= a {
+ le += da
+ }
+ if d <= b {
+ le += db
+ }
+ if d <= c {
+ le += dc
+ }
+ if d <= d {
+ le += dd
+ }
+
+ if a == a {
+ eq += aa
+ }
+ if a == b {
+ eq += ab
+ }
+ if a == c {
+ eq += ac
+ }
+ if a == d {
+ eq += ad
+ }
+
+ if b == a {
+ eq += ba
+ }
+ if b == b {
+ eq += bb
+ }
+ if b == c {
+ eq += bc
+ }
+ if b == d {
+ eq += bd
+ }
+
+ if c == a {
+ eq += ca
+ }
+ if c == b {
+ eq += cb
+ }
+ if c == c {
+ eq += cc
+ }
+ if c == d {
+ eq += cd
+ }
+
+ if d == a {
+ eq += da
+ }
+ if d == b {
+ eq += db
+ }
+ if d == c {
+ eq += dc
+ }
+ if d == d {
+ eq += dd
+ }
+
+ if a != a {
+ ne += aa
+ }
+ if a != b {
+ ne += ab
+ }
+ if a != c {
+ ne += ac
+ }
+ if a != d {
+ ne += ad
+ }
+
+ if b != a {
+ ne += ba
+ }
+ if b != b {
+ ne += bb
+ }
+ if b != c {
+ ne += bc
+ }
+ if b != d {
+ ne += bd
+ }
+
+ if c != a {
+ ne += ca
+ }
+ if c != b {
+ ne += cb
+ }
+ if c != c {
+ ne += cc
+ }
+ if c != d {
+ ne += cd
+ }
+
+ if d != a {
+ ne += da
+ }
+ if d != b {
+ ne += db
+ }
+ if d != c {
+ ne += dc
+ }
+ if d != d {
+ ne += dd
+ }
+
+ if a >= a {
+ ge += aa
+ }
+ if a >= b {
+ ge += ab
+ }
+ if a >= c {
+ ge += ac
+ }
+ if a >= d {
+ ge += ad
+ }
+
+ if b >= a {
+ ge += ba
+ }
+ if b >= b {
+ ge += bb
+ }
+ if b >= c {
+ ge += bc
+ }
+ if b >= d {
+ ge += bd
+ }
+
+ if c >= a {
+ ge += ca
+ }
+ if c >= b {
+ ge += cb
+ }
+ if c >= c {
+ ge += cc
+ }
+ if c >= d {
+ ge += cd
+ }
+
+ if d >= a {
+ ge += da
+ }
+ if d >= b {
+ ge += db
+ }
+ if d >= c {
+ ge += dc
+ }
+ if d >= d {
+ ge += dd
+ }
+
+ if a > a {
+ gt += aa
+ }
+ if a > b {
+ gt += ab
+ }
+ if a > c {
+ gt += ac
+ }
+ if a > d {
+ gt += ad
+ }
+
+ if b > a {
+ gt += ba
+ }
+ if b > b {
+ gt += bb
+ }
+ if b > c {
+ gt += bc
+ }
+ if b > d {
+ gt += bd
+ }
+
+ if c > a {
+ gt += ca
+ }
+ if c > b {
+ gt += cb
+ }
+ if c > c {
+ gt += cc
+ }
+ if c > d {
+ gt += cd
+ }
+
+ if d > a {
+ gt += da
+ }
+ if d > b {
+ gt += db
+ }
+ if d > c {
+ gt += dc
+ }
+ if d > d {
+ gt += dd
+ }
+
+ return
+}
+
+func compares32_ssa(a, b, c, d float32) (lt, le, eq, ne, ge, gt uint64) {
+
+ switch {
+ }
+
+ if a < a {
+ lt += aa
+ }
+ if a < b {
+ lt += ab
+ }
+ if a < c {
+ lt += ac
+ }
+ if a < d {
+ lt += ad
+ }
+
+ if b < a {
+ lt += ba
+ }
+ if b < b {
+ lt += bb
+ }
+ if b < c {
+ lt += bc
+ }
+ if b < d {
+ lt += bd
+ }
+
+ if c < a {
+ lt += ca
+ }
+ if c < b {
+ lt += cb
+ }
+ if c < c {
+ lt += cc
+ }
+ if c < d {
+ lt += cd
+ }
+
+ if d < a {
+ lt += da
+ }
+ if d < b {
+ lt += db
+ }
+ if d < c {
+ lt += dc
+ }
+ if d < d {
+ lt += dd
+ }
+
+ if a <= a {
+ le += aa
+ }
+ if a <= b {
+ le += ab
+ }
+ if a <= c {
+ le += ac
+ }
+ if a <= d {
+ le += ad
+ }
+
+ if b <= a {
+ le += ba
+ }
+ if b <= b {
+ le += bb
+ }
+ if b <= c {
+ le += bc
+ }
+ if b <= d {
+ le += bd
+ }
+
+ if c <= a {
+ le += ca
+ }
+ if c <= b {
+ le += cb
+ }
+ if c <= c {
+ le += cc
+ }
+ if c <= d {
+ le += cd
+ }
+
+ if d <= a {
+ le += da
+ }
+ if d <= b {
+ le += db
+ }
+ if d <= c {
+ le += dc
+ }
+ if d <= d {
+ le += dd
+ }
+
+ if a == a {
+ eq += aa
+ }
+ if a == b {
+ eq += ab
+ }
+ if a == c {
+ eq += ac
+ }
+ if a == d {
+ eq += ad
+ }
+
+ if b == a {
+ eq += ba
+ }
+ if b == b {
+ eq += bb
+ }
+ if b == c {
+ eq += bc
+ }
+ if b == d {
+ eq += bd
+ }
+
+ if c == a {
+ eq += ca
+ }
+ if c == b {
+ eq += cb
+ }
+ if c == c {
+ eq += cc
+ }
+ if c == d {
+ eq += cd
+ }
+
+ if d == a {
+ eq += da
+ }
+ if d == b {
+ eq += db
+ }
+ if d == c {
+ eq += dc
+ }
+ if d == d {
+ eq += dd
+ }
+
+ if a != a {
+ ne += aa
+ }
+ if a != b {
+ ne += ab
+ }
+ if a != c {
+ ne += ac
+ }
+ if a != d {
+ ne += ad
+ }
+
+ if b != a {
+ ne += ba
+ }
+ if b != b {
+ ne += bb
+ }
+ if b != c {
+ ne += bc
+ }
+ if b != d {
+ ne += bd
+ }
+
+ if c != a {
+ ne += ca
+ }
+ if c != b {
+ ne += cb
+ }
+ if c != c {
+ ne += cc
+ }
+ if c != d {
+ ne += cd
+ }
+
+ if d != a {
+ ne += da
+ }
+ if d != b {
+ ne += db
+ }
+ if d != c {
+ ne += dc
+ }
+ if d != d {
+ ne += dd
+ }
+
+ if a >= a {
+ ge += aa
+ }
+ if a >= b {
+ ge += ab
+ }
+ if a >= c {
+ ge += ac
+ }
+ if a >= d {
+ ge += ad
+ }
+
+ if b >= a {
+ ge += ba
+ }
+ if b >= b {
+ ge += bb
+ }
+ if b >= c {
+ ge += bc
+ }
+ if b >= d {
+ ge += bd
+ }
+
+ if c >= a {
+ ge += ca
+ }
+ if c >= b {
+ ge += cb
+ }
+ if c >= c {
+ ge += cc
+ }
+ if c >= d {
+ ge += cd
+ }
+
+ if d >= a {
+ ge += da
+ }
+ if d >= b {
+ ge += db
+ }
+ if d >= c {
+ ge += dc
+ }
+ if d >= d {
+ ge += dd
+ }
+
+ if a > a {
+ gt += aa
+ }
+ if a > b {
+ gt += ab
+ }
+ if a > c {
+ gt += ac
+ }
+ if a > d {
+ gt += ad
+ }
+
+ if b > a {
+ gt += ba
+ }
+ if b > b {
+ gt += bb
+ }
+ if b > c {
+ gt += bc
+ }
+ if b > d {
+ gt += bd
+ }
+
+ if c > a {
+ gt += ca
+ }
+ if c > b {
+ gt += cb
+ }
+ if c > c {
+ gt += cc
+ }
+ if c > d {
+ gt += cd
+ }
+
+ if d > a {
+ gt += da
+ }
+ if d > b {
+ gt += db
+ }
+ if d > c {
+ gt += dc
+ }
+ if d > d {
+ gt += dd
+ }
+
+ return
+}
+
+func le64_ssa(x, y float64) bool {
+ switch {
+ }
+ return x <= y
+}
+func ge64_ssa(x, y float64) bool {
+ switch {
+ }
+ return x >= y
+}
+func lt64_ssa(x, y float64) bool {
+ switch {
+ }
+ return x < y
+}
+func gt64_ssa(x, y float64) bool {
+ switch {
+ }
+ return x > y
+}
+func eq64_ssa(x, y float64) bool {
+ switch {
+ }
+ return x == y
+}
+func ne64_ssa(x, y float64) bool {
+ switch {
+ }
+ return x != y
+}
+
+func eqbr64_ssa(x, y float64) float64 {
+ switch {
+ }
+ if x == y {
+ return 17
+ }
+ return 42
+}
+func nebr64_ssa(x, y float64) float64 {
+ switch {
+ }
+ if x != y {
+ return 17
+ }
+ return 42
+}
+func gebr64_ssa(x, y float64) float64 {
+ switch {
+ }
+ if x >= y {
+ return 17
+ }
+ return 42
+}
+func lebr64_ssa(x, y float64) float64 {
+ switch {
+ }
+ if x <= y {
+ return 17
+ }
+ return 42
+}
+func ltbr64_ssa(x, y float64) float64 {
+ switch {
+ }
+ if x < y {
+ return 17
+ }
+ return 42
+}
+func gtbr64_ssa(x, y float64) float64 {
+ switch {
+ }
+ if x > y {
+ return 17
+ }
+ return 42
+}
+
+func le32_ssa(x, y float32) bool {
+ switch {
+ }
+ return x <= y
+}
+func ge32_ssa(x, y float32) bool {
+ switch {
+ }
+ return x >= y
+}
+func lt32_ssa(x, y float32) bool {
+ switch {
+ }
+ return x < y
+}
+func gt32_ssa(x, y float32) bool {
+ switch {
+ }
+ return x > y
+}
+func eq32_ssa(x, y float32) bool {
+ switch {
+ }
+ return x == y
+}
+func ne32_ssa(x, y float32) bool {
+ switch {
+ }
+ return x != y
+}
+
+func eqbr32_ssa(x, y float32) float32 {
+ switch {
+ }
+ if x == y {
+ return 17
+ }
+ return 42
+}
+func nebr32_ssa(x, y float32) float32 {
+ switch {
+ }
+ if x != y {
+ return 17
+ }
+ return 42
+}
+func gebr32_ssa(x, y float32) float32 {
+ switch {
+ }
+ if x >= y {
+ return 17
+ }
+ return 42
+}
+func lebr32_ssa(x, y float32) float32 {
+ switch {
+ }
+ if x <= y {
+ return 17
+ }
+ return 42
+}
+func ltbr32_ssa(x, y float32) float32 {
+ switch {
+ }
+ if x < y {
+ return 17
+ }
+ return 42
+}
+func gtbr32_ssa(x, y float32) float32 {
+ switch {
+ }
+ if x > y {
+ return 17
+ }
+ return 42
+}
+
+func F32toU8_ssa(x float32) uint8 {
+ switch {
+ }
+ return uint8(x)
+}
+
+func F32toI8_ssa(x float32) int8 {
+ switch {
+ }
+ return int8(x)
+}
+
+func F32toU16_ssa(x float32) uint16 {
+ switch {
+ }
+ return uint16(x)
+}
+
+func F32toI16_ssa(x float32) int16 {
+ switch {
+ }
+ return int16(x)
+}
+
+func F32toU32_ssa(x float32) uint32 {
+ switch {
+ }
+ return uint32(x)
+}
+
+func F32toI32_ssa(x float32) int32 {
+ switch {
+ }
+ return int32(x)
+}
+
+func F32toU64_ssa(x float32) uint64 {
+ switch {
+ }
+ return uint64(x)
+}
+
+func F32toI64_ssa(x float32) int64 {
+ switch {
+ }
+ return int64(x)
+}
+
+func F64toU8_ssa(x float64) uint8 {
+ switch {
+ }
+ return uint8(x)
+}
+
+func F64toI8_ssa(x float64) int8 {
+ switch {
+ }
+ return int8(x)
+}
+
+func F64toU16_ssa(x float64) uint16 {
+ switch {
+ }
+ return uint16(x)
+}
+
+func F64toI16_ssa(x float64) int16 {
+ switch {
+ }
+ return int16(x)
+}
+
+func F64toU32_ssa(x float64) uint32 {
+ switch {
+ }
+ return uint32(x)
+}
+
+func F64toI32_ssa(x float64) int32 {
+ switch {
+ }
+ return int32(x)
+}
+
+func F64toU64_ssa(x float64) uint64 {
+ switch {
+ }
+ return uint64(x)
+}
+
+func F64toI64_ssa(x float64) int64 {
+ switch {
+ }
+ return int64(x)
+}
+
+func floatsToInts(x float64, expected int64) int {
+ y := float32(x)
+ fails := 0
+ fails += expectInt64("F64toI8", int64(F64toI8_ssa(x)), expected)
+ fails += expectInt64("F64toI16", int64(F64toI16_ssa(x)), expected)
+ fails += expectInt64("F64toI32", int64(F64toI32_ssa(x)), expected)
+ fails += expectInt64("F64toI64", int64(F64toI64_ssa(x)), expected)
+ fails += expectInt64("F32toI8", int64(F32toI8_ssa(y)), expected)
+ fails += expectInt64("F32toI16", int64(F32toI16_ssa(y)), expected)
+ fails += expectInt64("F32toI32", int64(F32toI32_ssa(y)), expected)
+ fails += expectInt64("F32toI64", int64(F32toI64_ssa(y)), expected)
+ return fails
+}
+
+func floatsToUints(x float64, expected uint64) int {
+ y := float32(x)
+ fails := 0
+ fails += expectUint64("F64toU8", uint64(F64toU8_ssa(x)), expected)
+ fails += expectUint64("F64toU16", uint64(F64toU16_ssa(x)), expected)
+ fails += expectUint64("F64toU32", uint64(F64toU32_ssa(x)), expected)
+ fails += expectUint64("F64toU64", uint64(F64toU64_ssa(x)), expected)
+ fails += expectUint64("F32toU8", uint64(F32toU8_ssa(y)), expected)
+ fails += expectUint64("F32toU16", uint64(F32toU16_ssa(y)), expected)
+ fails += expectUint64("F32toU32", uint64(F32toU32_ssa(y)), expected)
+ fails += expectUint64("F32toU64", uint64(F32toU64_ssa(y)), expected)
+ return fails
+}
+
+func floatingToIntegerConversionsTest() int {
+ fails := 0
+ fails += floatsToInts(0.0, 0)
+ fails += floatsToInts(0.5, 0)
+ fails += floatsToInts(0.9, 0)
+ fails += floatsToInts(1.0, 1)
+ fails += floatsToInts(1.5, 1)
+ fails += floatsToInts(127.0, 127)
+ fails += floatsToInts(-1.0, -1)
+ fails += floatsToInts(-128.0, -128)
+
+ fails += floatsToUints(0.0, 0)
+ fails += floatsToUints(1.0, 1)
+ fails += floatsToUints(255.0, 255)
+
+ for j := uint(0); j < 24; j++ {
+ // Avoid hard cases in the construction
+ // of the test inputs.
+ v := int64(1<<62) | int64(1<<(62-j))
+ w := uint64(v)
+ f := float32(v)
+ d := float64(v)
+ fails += expectUint64("2**62...", F32toU64_ssa(f), w)
+ fails += expectUint64("2**62...", F64toU64_ssa(d), w)
+ fails += expectInt64("2**62...", F32toI64_ssa(f), v)
+ fails += expectInt64("2**62...", F64toI64_ssa(d), v)
+ fails += expectInt64("2**62...", F32toI64_ssa(-f), -v)
+ fails += expectInt64("2**62...", F64toI64_ssa(-d), -v)
+ w += w
+ f += f
+ d += d
+ fails += expectUint64("2**63...", F32toU64_ssa(f), w)
+ fails += expectUint64("2**63...", F64toU64_ssa(d), w)
+ }
+
+ for j := uint(0); j < 16; j++ {
+ // Avoid hard cases in the construction
+ // of the test inputs.
+ v := int32(1<<30) | int32(1<<(30-j))
+ w := uint32(v)
+ f := float32(v)
+ d := float64(v)
+ fails += expectUint32("2**30...", F32toU32_ssa(f), w)
+ fails += expectUint32("2**30...", F64toU32_ssa(d), w)
+ fails += expectInt32("2**30...", F32toI32_ssa(f), v)
+ fails += expectInt32("2**30...", F64toI32_ssa(d), v)
+ fails += expectInt32("2**30...", F32toI32_ssa(-f), -v)
+ fails += expectInt32("2**30...", F64toI32_ssa(-d), -v)
+ w += w
+ f += f
+ d += d
+ fails += expectUint32("2**31...", F32toU32_ssa(f), w)
+ fails += expectUint32("2**31...", F64toU32_ssa(d), w)
+ }
+
+ for j := uint(0); j < 15; j++ {
+ // Avoid hard cases in the construction
+ // of the test inputs.
+ v := int16(1<<14) | int16(1<<(14-j))
+ w := uint16(v)
+ f := float32(v)
+ d := float64(v)
+ fails += expectUint16("2**14...", F32toU16_ssa(f), w)
+ fails += expectUint16("2**14...", F64toU16_ssa(d), w)
+ fails += expectInt16("2**14...", F32toI16_ssa(f), v)
+ fails += expectInt16("2**14...", F64toI16_ssa(d), v)
+ fails += expectInt16("2**14...", F32toI16_ssa(-f), -v)
+ fails += expectInt16("2**14...", F64toI16_ssa(-d), -v)
+ w += w
+ f += f
+ d += d
+ fails += expectUint16("2**15...", F32toU16_ssa(f), w)
+ fails += expectUint16("2**15...", F64toU16_ssa(d), w)
+ }
+
+ fails += expectInt32("-2147483648", F32toI32_ssa(-2147483648), -2147483648)
+
+ fails += expectInt32("-2147483648", F64toI32_ssa(-2147483648), -2147483648)
+ fails += expectInt32("-2147483647", F64toI32_ssa(-2147483647), -2147483647)
+ fails += expectUint32("4294967295", F64toU32_ssa(4294967295), 4294967295)
+
+ fails += expectInt16("-32768", F64toI16_ssa(-32768), -32768)
+ fails += expectInt16("-32768", F32toI16_ssa(-32768), -32768)
+
+	// NB: more of a pain to do these for 32-bit because of lost bits in the float32 mantissa.
+ fails += expectInt16("32767", F64toI16_ssa(32767), 32767)
+ fails += expectInt16("32767", F32toI16_ssa(32767), 32767)
+ fails += expectUint16("32767", F64toU16_ssa(32767), 32767)
+ fails += expectUint16("32767", F32toU16_ssa(32767), 32767)
+ fails += expectUint16("65535", F64toU16_ssa(65535), 65535)
+ fails += expectUint16("65535", F32toU16_ssa(65535), 65535)
+
+ return fails
+}
+
+func fail64(s string, f func(a, b float64) float64, a, b, e float64) int {
+ d := f(a, b)
+ if d != e {
+ fmt.Printf("For (float64) %v %v %v, expected %v, got %v\n", a, s, b, e, d)
+ return 1
+ }
+ return 0
+}
+
+func fail64bool(s string, f func(a, b float64) bool, a, b float64, e bool) int {
+ d := f(a, b)
+ if d != e {
+ fmt.Printf("For (float64) %v %v %v, expected %v, got %v\n", a, s, b, e, d)
+ return 1
+ }
+ return 0
+}
+
+func fail32(s string, f func(a, b float32) float32, a, b, e float32) int {
+ d := f(a, b)
+ if d != e {
+ fmt.Printf("For (float32) %v %v %v, expected %v, got %v\n", a, s, b, e, d)
+ return 1
+ }
+ return 0
+}
+
+func fail32bool(s string, f func(a, b float32) bool, a, b float32, e bool) int {
+ d := f(a, b)
+ if d != e {
+ fmt.Printf("For (float32) %v %v %v, expected %v, got %v\n", a, s, b, e, d)
+ return 1
+ }
+ return 0
+}
+
+func expect64(s string, x, expected float64) int {
+ if x != expected {
+ println("F64 Expected", expected, "for", s, ", got", x)
+ return 1
+ }
+ return 0
+}
+
+func expect32(s string, x, expected float32) int {
+ if x != expected {
+ println("F32 Expected", expected, "for", s, ", got", x)
+ return 1
+ }
+ return 0
+}
+
+func expectUint64(s string, x, expected uint64) int {
+ if x != expected {
+ fmt.Printf("U64 Expected 0x%016x for %s, got 0x%016x\n", expected, s, x)
+ return 1
+ }
+ return 0
+}
+
+func expectInt64(s string, x, expected int64) int {
+ if x != expected {
+ fmt.Printf("%s: Expected 0x%016x, got 0x%016x\n", s, expected, x)
+ return 1
+ }
+ return 0
+}
+
+func expectUint32(s string, x, expected uint32) int {
+ if x != expected {
+ fmt.Printf("U32 %s: Expected 0x%08x, got 0x%08x\n", s, expected, x)
+ return 1
+ }
+ return 0
+}
+
+func expectInt32(s string, x, expected int32) int {
+ if x != expected {
+ fmt.Printf("I32 %s: Expected 0x%08x, got 0x%08x\n", s, expected, x)
+ return 1
+ }
+ return 0
+}
+
+func expectUint16(s string, x, expected uint16) int {
+ if x != expected {
+ fmt.Printf("U16 %s: Expected 0x%04x, got 0x%04x\n", s, expected, x)
+ return 1
+ }
+ return 0
+}
+
+func expectInt16(s string, x, expected int16) int {
+ if x != expected {
+ fmt.Printf("I16 %s: Expected 0x%04x, got 0x%04x\n", s, expected, x)
+ return 1
+ }
+ return 0
+}
+
+func expectAll64(s string, expected, a, b, c, d, e, f, g, h, i float64) int {
+ fails := 0
+ fails += expect64(s+":a", a, expected)
+ fails += expect64(s+":b", b, expected)
+ fails += expect64(s+":c", c, expected)
+ fails += expect64(s+":d", d, expected)
+ fails += expect64(s+":e", e, expected)
+ fails += expect64(s+":f", f, expected)
+	fails += expect64(s+":g", g, expected)
+	fails += expect64(s+":h", h, expected)
+	fails += expect64(s+":i", i, expected)
+ return fails
+}
+
+func expectAll32(s string, expected, a, b, c, d, e, f, g, h, i float32) int {
+ fails := 0
+ fails += expect32(s+":a", a, expected)
+ fails += expect32(s+":b", b, expected)
+ fails += expect32(s+":c", c, expected)
+ fails += expect32(s+":d", d, expected)
+ fails += expect32(s+":e", e, expected)
+ fails += expect32(s+":f", f, expected)
+	fails += expect32(s+":g", g, expected)
+	fails += expect32(s+":h", h, expected)
+	fails += expect32(s+":i", i, expected)
+ return fails
+}
+
+var ev64 [2]float64 = [2]float64{42.0, 17.0}
+var ev32 [2]float32 = [2]float32{42.0, 17.0}
+
+func cmpOpTest(s string,
+ f func(a, b float64) bool,
+ g func(a, b float64) float64,
+ ff func(a, b float32) bool,
+ gg func(a, b float32) float32,
+ zero, one, inf, nan float64, result uint) int {
+ fails := 0
+ fails += fail64bool(s, f, zero, zero, result>>16&1 == 1)
+ fails += fail64bool(s, f, zero, one, result>>12&1 == 1)
+ fails += fail64bool(s, f, zero, inf, result>>8&1 == 1)
+ fails += fail64bool(s, f, zero, nan, result>>4&1 == 1)
+ fails += fail64bool(s, f, nan, nan, result&1 == 1)
+
+ fails += fail64(s, g, zero, zero, ev64[result>>16&1])
+ fails += fail64(s, g, zero, one, ev64[result>>12&1])
+ fails += fail64(s, g, zero, inf, ev64[result>>8&1])
+ fails += fail64(s, g, zero, nan, ev64[result>>4&1])
+ fails += fail64(s, g, nan, nan, ev64[result>>0&1])
+
+ {
+ zero := float32(zero)
+ one := float32(one)
+ inf := float32(inf)
+ nan := float32(nan)
+ fails += fail32bool(s, ff, zero, zero, (result>>16)&1 == 1)
+ fails += fail32bool(s, ff, zero, one, (result>>12)&1 == 1)
+ fails += fail32bool(s, ff, zero, inf, (result>>8)&1 == 1)
+ fails += fail32bool(s, ff, zero, nan, (result>>4)&1 == 1)
+ fails += fail32bool(s, ff, nan, nan, result&1 == 1)
+
+ fails += fail32(s, gg, zero, zero, ev32[(result>>16)&1])
+ fails += fail32(s, gg, zero, one, ev32[(result>>12)&1])
+ fails += fail32(s, gg, zero, inf, ev32[(result>>8)&1])
+ fails += fail32(s, gg, zero, nan, ev32[(result>>4)&1])
+ fails += fail32(s, gg, nan, nan, ev32[(result>>0)&1])
+ }
+
+ return fails
+}
+
+func expectCx128(s string, x, expected complex128) int {
+ if x != expected {
+ println("Cx 128 Expected", expected, "for", s, ", got", x)
+ return 1
+ }
+ return 0
+}
+
+func expectCx64(s string, x, expected complex64) int {
+ if x != expected {
+ println("Cx 64 Expected", expected, "for", s, ", got", x)
+ return 1
+ }
+ return 0
+}
+
+//go:noinline
+func cx128sum_ssa(a, b complex128) complex128 {
+ return a + b
+}
+
+//go:noinline
+func cx128diff_ssa(a, b complex128) complex128 {
+ return a - b
+}
+
+//go:noinline
+func cx128prod_ssa(a, b complex128) complex128 {
+ return a * b
+}
+
+//go:noinline
+func cx128quot_ssa(a, b complex128) complex128 {
+ return a / b
+}
+
+//go:noinline
+func cx128neg_ssa(a complex128) complex128 {
+ return -a
+}
+
+//go:noinline
+func cx128real_ssa(a complex128) float64 {
+ return real(a)
+}
+
+//go:noinline
+func cx128imag_ssa(a complex128) float64 {
+ return imag(a)
+}
+
+//go:noinline
+func cx128cnst_ssa(a complex128) complex128 {
+ b := 2 + 3i
+ return a * b
+}
+
+//go:noinline
+func cx64sum_ssa(a, b complex64) complex64 {
+ return a + b
+}
+
+//go:noinline
+func cx64diff_ssa(a, b complex64) complex64 {
+ return a - b
+}
+
+//go:noinline
+func cx64prod_ssa(a, b complex64) complex64 {
+ return a * b
+}
+
+//go:noinline
+func cx64quot_ssa(a, b complex64) complex64 {
+ return a / b
+}
+
+//go:noinline
+func cx64neg_ssa(a complex64) complex64 {
+ return -a
+}
+
+//go:noinline
+func cx64real_ssa(a complex64) float32 {
+ return real(a)
+}
+
+//go:noinline
+func cx64imag_ssa(a complex64) float32 {
+ return imag(a)
+}
+
+//go:noinline
+func cx128eq_ssa(a, b complex128) bool {
+ return a == b
+}
+
+//go:noinline
+func cx128ne_ssa(a, b complex128) bool {
+ return a != b
+}
+
+//go:noinline
+func cx64eq_ssa(a, b complex64) bool {
+ return a == b
+}
+
+//go:noinline
+func cx64ne_ssa(a, b complex64) bool {
+ return a != b
+}
+
+func expectTrue(s string, b bool) int {
+ if !b {
+ println("expected true for", s, ", got false")
+ return 1
+ }
+ return 0
+}
+func expectFalse(s string, b bool) int {
+ if b {
+ println("expected false for", s, ", got true")
+ return 1
+ }
+ return 0
+}
+
+func complexTest128() int {
+ fails := 0
+ var a complex128 = 1 + 2i
+ var b complex128 = 3 + 6i
+ sum := cx128sum_ssa(b, a)
+ diff := cx128diff_ssa(b, a)
+ prod := cx128prod_ssa(b, a)
+ quot := cx128quot_ssa(b, a)
+ neg := cx128neg_ssa(a)
+ r := cx128real_ssa(a)
+ i := cx128imag_ssa(a)
+ cnst := cx128cnst_ssa(a)
+ c1 := cx128eq_ssa(a, a)
+ c2 := cx128eq_ssa(a, b)
+ c3 := cx128ne_ssa(a, a)
+ c4 := cx128ne_ssa(a, b)
+
+ fails += expectCx128("sum", sum, 4+8i)
+ fails += expectCx128("diff", diff, 2+4i)
+ fails += expectCx128("prod", prod, -9+12i)
+ fails += expectCx128("quot", quot, 3+0i)
+ fails += expectCx128("neg", neg, -1-2i)
+ fails += expect64("real", r, 1)
+ fails += expect64("imag", i, 2)
+ fails += expectCx128("cnst", cnst, -4+7i)
+ fails += expectTrue(fmt.Sprintf("%v==%v", a, a), c1)
+ fails += expectFalse(fmt.Sprintf("%v==%v", a, b), c2)
+ fails += expectFalse(fmt.Sprintf("%v!=%v", a, a), c3)
+ fails += expectTrue(fmt.Sprintf("%v!=%v", a, b), c4)
+
+ return fails
+}
+
+func complexTest64() int {
+ fails := 0
+ var a complex64 = 1 + 2i
+ var b complex64 = 3 + 6i
+ sum := cx64sum_ssa(b, a)
+ diff := cx64diff_ssa(b, a)
+ prod := cx64prod_ssa(b, a)
+ quot := cx64quot_ssa(b, a)
+ neg := cx64neg_ssa(a)
+ r := cx64real_ssa(a)
+ i := cx64imag_ssa(a)
+ c1 := cx64eq_ssa(a, a)
+ c2 := cx64eq_ssa(a, b)
+ c3 := cx64ne_ssa(a, a)
+ c4 := cx64ne_ssa(a, b)
+
+ fails += expectCx64("sum", sum, 4+8i)
+ fails += expectCx64("diff", diff, 2+4i)
+ fails += expectCx64("prod", prod, -9+12i)
+ fails += expectCx64("quot", quot, 3+0i)
+ fails += expectCx64("neg", neg, -1-2i)
+ fails += expect32("real", r, 1)
+ fails += expect32("imag", i, 2)
+ fails += expectTrue(fmt.Sprintf("%v==%v", a, a), c1)
+ fails += expectFalse(fmt.Sprintf("%v==%v", a, b), c2)
+ fails += expectFalse(fmt.Sprintf("%v!=%v", a, a), c3)
+ fails += expectTrue(fmt.Sprintf("%v!=%v", a, b), c4)
+
+ return fails
+}
+
+func main() {
+
+ a := 3.0
+ b := 4.0
+
+ c := float32(3.0)
+ d := float32(4.0)
+
+ tiny := float32(1.5E-45) // smallest f32 denorm = 2**(-149)
+ dtiny := float64(tiny) // well within range of f64
+
+ fails := 0
+ fails += fail64("+", add64_ssa, a, b, 7.0)
+ fails += fail64("*", mul64_ssa, a, b, 12.0)
+ fails += fail64("-", sub64_ssa, a, b, -1.0)
+ fails += fail64("/", div64_ssa, a, b, 0.75)
+ fails += fail64("neg", neg64_ssa, a, b, -7)
+
+ fails += fail32("+", add32_ssa, c, d, 7.0)
+ fails += fail32("*", mul32_ssa, c, d, 12.0)
+ fails += fail32("-", sub32_ssa, c, d, -1.0)
+ fails += fail32("/", div32_ssa, c, d, 0.75)
+ fails += fail32("neg", neg32_ssa, c, d, -7)
+
+ // denorm-squared should underflow to zero.
+ fails += fail32("*", mul32_ssa, tiny, tiny, 0)
+
+	// but should not underflow in float64 and in fact is exactly representable.
+ fails += fail64("*", mul64_ssa, dtiny, dtiny, 1.9636373861190906e-90)
+
+ // Intended to create register pressure which forces
+ // asymmetric op into different code paths.
+ aa, ab, ac, ad, ba, bb, bc, bd, ca, cb, cc, cd, da, db, dc, dd := manysub_ssa(1000.0, 100.0, 10.0, 1.0)
+
+ fails += expect64("aa", aa, 11.0)
+ fails += expect64("ab", ab, 900.0)
+ fails += expect64("ac", ac, 990.0)
+ fails += expect64("ad", ad, 999.0)
+
+ fails += expect64("ba", ba, -900.0)
+ fails += expect64("bb", bb, 22.0)
+ fails += expect64("bc", bc, 90.0)
+ fails += expect64("bd", bd, 99.0)
+
+ fails += expect64("ca", ca, -990.0)
+ fails += expect64("cb", cb, -90.0)
+ fails += expect64("cc", cc, 33.0)
+ fails += expect64("cd", cd, 9.0)
+
+ fails += expect64("da", da, -999.0)
+ fails += expect64("db", db, -99.0)
+ fails += expect64("dc", dc, -9.0)
+ fails += expect64("dd", dd, 44.0)
+
+ fails += integer2floatConversions()
+
+ var zero64 float64 = 0.0
+ var one64 float64 = 1.0
+ var inf64 float64 = 1.0 / zero64
+ var nan64 float64 = sub64_ssa(inf64, inf64)
+
+ fails += cmpOpTest("!=", ne64_ssa, nebr64_ssa, ne32_ssa, nebr32_ssa, zero64, one64, inf64, nan64, 0x01111)
+ fails += cmpOpTest("==", eq64_ssa, eqbr64_ssa, eq32_ssa, eqbr32_ssa, zero64, one64, inf64, nan64, 0x10000)
+ fails += cmpOpTest("<=", le64_ssa, lebr64_ssa, le32_ssa, lebr32_ssa, zero64, one64, inf64, nan64, 0x11100)
+ fails += cmpOpTest("<", lt64_ssa, ltbr64_ssa, lt32_ssa, ltbr32_ssa, zero64, one64, inf64, nan64, 0x01100)
+ fails += cmpOpTest(">", gt64_ssa, gtbr64_ssa, gt32_ssa, gtbr32_ssa, zero64, one64, inf64, nan64, 0x00000)
+ fails += cmpOpTest(">=", ge64_ssa, gebr64_ssa, ge32_ssa, gebr32_ssa, zero64, one64, inf64, nan64, 0x10000)
+
+ {
+ lt, le, eq, ne, ge, gt := compares64_ssa(0.0, 1.0, inf64, nan64)
+ fails += expectUint64("lt", lt, 0x0110001000000000)
+ fails += expectUint64("le", le, 0x1110011000100000)
+ fails += expectUint64("eq", eq, 0x1000010000100000)
+ fails += expectUint64("ne", ne, 0x0111101111011111)
+ fails += expectUint64("ge", ge, 0x1000110011100000)
+ fails += expectUint64("gt", gt, 0x0000100011000000)
+ // fmt.Printf("lt=0x%016x, le=0x%016x, eq=0x%016x, ne=0x%016x, ge=0x%016x, gt=0x%016x\n",
+ // lt, le, eq, ne, ge, gt)
+ }
+ {
+ lt, le, eq, ne, ge, gt := compares32_ssa(0.0, 1.0, float32(inf64), float32(nan64))
+ fails += expectUint64("lt", lt, 0x0110001000000000)
+ fails += expectUint64("le", le, 0x1110011000100000)
+ fails += expectUint64("eq", eq, 0x1000010000100000)
+ fails += expectUint64("ne", ne, 0x0111101111011111)
+ fails += expectUint64("ge", ge, 0x1000110011100000)
+ fails += expectUint64("gt", gt, 0x0000100011000000)
+ }
+
+ fails += floatingToIntegerConversionsTest()
+ fails += complexTest128()
+ fails += complexTest64()
+
+ if fails > 0 {
+ fmt.Printf("Saw %v failures\n", fails)
+ panic("Failed.")
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This program generates a test to verify that the standard arithmetic
+// operators properly handle some special cases. The test file should be
+// generated with a known working version of Go.
+// Launch with `go run arithBoundaryGen.go`; a file called arithBoundary_ssa.go
+// will be written into the parent directory, containing the tests.
+
+package main
+
+import (
+ "bytes"
+ "fmt"
+ "go/format"
+ "io/ioutil"
+ "log"
+ "text/template"
+)
+
+// used for interpolation in a text template
+type tmplData struct {
+ Name, Stype, Symbol string
+}
+
+// SymFirst works around the mod symbol ("%%") being interpreted as
+// part of a format string: it returns only the symbol's first character.
+func (s tmplData) SymFirst() string {
+ return string(s.Symbol[0])
+}
+
+// ucast casts an unsigned int to the size in s
+func ucast(i uint64, s sizedTestData) uint64 {
+ switch s.name {
+ case "uint32":
+ return uint64(uint32(i))
+ case "uint16":
+ return uint64(uint16(i))
+ case "uint8":
+ return uint64(uint8(i))
+ }
+ return i
+}
+
+// icast casts a signed int to the size in s
+func icast(i int64, s sizedTestData) int64 {
+ switch s.name {
+ case "int32":
+ return int64(int32(i))
+ case "int16":
+ return int64(int16(i))
+ case "int8":
+ return int64(int8(i))
+ }
+ return i
+}
+
+type sizedTestData struct {
+ name string
+ sn string
+ u []uint64
+ i []int64
+}
+
+// szs lists the values used to generate tests. These should include the
+// smallest and largest values, along with any other values that might cause
+// issues. We generate n^2 tests for each size to cover all cases.
+var szs = []sizedTestData{
+ sizedTestData{name: "uint64", sn: "64", u: []uint64{0, 1, 4294967296, 0xffffFFFFffffFFFF}},
+ sizedTestData{name: "int64", sn: "64", i: []int64{-0x8000000000000000, -0x7FFFFFFFFFFFFFFF,
+ -4294967296, -1, 0, 1, 4294967296, 0x7FFFFFFFFFFFFFFE, 0x7FFFFFFFFFFFFFFF}},
+
+ sizedTestData{name: "uint32", sn: "32", u: []uint64{0, 1, 4294967295}},
+ sizedTestData{name: "int32", sn: "32", i: []int64{-0x80000000, -0x7FFFFFFF, -1, 0,
+ 1, 0x7FFFFFFF}},
+
+ sizedTestData{name: "uint16", sn: "16", u: []uint64{0, 1, 65535}},
+ sizedTestData{name: "int16", sn: "16", i: []int64{-32768, -32767, -1, 0, 1, 32766, 32767}},
+
+ sizedTestData{name: "uint8", sn: "8", u: []uint64{0, 1, 255}},
+ sizedTestData{name: "int8", sn: "8", i: []int64{-128, -127, -1, 0, 1, 126, 127}},
+}
+
+type op struct {
+ name, symbol string
+}
+
+// ops that we will be generating tests for
+var ops = []op{op{"add", "+"}, op{"sub", "-"}, op{"div", "/"}, op{"mod", "%%"}, op{"mul", "*"}}
+
+func main() {
+
+ w := new(bytes.Buffer)
+ fmt.Fprintf(w, "package main;\n")
+ fmt.Fprintf(w, "import \"fmt\"\n")
+
+ for _, sz := range []int{64, 32, 16, 8} {
+ fmt.Fprintf(w, "type utd%d struct {\n", sz)
+ fmt.Fprintf(w, " a,b uint%d\n", sz)
+ fmt.Fprintf(w, " add,sub,mul,div,mod uint%d\n", sz)
+ fmt.Fprintf(w, "}\n")
+
+ fmt.Fprintf(w, "type itd%d struct {\n", sz)
+ fmt.Fprintf(w, " a,b int%d\n", sz)
+ fmt.Fprintf(w, " add,sub,mul,div,mod int%d\n", sz)
+ fmt.Fprintf(w, "}\n")
+ }
+
+ // the function being tested
+ testFunc, err := template.New("testFunc").Parse(
+ `//go:noinline
+ func {{.Name}}_{{.Stype}}_ssa(a, b {{.Stype}}) {{.Stype}} {
+ return a {{.SymFirst}} b
+}
+`)
+ if err != nil {
+ panic(err)
+ }
+
+ // generate our functions to be tested
+ for _, s := range szs {
+ for _, o := range ops {
+ fd := tmplData{o.name, s.name, o.symbol}
+ err = testFunc.Execute(w, fd)
+ if err != nil {
+ panic(err)
+ }
+ }
+ }
+
+ // generate the test data
+ for _, s := range szs {
+ if len(s.u) > 0 {
+ fmt.Fprintf(w, "var %s_data []utd%s = []utd%s{", s.name, s.sn, s.sn)
+ for _, i := range s.u {
+ for _, j := range s.u {
+ fmt.Fprintf(w, "utd%s{a: %d, b: %d, add: %d, sub: %d, mul: %d", s.sn, i, j, ucast(i+j, s), ucast(i-j, s), ucast(i*j, s))
+ if j != 0 {
+ fmt.Fprintf(w, ", div: %d, mod: %d", ucast(i/j, s), ucast(i%j, s))
+ }
+ fmt.Fprint(w, "},\n")
+ }
+ }
+ fmt.Fprintf(w, "}\n")
+ } else {
+ // TODO: clean up this duplication
+ fmt.Fprintf(w, "var %s_data []itd%s = []itd%s{", s.name, s.sn, s.sn)
+ for _, i := range s.i {
+ for _, j := range s.i {
+ fmt.Fprintf(w, "itd%s{a: %d, b: %d, add: %d, sub: %d, mul: %d", s.sn, i, j, icast(i+j, s), icast(i-j, s), icast(i*j, s))
+ if j != 0 {
+ fmt.Fprintf(w, ", div: %d, mod: %d", icast(i/j, s), icast(i%j, s))
+ }
+ fmt.Fprint(w, "},\n")
+ }
+ }
+ fmt.Fprintf(w, "}\n")
+ }
+ }
+
+ fmt.Fprintf(w, "var failed bool\n\n")
+ fmt.Fprintf(w, "func main() {\n\n")
+
+ verify, err := template.New("tst").Parse(
+ `if got := {{.Name}}_{{.Stype}}_ssa(v.a, v.b); got != v.{{.Name}} {
+ fmt.Printf("{{.Name}}_{{.Stype}} %d{{.Symbol}}%d = %d, wanted %d\n",v.a,v.b,got,v.{{.Name}})
+ failed = true
+}
+`)
+	if err != nil {
+		panic(err)
+	}
+
+ for _, s := range szs {
+ fmt.Fprintf(w, "for _, v := range %s_data {\n", s.name)
+
+ for _, o := range ops {
+ // avoid generating tests that divide by zero
+ if o.name == "div" || o.name == "mod" {
+ fmt.Fprint(w, "if v.b != 0 {")
+ }
+
+ err = verify.Execute(w, tmplData{o.name, s.name, o.symbol})
+
+ if o.name == "div" || o.name == "mod" {
+ fmt.Fprint(w, "\n}\n")
+ }
+
+ if err != nil {
+ panic(err)
+ }
+
+ }
+ fmt.Fprint(w, " }\n")
+ }
+
+ fmt.Fprintf(w, `if failed {
+ panic("tests failed")
+ }
+`)
+ fmt.Fprintf(w, "}\n")
+
+ // gofmt result
+ b := w.Bytes()
+ src, err := format.Source(b)
+ if err != nil {
+ fmt.Printf("%s\n", b)
+ panic(err)
+ }
+
+ // write to file
+ err = ioutil.WriteFile("../arithBoundary_ssa.go", src, 0666)
+ if err != nil {
+ log.Fatalf("can't write output: %v\n", err)
+ }
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This program generates a test to verify that the standard arithmetic
+// operators properly handle const cases. The test file should be
+// generated with a known working version of Go.
+// Launch with `go run arithConstGen.go`; a file called arithConst_ssa.go
+// will be written into the parent directory, containing the tests.
+
+package main
+
+import (
+ "bytes"
+ "fmt"
+ "go/format"
+ "io/ioutil"
+ "log"
+ "strings"
+ "text/template"
+)
+
+type op struct {
+ name, symbol string
+}
+type szD struct {
+ name string
+ sn string
+ u []uint64
+ i []int64
+}
+
+var szs []szD = []szD{
+ szD{name: "uint64", sn: "64", u: []uint64{0, 1, 4294967296, 0xffffFFFFffffFFFF}},
+ szD{name: "int64", sn: "64", i: []int64{-0x8000000000000000, -0x7FFFFFFFFFFFFFFF,
+ -4294967296, -1, 0, 1, 4294967296, 0x7FFFFFFFFFFFFFFE, 0x7FFFFFFFFFFFFFFF}},
+
+ szD{name: "uint32", sn: "32", u: []uint64{0, 1, 4294967295}},
+ szD{name: "int32", sn: "32", i: []int64{-0x80000000, -0x7FFFFFFF, -1, 0,
+ 1, 0x7FFFFFFF}},
+
+ szD{name: "uint16", sn: "16", u: []uint64{0, 1, 65535}},
+ szD{name: "int16", sn: "16", i: []int64{-32768, -32767, -1, 0, 1, 32766, 32767}},
+
+ szD{name: "uint8", sn: "8", u: []uint64{0, 1, 255}},
+ szD{name: "int8", sn: "8", i: []int64{-128, -127, -1, 0, 1, 126, 127}},
+}
+
+var ops []op = []op{op{"add", "+"}, op{"sub", "-"}, op{"div", "/"}, op{"mul", "*"},
+ op{"lsh", "<<"}, op{"rsh", ">>"}}
+
+// ansU computes the result of i op j for unsigned inputs, cast to type t.
+func ansU(i, j uint64, t, op string) string {
+ var ans uint64
+ switch op {
+ case "+":
+ ans = i + j
+ case "-":
+ ans = i - j
+ case "*":
+ ans = i * j
+ case "/":
+ if j != 0 {
+ ans = i / j
+ }
+ case "<<":
+ ans = i << j
+ case ">>":
+ ans = i >> j
+ }
+ switch t {
+ case "uint32":
+ ans = uint64(uint32(ans))
+ case "uint16":
+ ans = uint64(uint16(ans))
+ case "uint8":
+ ans = uint64(uint8(ans))
+ }
+ return fmt.Sprintf("%d", ans)
+}
+
+// ansS computes the result of i op j for signed inputs, cast to type t.
+func ansS(i, j int64, t, op string) string {
+ var ans int64
+ switch op {
+ case "+":
+ ans = i + j
+ case "-":
+ ans = i - j
+ case "*":
+ ans = i * j
+ case "/":
+ if j != 0 {
+ ans = i / j
+ }
+ case "<<":
+ ans = i << uint64(j)
+ case ">>":
+ ans = i >> uint64(j)
+ }
+ switch t {
+ case "int32":
+ ans = int64(int32(ans))
+ case "int16":
+ ans = int64(int16(ans))
+ case "int8":
+ ans = int64(int8(ans))
+ }
+ return fmt.Sprintf("%d", ans)
+}
+
+func main() {
+
+ w := new(bytes.Buffer)
+
+ fmt.Fprintf(w, "package main;\n")
+ fmt.Fprintf(w, "import \"fmt\"\n")
+
+ fncCnst1, err := template.New("fnc").Parse(
+ `//go:noinline
+ func {{.Name}}_{{.Type_}}_{{.FNumber}}_ssa(a {{.Type_}}) {{.Type_}} {
+ return a {{.Symbol}} {{.Number}}
+}
+`)
+ if err != nil {
+ panic(err)
+ }
+ fncCnst2, err := template.New("fnc").Parse(
+ `//go:noinline
+ func {{.Name}}_{{.FNumber}}_{{.Type_}}_ssa(a {{.Type_}}) {{.Type_}} {
+ return {{.Number}} {{.Symbol}} a
+}
+
+`)
+ if err != nil {
+ panic(err)
+ }
+
+ type fncData struct {
+ Name, Type_, Symbol, FNumber, Number string
+ }
+
+ for _, s := range szs {
+ for _, o := range ops {
+ fd := fncData{o.name, s.name, o.symbol, "", ""}
+
+ // unsigned test cases
+ if len(s.u) > 0 {
+ for _, i := range s.u {
+ fd.Number = fmt.Sprintf("%d", i)
+ fd.FNumber = strings.Replace(fd.Number, "-", "Neg", -1)
+
+ // avoid division by zero
+ if o.name != "div" || i != 0 {
+ fncCnst1.Execute(w, fd)
+ }
+
+ fncCnst2.Execute(w, fd)
+ }
+ }
+
+ // signed test cases
+ if len(s.i) > 0 {
+ // don't generate tests for shifts by signed integers
+ if o.name == "lsh" || o.name == "rsh" {
+ continue
+ }
+ for _, i := range s.i {
+ fd.Number = fmt.Sprintf("%d", i)
+ fd.FNumber = strings.Replace(fd.Number, "-", "Neg", -1)
+
+ // avoid division by zero
+ if o.name != "div" || i != 0 {
+ fncCnst1.Execute(w, fd)
+ }
+ fncCnst2.Execute(w, fd)
+ }
+ }
+ }
+ }
+
+ fmt.Fprintf(w, "var failed bool\n\n")
+ fmt.Fprintf(w, "func main() {\n\n")
+
+ vrf1, _ := template.New("vrf1").Parse(`
+ if got := {{.Name}}_{{.FNumber}}_{{.Type_}}_ssa({{.Input}}); got != {{.Ans}} {
+ fmt.Printf("{{.Name}}_{{.Type_}} {{.Number}}{{.Symbol}}{{.Input}} = %d, wanted {{.Ans}}\n",got)
+ failed = true
+ }
+`)
+
+ vrf2, _ := template.New("vrf2").Parse(`
+ if got := {{.Name}}_{{.Type_}}_{{.FNumber}}_ssa({{.Input}}); got != {{.Ans}} {
+ fmt.Printf("{{.Name}}_{{.Type_}} {{.Input}}{{.Symbol}}{{.Number}} = %d, wanted {{.Ans}}\n",got)
+ failed = true
+ }
+`)
+
+ type cfncData struct {
+ Name, Type_, Symbol, FNumber, Number string
+ Ans, Input string
+ }
+ for _, s := range szs {
+ if len(s.u) > 0 {
+ for _, o := range ops {
+ fd := cfncData{o.name, s.name, o.symbol, "", "", "", ""}
+ for _, i := range s.u {
+ fd.Number = fmt.Sprintf("%d", i)
+ fd.FNumber = strings.Replace(fd.Number, "-", "Neg", -1)
+
+ // unsigned
+ for _, j := range s.u {
+
+ if o.name != "div" || j != 0 {
+ fd.Ans = ansU(i, j, s.name, o.symbol)
+ fd.Input = fmt.Sprintf("%d", j)
+ err = vrf1.Execute(w, fd)
+ if err != nil {
+ panic(err)
+ }
+ }
+
+ if o.name != "div" || i != 0 {
+ fd.Ans = ansU(j, i, s.name, o.symbol)
+ fd.Input = fmt.Sprintf("%d", j)
+ err = vrf2.Execute(w, fd)
+ if err != nil {
+ panic(err)
+ }
+ }
+
+ }
+ }
+
+ }
+ }
+
+ // signed
+ if len(s.i) > 0 {
+ for _, o := range ops {
+ // don't generate tests for shifts by signed integers
+ if o.name == "lsh" || o.name == "rsh" {
+ continue
+ }
+ fd := cfncData{o.name, s.name, o.symbol, "", "", "", ""}
+ for _, i := range s.i {
+ fd.Number = fmt.Sprintf("%d", i)
+ fd.FNumber = strings.Replace(fd.Number, "-", "Neg", -1)
+ for _, j := range s.i {
+ if o.name != "div" || j != 0 {
+ fd.Ans = ansS(i, j, s.name, o.symbol)
+ fd.Input = fmt.Sprintf("%d", j)
+ err = vrf1.Execute(w, fd)
+ if err != nil {
+ panic(err)
+ }
+ }
+
+ if o.name != "div" || i != 0 {
+ fd.Ans = ansS(j, i, s.name, o.symbol)
+ fd.Input = fmt.Sprintf("%d", j)
+ err = vrf2.Execute(w, fd)
+ if err != nil {
+ panic(err)
+ }
+ }
+
+ }
+ }
+
+ }
+ }
+ }
+
+ fmt.Fprintf(w, `if failed {
+ panic("tests failed")
+ }
+`)
+ fmt.Fprintf(w, "}\n")
+
+ // gofmt result
+ b := w.Bytes()
+ src, err := format.Source(b)
+ if err != nil {
+ fmt.Printf("%s\n", b)
+ panic(err)
+ }
+
+ // write to file
+ err = ioutil.WriteFile("../arithConst_ssa.go", src, 0666)
+ if err != nil {
+ log.Fatalf("can't write output: %v\n", err)
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+import (
+ "bytes"
+ "fmt"
+ "go/format"
+ "io/ioutil"
+ "log"
+)
+
+// This program generates tests to verify that copying operations
+// copy the data they are supposed to and clobber no adjacent values.
+
+// run as `go run copyGen.go`. A file called copy_ssa.go
+// will be written into the parent directory containing the tests.
+
+var sizes = [...]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 16, 17, 23, 24, 25, 31, 32, 33, 63, 64, 65, 1023, 1024, 1025, 1024 + 7, 1024 + 8, 1024 + 9, 1024 + 15, 1024 + 16, 1024 + 17}
+
+func main() {
+ w := new(bytes.Buffer)
+ fmt.Fprintf(w, "// run\n")
+ fmt.Fprintf(w, "// autogenerated from gen/copyGen.go - do not edit!\n")
+ fmt.Fprintf(w, "package main\n")
+ fmt.Fprintf(w, "import \"fmt\"\n")
+
+ for _, s := range sizes {
+ // type for test
+ fmt.Fprintf(w, "type T%d struct {\n", s)
+ fmt.Fprintf(w, " pre [8]byte\n")
+ fmt.Fprintf(w, " mid [%d]byte\n", s)
+ fmt.Fprintf(w, " post [8]byte\n")
+ fmt.Fprintf(w, "}\n")
+
+ // function being tested
+ fmt.Fprintf(w, "func t%dcopy_ssa(y, x *[%d]byte) {\n", s, s)
+ fmt.Fprintf(w, " switch{}\n")
+ fmt.Fprintf(w, " *y = *x\n")
+ fmt.Fprintf(w, "}\n")
+
+ // testing harness
+ fmt.Fprintf(w, "func testCopy%d() {\n", s)
+ fmt.Fprintf(w, " a := T%d{[8]byte{201, 202, 203, 204, 205, 206, 207, 208},[%d]byte{", s, s)
+ for i := 0; i < s; i++ {
+ fmt.Fprintf(w, "%d,", i%100)
+ }
+ fmt.Fprintf(w, "},[8]byte{211, 212, 213, 214, 215, 216, 217, 218}}\n")
+ fmt.Fprintf(w, " x := [%d]byte{", s)
+ for i := 0; i < s; i++ {
+ fmt.Fprintf(w, "%d,", 100+i%100)
+ }
+ fmt.Fprintf(w, "}\n")
+ fmt.Fprintf(w, " t%dcopy_ssa(&a.mid, &x)\n", s)
+ fmt.Fprintf(w, " want := T%d{[8]byte{201, 202, 203, 204, 205, 206, 207, 208},[%d]byte{", s, s)
+ for i := 0; i < s; i++ {
+ fmt.Fprintf(w, "%d,", 100+i%100)
+ }
+ fmt.Fprintf(w, "},[8]byte{211, 212, 213, 214, 215, 216, 217, 218}}\n")
+ fmt.Fprintf(w, " if a != want {\n")
+ fmt.Fprintf(w, " fmt.Printf(\"t%dcopy got=%%v, want %%v\\n\", a, want)\n", s)
+ fmt.Fprintf(w, " failed=true\n")
+ fmt.Fprintf(w, " }\n")
+ fmt.Fprintf(w, "}\n")
+ }
+
+ // boilerplate at end
+ fmt.Fprintf(w, "var failed bool\n")
+ fmt.Fprintf(w, "func main() {\n")
+ for _, s := range sizes {
+ fmt.Fprintf(w, " testCopy%d()\n", s)
+ }
+ fmt.Fprintf(w, " if failed {\n")
+ fmt.Fprintf(w, " panic(\"failed\")\n")
+ fmt.Fprintf(w, " }\n")
+ fmt.Fprintf(w, "}\n")
+
+ // gofmt result
+ b := w.Bytes()
+ src, err := format.Source(b)
+ if err != nil {
+ fmt.Printf("%s\n", b)
+ panic(err)
+ }
+
+ // write to file
+ err = ioutil.WriteFile("../copy_ssa.go", src, 0666)
+ if err != nil {
+ log.Fatalf("can't write output: %v\n", err)
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+import (
+ "bytes"
+ "fmt"
+ "go/format"
+ "io/ioutil"
+ "log"
+)
+
+// This program generates tests to verify that zeroing operations
+// zero the data they are supposed to and clobber no adjacent values.
+
+// run as `go run zeroGen.go`. A file called zero_ssa.go
+// will be written into the parent directory containing the tests.
+
+var sizes = [...]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 16, 17, 23, 24, 25, 31, 32, 33, 63, 64, 65, 1023, 1024, 1025}
+
+func main() {
+ w := new(bytes.Buffer)
+ fmt.Fprintf(w, "// run\n")
+ fmt.Fprintf(w, "// autogenerated from gen/zeroGen.go - do not edit!\n")
+ fmt.Fprintf(w, "package main\n")
+ fmt.Fprintf(w, "import \"fmt\"\n")
+
+ for _, s := range sizes {
+ // type for test
+ fmt.Fprintf(w, "type T%d struct {\n", s)
+ fmt.Fprintf(w, " pre [8]byte\n")
+ fmt.Fprintf(w, " mid [%d]byte\n", s)
+ fmt.Fprintf(w, " post [8]byte\n")
+ fmt.Fprintf(w, "}\n")
+
+ // function being tested
+ fmt.Fprintf(w, "func zero%d_ssa(x *[%d]byte) {\n", s, s)
+ fmt.Fprintf(w, " switch{}\n")
+ fmt.Fprintf(w, " *x = [%d]byte{}\n", s)
+ fmt.Fprintf(w, "}\n")
+
+ // testing harness
+ fmt.Fprintf(w, "func testZero%d() {\n", s)
+ fmt.Fprintf(w, " a := T%d{[8]byte{255,255,255,255,255,255,255,255},[%d]byte{", s, s)
+ for i := 0; i < s; i++ {
+ fmt.Fprintf(w, "255,")
+ }
+ fmt.Fprintf(w, "},[8]byte{255,255,255,255,255,255,255,255}}\n")
+ fmt.Fprintf(w, " zero%d_ssa(&a.mid)\n", s)
+ fmt.Fprintf(w, " want := T%d{[8]byte{255,255,255,255,255,255,255,255},[%d]byte{", s, s)
+ for i := 0; i < s; i++ {
+ fmt.Fprintf(w, "0,")
+ }
+ fmt.Fprintf(w, "},[8]byte{255,255,255,255,255,255,255,255}}\n")
+ fmt.Fprintf(w, " if a != want {\n")
+ fmt.Fprintf(w, " fmt.Printf(\"zero%d got=%%v, want %%v\\n\", a, want)\n", s)
+ fmt.Fprintf(w, " failed=true\n")
+ fmt.Fprintf(w, " }\n")
+ fmt.Fprintf(w, "}\n")
+ }
+
+ // boilerplate at end
+ fmt.Fprintf(w, "var failed bool\n")
+ fmt.Fprintf(w, "func main() {\n")
+ for _, s := range sizes {
+ fmt.Fprintf(w, " testZero%d()\n", s)
+ }
+ fmt.Fprintf(w, " if failed {\n")
+ fmt.Fprintf(w, " panic(\"failed\")\n")
+ fmt.Fprintf(w, " }\n")
+ fmt.Fprintf(w, "}\n")
+
+ // gofmt result
+ b := w.Bytes()
+ src, err := format.Source(b)
+ if err != nil {
+ fmt.Printf("%s\n", b)
+ panic(err)
+ }
+
+ // write to file
+ err = ioutil.WriteFile("../zero_ssa.go", src, 0666)
+ if err != nil {
+ log.Fatalf("can't write output: %v\n", err)
+ }
+}
--- /dev/null
+// run
+
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Tests load/store ordering
+
+package main
+
+import "fmt"
+
+// testLoadStoreOrder tests for reordering of stores/loads.
+func testLoadStoreOrder() {
+ z := uint32(1000)
+ if testLoadStoreOrder_ssa(&z, 100) == 0 {
+ println("testLoadStoreOrder failed")
+ failed = true
+ }
+}
+func testLoadStoreOrder_ssa(z *uint32, prec uint) int {
+ switch {
+ }
+ old := *z // load
+ *z = uint32(prec) // store
+ if *z < old { // load
+ return 1
+ }
+ return 0
+}
+
+func testStoreSize() {
+ a := [4]uint16{11, 22, 33, 44}
+ testStoreSize_ssa(&a[0], &a[2], 77)
+ want := [4]uint16{77, 22, 33, 44}
+ if a != want {
+ fmt.Println("testStoreSize failed. want =", want, ", got =", a)
+ failed = true
+ }
+}
+func testStoreSize_ssa(p *uint16, q *uint16, v uint32) {
+ switch {
+ }
+ // Test to make sure that (Store ptr (Trunc32to16 val) mem)
+	// does not end up as a 32-bit store. It must stay a 16-bit store
+	// even when Trunc32to16 is rewritten to be a nop.
+	// To ensure that we rewrite the Trunc32to16 before
+ // we rewrite the Store, we force the truncate into an
+ // earlier basic block by using it on both branches.
+ w := uint16(v)
+ if p != nil {
+ *p = w
+ } else {
+ *q = w
+ }
+}
+
+var failed = false
+
+func testExtStore_ssa(p *byte, b bool) int {
+ switch {
+ }
+ x := *p
+ *p = 7
+ if b {
+ return int(x)
+ }
+ return 0
+}
+
+func testExtStore() {
+ const start = 8
+ var b byte = start
+ if got := testExtStore_ssa(&b, true); got != start {
+ fmt.Println("testExtStore failed. want =", start, ", got =", got)
+ failed = true
+ }
+}
+
+var b int
+
+// testDeadStorePanic_ssa ensures that we don't optimize away stores
+// that could be read after recover(). Modeled after fixedbugs/issue1304.
+func testDeadStorePanic_ssa(a int) (r int) {
+ switch {
+ }
+ defer func() {
+ recover()
+ r = a
+ }()
+ a = 2 // store
+ b := a - a // optimized to zero
+ c := 4
+ a = c / b // store, but panics
+ a = 3 // store
+ r = a
+ return
+}
+
+func testDeadStorePanic() {
+ if want, got := 2, testDeadStorePanic_ssa(1); want != got {
+ fmt.Println("testDeadStorePanic failed. want =", want, ", got =", got)
+ failed = true
+ }
+}
+
+func main() {
+
+ testLoadStoreOrder()
+ testStoreSize()
+ testExtStore()
+ testDeadStorePanic()
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// map_ssa.go tests map operations.
+package main
+
+import "fmt"
+
+var failed = false
+
+//go:noinline
+func lenMap_ssa(v map[int]int) int {
+ return len(v)
+}
+
+func testLenMap() {
+
+ v := make(map[int]int)
+ v[0] = 0
+ v[1] = 0
+ v[2] = 0
+
+ if want, got := 3, lenMap_ssa(v); got != want {
+		fmt.Printf("expected len(map) = %d, got %d\n", want, got)
+ failed = true
+ }
+}
+
+func testLenNilMap() {
+
+ var v map[int]int
+ if want, got := 0, lenMap_ssa(v); got != want {
+		fmt.Printf("expected len(nil) = %d, got %d\n", want, got)
+ failed = true
+ }
+}
+func main() {
+ testLenMap()
+ testLenNilMap()
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+// Test to make sure spills of cast-shortened values
+// don't end up spilling the pre-shortened size instead
+// of the post-shortened size.
+
+import (
+ "fmt"
+ "runtime"
+)
+
+// unfoldable true
+var true_ = true
+
+var data1 [26]int32
+var data2 [26]int64
+
+func init() {
+ for i := 0; i < 26; i++ {
+ // If we spill all 8 bytes of this datum, the 1 in the high-order 4 bytes
+ // will overwrite some other variable in the stack frame.
+ data2[i] = 0x100000000
+ }
+}
+
+func foo() int32 {
+ var a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z int32
+ if true_ {
+ a = data1[0]
+ b = data1[1]
+ c = data1[2]
+ d = data1[3]
+ e = data1[4]
+ f = data1[5]
+ g = data1[6]
+ h = data1[7]
+ i = data1[8]
+ j = data1[9]
+ k = data1[10]
+ l = data1[11]
+ m = data1[12]
+ n = data1[13]
+ o = data1[14]
+ p = data1[15]
+ q = data1[16]
+ r = data1[17]
+ s = data1[18]
+ t = data1[19]
+ u = data1[20]
+ v = data1[21]
+ w = data1[22]
+ x = data1[23]
+ y = data1[24]
+ z = data1[25]
+ } else {
+ a = int32(data2[0])
+ b = int32(data2[1])
+ c = int32(data2[2])
+ d = int32(data2[3])
+ e = int32(data2[4])
+ f = int32(data2[5])
+ g = int32(data2[6])
+ h = int32(data2[7])
+ i = int32(data2[8])
+ j = int32(data2[9])
+ k = int32(data2[10])
+ l = int32(data2[11])
+ m = int32(data2[12])
+ n = int32(data2[13])
+ o = int32(data2[14])
+ p = int32(data2[15])
+ q = int32(data2[16])
+ r = int32(data2[17])
+ s = int32(data2[18])
+ t = int32(data2[19])
+ u = int32(data2[20])
+ v = int32(data2[21])
+ w = int32(data2[22])
+ x = int32(data2[23])
+ y = int32(data2[24])
+ z = int32(data2[25])
+ }
+ // Lots of phis of the form phi(int32,int64) of type int32 happen here.
+ // Some will be stack phis. For those stack phis, make sure the spill
+	// of the second argument uses the phi's width (4 bytes), not the
+	// argument's width (8 bytes). Otherwise, a random stack slot gets clobbered.
+
+ runtime.Gosched()
+ return a + b + c + d + e + f + g + h + i + j + k + l + m + n + o + p + q + r + s + t + u + v + w + x + y + z
+}
+
+func main() {
+ want := int32(0)
+ got := foo()
+ if got != want {
+ fmt.Printf("want %d, got %d\n", want, got)
+ panic("bad")
+ }
+}
--- /dev/null
+// run
+
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Tests phi implementation
+
+package main
+
+func phiOverwrite_ssa() int {
+ var n int
+ for i := 0; i < 10; i++ {
+ if i == 6 {
+ break
+ }
+ n = i
+ }
+ return n
+}
+
+func phiOverwrite() {
+ want := 5
+ got := phiOverwrite_ssa()
+ if got != want {
+ println("phiOverwrite_ssa()=", want, ", got", got)
+ failed = true
+ }
+}
+
+func phiOverwriteBig_ssa() int {
+ var a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z int
+ a = 1
+ for idx := 0; idx < 26; idx++ {
+ a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z = b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z, a
+ }
+ return a*1 + b*2 + c*3 + d*4 + e*5 + f*6 + g*7 + h*8 + i*9 + j*10 + k*11 + l*12 + m*13 + n*14 + o*15 + p*16 + q*17 + r*18 + s*19 + t*20 + u*21 + v*22 + w*23 + x*24 + y*25 + z*26
+}
+
+func phiOverwriteBig() {
+ want := 1
+ got := phiOverwriteBig_ssa()
+ if got != want {
+ println("phiOverwriteBig_ssa()=", want, ", got", got)
+ failed = true
+ }
+}
+
+var failed = false
+
+func main() {
+ phiOverwrite()
+ phiOverwriteBig()
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// run
+
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Tests short circuiting.
+
+package main
+
+func and_ssa(arg1, arg2 bool) bool {
+ return arg1 && rightCall(arg2)
+}
+
+func or_ssa(arg1, arg2 bool) bool {
+ return arg1 || rightCall(arg2)
+}
+
+var rightCalled bool
+
+//go:noinline
+func rightCall(v bool) bool {
+ rightCalled = true
+ return v
+ panic("unreached")
+}
+
+func testAnd(arg1, arg2, wantRes bool) { testShortCircuit("AND", arg1, arg2, and_ssa, arg1, wantRes) }
+func testOr(arg1, arg2, wantRes bool) { testShortCircuit("OR", arg1, arg2, or_ssa, !arg1, wantRes) }
+
+func testShortCircuit(opName string, arg1, arg2 bool, fn func(bool, bool) bool, wantRightCall, wantRes bool) {
+ rightCalled = false
+ got := fn(arg1, arg2)
+ if rightCalled != wantRightCall {
+ println("failed for", arg1, opName, arg2, "; rightCalled=", rightCalled, "want=", wantRightCall)
+ failed = true
+ }
+ if wantRes != got {
+ println("failed for", arg1, opName, arg2, "; res=", got, "want=", wantRes)
+ failed = true
+ }
+}
+
+var failed = false
+
+func main() {
+ testAnd(false, false, false)
+ testAnd(false, true, false)
+ testAnd(true, false, false)
+ testAnd(true, true, true)
+
+ testOr(false, false, false)
+ testOr(false, true, true)
+ testOr(true, false, true)
+ testOr(true, true, true)
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// string_ssa.go tests string operations.
+package main
+
+var failed = false
+
+//go:noinline
+func testStringSlice1_ssa(a string, i, j int) string {
+ return a[i:]
+}
+
+//go:noinline
+func testStringSlice2_ssa(a string, i, j int) string {
+ return a[:j]
+}
+
+//go:noinline
+func testStringSlice12_ssa(a string, i, j int) string {
+ return a[i:j]
+}
+
+func testStringSlice() {
+ tests := [...]struct {
+ fn func(string, int, int) string
+ s string
+ low, high int
+ want string
+ }{
+ // -1 means the value is not used.
+ {testStringSlice1_ssa, "foobar", 0, -1, "foobar"},
+ {testStringSlice1_ssa, "foobar", 3, -1, "bar"},
+ {testStringSlice1_ssa, "foobar", 6, -1, ""},
+ {testStringSlice2_ssa, "foobar", -1, 0, ""},
+ {testStringSlice2_ssa, "foobar", -1, 3, "foo"},
+ {testStringSlice2_ssa, "foobar", -1, 6, "foobar"},
+ {testStringSlice12_ssa, "foobar", 0, 6, "foobar"},
+ {testStringSlice12_ssa, "foobar", 0, 0, ""},
+ {testStringSlice12_ssa, "foobar", 6, 6, ""},
+ {testStringSlice12_ssa, "foobar", 1, 5, "ooba"},
+ {testStringSlice12_ssa, "foobar", 3, 3, ""},
+ {testStringSlice12_ssa, "", 0, 0, ""},
+ }
+
+ for i, t := range tests {
+ if got := t.fn(t.s, t.low, t.high); t.want != got {
+ println("#", i, " ", t.s, "[", t.low, ":", t.high, "] = ", got, " want ", t.want)
+ failed = true
+ }
+ }
+}
+
+type prefix struct {
+ prefix string
+}
+
+func (p *prefix) slice_ssa() {
+ p.prefix = p.prefix[:3]
+}
+
+func testStructSlice() {
+ switch {
+ }
+ p := &prefix{"prefix"}
+ p.slice_ssa()
+ if "pre" != p.prefix {
+		println("wrong field slice: wanted pre, got", p.prefix)
+ failed = true
+ }
+}
+
+func testStringSlicePanic() {
+ defer func() {
+ if r := recover(); r != nil {
+			println("panicked as expected")
+ }
+ }()
+
+ str := "foobar"
+ println("got ", testStringSlice12_ssa(str, 3, 9))
+ println("expected to panic, but didn't")
+ failed = true
+}
+
+const _Accuracy_name = "BelowExactAbove"
+
+var _Accuracy_index = [...]uint8{0, 5, 10, 15}
+
+//go:noinline
+func testSmallIndexType_ssa(i int) string {
+ return _Accuracy_name[_Accuracy_index[i]:_Accuracy_index[i+1]]
+}
+
+func testSmallIndexType() {
+ tests := []struct {
+ i int
+ want string
+ }{
+ {0, "Below"},
+ {1, "Exact"},
+ {2, "Above"},
+ }
+
+ for i, t := range tests {
+ if got := testSmallIndexType_ssa(t.i); got != t.want {
+ println("#", i, "got ", got, ", wanted", t.want)
+ failed = true
+ }
+ }
+}
+
+//go:noinline
+func testStringElem_ssa(s string, i int) byte {
+ return s[i]
+}
+
+func testStringElem() {
+ tests := []struct {
+ s string
+ i int
+ n byte
+ }{
+ {"foobar", 3, 98},
+ {"foobar", 0, 102},
+ {"foobar", 5, 114},
+ }
+ for _, t := range tests {
+ if got := testStringElem_ssa(t.s, t.i); got != t.n {
+ print("testStringElem \"", t.s, "\"[", t.i, "]=", got, ", wanted ", t.n, "\n")
+ failed = true
+ }
+ }
+}
+
+//go:noinline
+func testStringElemConst_ssa(i int) byte {
+ s := "foobar"
+ return s[i]
+}
+
+func testStringElemConst() {
+ if got := testStringElemConst_ssa(3); got != 98 {
+ println("testStringElemConst=", got, ", wanted 98")
+ failed = true
+ }
+}
+
+func main() {
+ testStringSlice()
+ testStringSlicePanic()
+ testStructSlice()
+ testSmallIndexType()
+ testStringElem()
+ testStringElemConst()
+
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+import (
+ "fmt"
+ "runtime"
+ "unsafe"
+)
+
+// global pointer slot
+var a *[8]uint
+
+// unfoldable true
+var b = true
+
+// Test to make sure that a pointer value which is alive
+// across a call is retained, even when there are matching
+// conversions to/from uintptr around the call.
+// We arrange things very carefully to have to/from
+// conversions on either side of the call which cannot be
+// combined with any other conversions.
+func f_ssa() *[8]uint {
+ // Make x a uintptr pointing to where a points.
+ var x uintptr
+ if b {
+ x = uintptr(unsafe.Pointer(a))
+ } else {
+ x = 0
+ }
+ // Clobber the global pointer. The only live ref
+ // to the allocated object is now x.
+ a = nil
+
+ // Convert to pointer so it should hold
+ // the object live across GC call.
+ p := unsafe.Pointer(x)
+
+ // Call gc.
+ runtime.GC()
+
+ // Convert back to uintptr.
+ y := uintptr(p)
+
+ // Mess with y so that the subsequent cast
+ // to unsafe.Pointer can't be combined with the
+ // uintptr cast above.
+ var z uintptr
+ if b {
+ z = y
+ } else {
+ z = 0
+ }
+ return (*[8]uint)(unsafe.Pointer(z))
+}
+
+// g_ssa is the same as f_ssa, but with a bit of pointer
+// arithmetic for added insanity.
+func g_ssa() *[7]uint {
+ // Make x a uintptr pointing to where a points.
+ var x uintptr
+ if b {
+ x = uintptr(unsafe.Pointer(a))
+ } else {
+ x = 0
+ }
+ // Clobber the global pointer. The only live ref
+ // to the allocated object is now x.
+ a = nil
+
+ // Offset x by one int.
+ x += unsafe.Sizeof(int(0))
+
+ // Convert to pointer so it should hold
+ // the object live across GC call.
+ p := unsafe.Pointer(x)
+
+ // Call gc.
+ runtime.GC()
+
+ // Convert back to uintptr.
+ y := uintptr(p)
+
+ // Mess with y so that the subsequent cast
+ // to unsafe.Pointer can't be combined with the
+ // uintptr cast above.
+ var z uintptr
+ if b {
+ z = y
+ } else {
+ z = 0
+ }
+ return (*[7]uint)(unsafe.Pointer(z))
+}
+
+func testf() {
+ a = new([8]uint)
+ for i := 0; i < 8; i++ {
+ a[i] = 0xabcd
+ }
+ c := f_ssa()
+ for i := 0; i < 8; i++ {
+ if c[i] != 0xabcd {
+ fmt.Printf("%d:%x\n", i, c[i])
+ panic("bad c")
+ }
+ }
+}
+
+func testg() {
+ a = new([8]uint)
+ for i := 0; i < 8; i++ {
+ a[i] = 0xabcd
+ }
+ c := g_ssa()
+ for i := 0; i < 7; i++ {
+ if c[i] != 0xabcd {
+ fmt.Printf("%d:%x\n", i, c[i])
+ panic("bad c")
+ }
+ }
+}
+
+func alias_ssa(ui64 *uint64, ui32 *uint32) uint32 {
+ *ui32 = 0xffffffff
+ *ui64 = 0 // store
+ ret := *ui32 // load from same address, should be zero
+ *ui64 = 0xffffffffffffffff // store
+ return ret
+}
+func testdse() {
+ x := int64(-1)
+ // construct two pointers that alias one another
+ ui64 := (*uint64)(unsafe.Pointer(&x))
+ ui32 := (*uint32)(unsafe.Pointer(&x))
+ if want, got := uint32(0), alias_ssa(ui64, ui32); got != want {
+ fmt.Printf("alias_ssa: wanted %d, got %d\n", want, got)
+ panic("alias_ssa")
+ }
+}
+
+func main() {
+ testf()
+ testg()
+ testdse()
+}
--- /dev/null
+// run
+// autogenerated from gen/zeroGen.go - do not edit!
+package main
+
+import "fmt"
+
+type T1 struct {
+ pre [8]byte
+ mid [1]byte
+ post [8]byte
+}
+
+func zero1_ssa(x *[1]byte) {
+ switch {
+ }
+ *x = [1]byte{}
+}
+func testZero1() {
+ a := T1{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [1]byte{255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero1_ssa(&a.mid)
+ want := T1{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [1]byte{0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero1 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T2 struct {
+ pre [8]byte
+ mid [2]byte
+ post [8]byte
+}
+
+func zero2_ssa(x *[2]byte) {
+ switch {
+ }
+ *x = [2]byte{}
+}
+func testZero2() {
+ a := T2{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [2]byte{255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero2_ssa(&a.mid)
+ want := T2{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [2]byte{0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero2 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T3 struct {
+ pre [8]byte
+ mid [3]byte
+ post [8]byte
+}
+
+func zero3_ssa(x *[3]byte) {
+ switch {
+ }
+ *x = [3]byte{}
+}
+func testZero3() {
+ a := T3{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [3]byte{255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero3_ssa(&a.mid)
+ want := T3{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [3]byte{0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero3 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T4 struct {
+ pre [8]byte
+ mid [4]byte
+ post [8]byte
+}
+
+func zero4_ssa(x *[4]byte) {
+ switch {
+ }
+ *x = [4]byte{}
+}
+func testZero4() {
+ a := T4{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [4]byte{255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero4_ssa(&a.mid)
+ want := T4{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [4]byte{0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero4 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T5 struct {
+ pre [8]byte
+ mid [5]byte
+ post [8]byte
+}
+
+func zero5_ssa(x *[5]byte) {
+ switch {
+ }
+ *x = [5]byte{}
+}
+func testZero5() {
+ a := T5{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [5]byte{255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero5_ssa(&a.mid)
+ want := T5{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [5]byte{0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero5 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T6 struct {
+ pre [8]byte
+ mid [6]byte
+ post [8]byte
+}
+
+func zero6_ssa(x *[6]byte) {
+ switch {
+ }
+ *x = [6]byte{}
+}
+func testZero6() {
+ a := T6{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [6]byte{255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero6_ssa(&a.mid)
+ want := T6{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [6]byte{0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero6 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T7 struct {
+ pre [8]byte
+ mid [7]byte
+ post [8]byte
+}
+
+func zero7_ssa(x *[7]byte) {
+ switch {
+ }
+ *x = [7]byte{}
+}
+func testZero7() {
+ a := T7{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [7]byte{255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero7_ssa(&a.mid)
+ want := T7{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [7]byte{0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero7 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T8 struct {
+ pre [8]byte
+ mid [8]byte
+ post [8]byte
+}
+
+func zero8_ssa(x *[8]byte) {
+ switch {
+ }
+ *x = [8]byte{}
+}
+func testZero8() {
+ a := T8{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero8_ssa(&a.mid)
+ want := T8{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero8 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T9 struct {
+ pre [8]byte
+ mid [9]byte
+ post [8]byte
+}
+
+func zero9_ssa(x *[9]byte) {
+ switch {
+ }
+ *x = [9]byte{}
+}
+func testZero9() {
+ a := T9{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [9]byte{255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero9_ssa(&a.mid)
+ want := T9{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [9]byte{0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero9 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T10 struct {
+ pre [8]byte
+ mid [10]byte
+ post [8]byte
+}
+
+func zero10_ssa(x *[10]byte) {
+ switch {
+ }
+ *x = [10]byte{}
+}
+func testZero10() {
+ a := T10{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [10]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero10_ssa(&a.mid)
+ want := T10{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [10]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero10 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T15 struct {
+ pre [8]byte
+ mid [15]byte
+ post [8]byte
+}
+
+func zero15_ssa(x *[15]byte) {
+ switch {
+ }
+ *x = [15]byte{}
+}
+func testZero15() {
+ a := T15{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [15]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero15_ssa(&a.mid)
+ want := T15{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [15]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero15 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T16 struct {
+ pre [8]byte
+ mid [16]byte
+ post [8]byte
+}
+
+func zero16_ssa(x *[16]byte) {
+ switch {
+ }
+ *x = [16]byte{}
+}
+func testZero16() {
+ a := T16{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [16]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero16_ssa(&a.mid)
+ want := T16{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [16]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero16 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T17 struct {
+ pre [8]byte
+ mid [17]byte
+ post [8]byte
+}
+
+func zero17_ssa(x *[17]byte) {
+ switch {
+ }
+ *x = [17]byte{}
+}
+func testZero17() {
+ a := T17{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [17]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero17_ssa(&a.mid)
+ want := T17{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [17]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero17 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T23 struct {
+ pre [8]byte
+ mid [23]byte
+ post [8]byte
+}
+
+func zero23_ssa(x *[23]byte) {
+ switch {
+ }
+ *x = [23]byte{}
+}
+func testZero23() {
+ a := T23{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [23]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero23_ssa(&a.mid)
+ want := T23{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [23]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero23 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T24 struct {
+ pre [8]byte
+ mid [24]byte
+ post [8]byte
+}
+
+func zero24_ssa(x *[24]byte) {
+ switch {
+ }
+ *x = [24]byte{}
+}
+func testZero24() {
+ a := T24{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [24]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero24_ssa(&a.mid)
+ want := T24{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [24]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero24 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T25 struct {
+ pre [8]byte
+ mid [25]byte
+ post [8]byte
+}
+
+func zero25_ssa(x *[25]byte) {
+ switch {
+ }
+ *x = [25]byte{}
+}
+func testZero25() {
+ a := T25{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [25]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero25_ssa(&a.mid)
+ want := T25{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [25]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero25 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T31 struct {
+ pre [8]byte
+ mid [31]byte
+ post [8]byte
+}
+
+func zero31_ssa(x *[31]byte) {
+ switch {
+ }
+ *x = [31]byte{}
+}
+func testZero31() {
+ a := T31{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [31]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero31_ssa(&a.mid)
+ want := T31{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [31]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero31 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T32 struct {
+ pre [8]byte
+ mid [32]byte
+ post [8]byte
+}
+
+func zero32_ssa(x *[32]byte) {
+ switch {
+ }
+ *x = [32]byte{}
+}
+func testZero32() {
+ a := T32{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [32]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero32_ssa(&a.mid)
+ want := T32{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [32]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero32 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T33 struct {
+ pre [8]byte
+ mid [33]byte
+ post [8]byte
+}
+
+func zero33_ssa(x *[33]byte) {
+ switch {
+ }
+ *x = [33]byte{}
+}
+func testZero33() {
+ a := T33{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [33]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero33_ssa(&a.mid)
+ want := T33{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [33]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero33 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T63 struct {
+ pre [8]byte
+ mid [63]byte
+ post [8]byte
+}
+
+func zero63_ssa(x *[63]byte) {
+ switch {
+ }
+ *x = [63]byte{}
+}
+func testZero63() {
+ a := T63{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [63]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero63_ssa(&a.mid)
+ want := T63{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [63]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero63 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T64 struct {
+ pre [8]byte
+ mid [64]byte
+ post [8]byte
+}
+
+func zero64_ssa(x *[64]byte) {
+ switch {
+ }
+ *x = [64]byte{}
+}
+func testZero64() {
+ a := T64{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [64]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero64_ssa(&a.mid)
+ want := T64{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [64]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero64 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T65 struct {
+ pre [8]byte
+ mid [65]byte
+ post [8]byte
+}
+
+func zero65_ssa(x *[65]byte) {
+ switch {
+ }
+ *x = [65]byte{}
+}
+func testZero65() {
+ a := T65{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [65]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero65_ssa(&a.mid)
+ want := T65{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [65]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero65 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1023 struct {
+ pre [8]byte
+ mid [1023]byte
+ post [8]byte
+}
+
+func zero1023_ssa(x *[1023]byte) {
+ switch {
+ }
+ *x = [1023]byte{}
+}
+func testZero1023() {
+ a := T1023{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [1023]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero1023_ssa(&a.mid)
+ want := T1023{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [1023]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero1023 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1024 struct {
+ pre [8]byte
+ mid [1024]byte
+ post [8]byte
+}
+
+func zero1024_ssa(x *[1024]byte) {
+ switch {
+ }
+ *x = [1024]byte{}
+}
+func testZero1024() {
+ a := T1024{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [1024]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero1024_ssa(&a.mid)
+ want := T1024{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [1024]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero1024 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+type T1025 struct {
+ pre [8]byte
+ mid [1025]byte
+ post [8]byte
+}
+
+func zero1025_ssa(x *[1025]byte) {
+ switch {
+ }
+ *x = [1025]byte{}
+}
+func testZero1025() {
+ a := T1025{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [1025]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 
255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ zero1025_ssa(&a.mid)
+ want := T1025{[8]byte{255, 255, 255, 255, 255, 255, 255, 255}, [1025]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, [8]byte{255, 255, 255, 255, 255, 255, 255, 255}}
+ if a != want {
+ fmt.Printf("zero1025 got=%v, want %v\n", a, want)
+ failed = true
+ }
+}
+
+var failed bool
+
+func main() {
+ testZero1()
+ testZero2()
+ testZero3()
+ testZero4()
+ testZero5()
+ testZero6()
+ testZero7()
+ testZero8()
+ testZero9()
+ testZero10()
+ testZero15()
+ testZero16()
+ testZero17()
+ testZero23()
+ testZero24()
+ testZero25()
+ testZero31()
+ testZero32()
+ testZero33()
+ testZero63()
+ testZero64()
+ testZero65()
+ testZero1023()
+ testZero1024()
+ testZero1025()
+ if failed {
+ panic("failed")
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file provides methods that let us export a Type as an ../ssa:Type.
+// We don't export this package's Type directly because it would lead
+// to an import cycle between this package and ../ssa.
+// TODO: move Type to its own package, then we don't need to dance around import cycles.
+
+package gc
+
+import (
+ "cmd/compile/internal/ssa"
+ "fmt"
+)
+
+func (t *Type) Size() int64 {
+ dowidth(t)
+ return t.Width
+}
+
+func (t *Type) Alignment() int64 {
+ dowidth(t)
+ return int64(t.Align)
+}
+
+func (t *Type) SimpleString() string {
+ return Econv(t.Etype)
+}
+
+func (t *Type) Equal(u ssa.Type) bool {
+ x, ok := u.(*Type)
+ if !ok {
+ return false
+ }
+ return Eqtype(t, x)
+}
+
+// Compare compares types for purposes of the SSA back
+// end, returning an ssa.Cmp (one of CMPlt, CMPeq, CMPgt).
+// The answers are correct for an optimizer
+// or code generator, but not for Go source.
+// For example, "type gcDrainFlags int" results in
+// two Go-different types that Compare equal.
+// The order chosen is also arbitrary, only division into
+// equivalence classes (Types that compare CMPeq) matters.
+func (t *Type) Compare(u ssa.Type) ssa.Cmp {
+ x, ok := u.(*Type)
+	// ssa.CompilerType is smaller than gc.Type, so any non-*Type
+	// argument sorts before t; for two *Types, bare pointer
+	// equality is an easy fast path.
+ if !ok {
+ return ssa.CMPgt
+ }
+ if x == t {
+ return ssa.CMPeq
+ }
+ return t.cmp(x)
+}
+
+func cmpForNe(x bool) ssa.Cmp {
+ if x {
+ return ssa.CMPlt
+ }
+ return ssa.CMPgt
+}
+
+func (r *Sym) cmpsym(s *Sym) ssa.Cmp {
+ if r == s {
+ return ssa.CMPeq
+ }
+ if r == nil {
+ return ssa.CMPlt
+ }
+ if s == nil {
+ return ssa.CMPgt
+ }
+ // Fast sort, not pretty sort
+ if len(r.Name) != len(s.Name) {
+ return cmpForNe(len(r.Name) < len(s.Name))
+ }
+ if r.Pkg != s.Pkg {
+ if len(r.Pkg.Prefix) != len(s.Pkg.Prefix) {
+ return cmpForNe(len(r.Pkg.Prefix) < len(s.Pkg.Prefix))
+ }
+ if r.Pkg.Prefix != s.Pkg.Prefix {
+ return cmpForNe(r.Pkg.Prefix < s.Pkg.Prefix)
+ }
+ }
+ if r.Name != s.Name {
+ return cmpForNe(r.Name < s.Name)
+ }
+ return ssa.CMPeq
+}
+
+// cmp compares two *Types t and x, returning ssa.CMPlt,
+// ssa.CMPeq, ssa.CMPgt as t<x, t==x, t>x, for an arbitrary
+// and optimizer-centric notion of comparison.
+func (t *Type) cmp(x *Type) ssa.Cmp {
+ // This follows the structure of Eqtype in subr.go
+ // with two exceptions.
+ // 1. Symbols are compared more carefully because a <,=,> result is desired.
+ // 2. Maps are treated specially to avoid endless recursion -- maps
+ // contain an internal data type not expressible in Go source code.
+ if t == x {
+ return ssa.CMPeq
+ }
+ if t == nil {
+ return ssa.CMPlt
+ }
+ if x == nil {
+ return ssa.CMPgt
+ }
+
+ if t.Etype != x.Etype {
+ return cmpForNe(t.Etype < x.Etype)
+ }
+
+ if t.Sym != nil || x.Sym != nil {
+ // Special case: we keep byte and uint8 separate
+ // for error messages. Treat them as equal.
+ switch t.Etype {
+ case TUINT8:
+ if (t == Types[TUINT8] || t == bytetype) && (x == Types[TUINT8] || x == bytetype) {
+ return ssa.CMPeq
+ }
+
+ case TINT32:
+ if (t == Types[runetype.Etype] || t == runetype) && (x == Types[runetype.Etype] || x == runetype) {
+ return ssa.CMPeq
+ }
+ }
+ }
+
+ csym := t.Sym.cmpsym(x.Sym)
+ if csym != ssa.CMPeq {
+ return csym
+ }
+
+ if x.Sym != nil {
+		// Both syms are non-nil; if the vargens match, the types are equal.
+ if t.Vargen == x.Vargen {
+ return ssa.CMPeq
+ }
+ if t.Vargen < x.Vargen {
+ return ssa.CMPlt
+ }
+ return ssa.CMPgt
+ }
+ // both syms nil, look at structure below.
+
+ switch t.Etype {
+ case TBOOL, TFLOAT32, TFLOAT64, TCOMPLEX64, TCOMPLEX128, TUNSAFEPTR, TUINTPTR,
+ TINT8, TINT16, TINT32, TINT64, TINT, TUINT8, TUINT16, TUINT32, TUINT64, TUINT:
+ return ssa.CMPeq
+ }
+
+ switch t.Etype {
+ case TMAP, TFIELD:
+ // No special cases for these two, they are handled
+ // by the general code after the switch.
+
+ case TPTR32, TPTR64:
+ return t.Type.cmp(x.Type)
+
+ case TSTRUCT:
+ if t.Map == nil {
+ if x.Map != nil {
+ return ssa.CMPlt // nil < non-nil
+ }
+			// both Maps are nil; fall through to the general case below
+ } else if x.Map == nil {
+ return ssa.CMPgt // nil > non-nil
+ } else if t.Map.Bucket == t {
+ // Both have non-nil Map
+ // Special case for Maps which include a recursive type where the recursion is not broken with a named type
+ if x.Map.Bucket != x {
+ return ssa.CMPlt // bucket maps are least
+ }
+ return t.Map.cmp(x.Map)
+ } // If t != t.Map.Bucket, fall through to general case
+
+ fallthrough
+ case TINTER:
+ t1 := t.Type
+ x1 := x.Type
+ for ; t1 != nil && x1 != nil; t1, x1 = t1.Down, x1.Down {
+ if t1.Embedded != x1.Embedded {
+ if t1.Embedded < x1.Embedded {
+ return ssa.CMPlt
+ }
+ return ssa.CMPgt
+ }
+ if t1.Note != x1.Note {
+ if t1.Note == nil {
+ return ssa.CMPlt
+ }
+ if x1.Note == nil {
+ return ssa.CMPgt
+ }
+ if *t1.Note != *x1.Note {
+ if *t1.Note < *x1.Note {
+ return ssa.CMPlt
+ }
+ return ssa.CMPgt
+ }
+ }
+ c := t1.Sym.cmpsym(x1.Sym)
+ if c != ssa.CMPeq {
+ return c
+ }
+ c = t1.Type.cmp(x1.Type)
+ if c != ssa.CMPeq {
+ return c
+ }
+ }
+ if t1 == x1 {
+ return ssa.CMPeq
+ }
+ if t1 == nil {
+ return ssa.CMPlt
+ }
+ return ssa.CMPgt
+
+ case TFUNC:
+ t1 := t.Type
+ t2 := x.Type
+ for ; t1 != nil && t2 != nil; t1, t2 = t1.Down, t2.Down {
+ // Loop over fields in structs, ignoring argument names.
+ ta := t1.Type
+ tb := t2.Type
+ for ; ta != nil && tb != nil; ta, tb = ta.Down, tb.Down {
+ if ta.Isddd != tb.Isddd {
+ if ta.Isddd {
+ return ssa.CMPgt
+ }
+ return ssa.CMPlt
+ }
+ c := ta.Type.cmp(tb.Type)
+ if c != ssa.CMPeq {
+ return c
+ }
+ }
+
+ if ta != tb {
+			if ta == nil {
+ return ssa.CMPlt
+ }
+ return ssa.CMPgt
+ }
+ }
+ if t1 != t2 {
+ if t1 == nil {
+ return ssa.CMPlt
+ }
+ return ssa.CMPgt
+ }
+ return ssa.CMPeq
+
+ case TARRAY:
+ if t.Bound != x.Bound {
+ return cmpForNe(t.Bound < x.Bound)
+ }
+
+ case TCHAN:
+ if t.Chan != x.Chan {
+ return cmpForNe(t.Chan < x.Chan)
+ }
+
+ default:
+ e := fmt.Sprintf("Do not know how to compare %s with %s", t, x)
+ panic(e)
+ }
+
+ c := t.Down.cmp(x.Down)
+ if c != ssa.CMPeq {
+ return c
+ }
+ return t.Type.cmp(x.Type)
+}
+
+func (t *Type) IsBoolean() bool {
+ return t.Etype == TBOOL
+}
+
+func (t *Type) IsInteger() bool {
+ switch t.Etype {
+ case TINT8, TUINT8, TINT16, TUINT16, TINT32, TUINT32, TINT64, TUINT64, TINT, TUINT, TUINTPTR:
+ return true
+ }
+ return false
+}
+
+func (t *Type) IsSigned() bool {
+ switch t.Etype {
+ case TINT8, TINT16, TINT32, TINT64, TINT:
+ return true
+ }
+ return false
+}
+
+func (t *Type) IsFloat() bool {
+ return t.Etype == TFLOAT32 || t.Etype == TFLOAT64
+}
+
+func (t *Type) IsComplex() bool {
+ return t.Etype == TCOMPLEX64 || t.Etype == TCOMPLEX128
+}
+
+func (t *Type) IsPtr() bool {
+ return t.Etype == TPTR32 || t.Etype == TPTR64 || t.Etype == TUNSAFEPTR ||
+ t.Etype == TMAP || t.Etype == TCHAN || t.Etype == TFUNC
+}
+
+func (t *Type) IsString() bool {
+ return t.Etype == TSTRING
+}
+
+func (t *Type) IsMap() bool {
+ return t.Etype == TMAP
+}
+
+func (t *Type) IsChan() bool {
+ return t.Etype == TCHAN
+}
+
+func (t *Type) IsSlice() bool {
+ return t.Etype == TARRAY && t.Bound < 0
+}
+
+func (t *Type) IsArray() bool {
+ return t.Etype == TARRAY && t.Bound >= 0
+}
+
+func (t *Type) IsStruct() bool {
+ return t.Etype == TSTRUCT
+}
+
+func (t *Type) IsInterface() bool {
+ return t.Etype == TINTER
+}
+
+func (t *Type) Elem() ssa.Type {
+ return t.Type
+}
+func (t *Type) PtrTo() ssa.Type {
+ return Ptrto(t)
+}
+
+func (t *Type) NumFields() int64 {
+ return int64(countfield(t))
+}
+func (t *Type) FieldType(i int64) ssa.Type {
+ // TODO: store fields in a slice so we can
+ // look them up by index in constant time.
+ for t1 := t.Type; t1 != nil; t1 = t1.Down {
+ if t1.Etype != TFIELD {
+ panic("non-TFIELD in a TSTRUCT")
+ }
+ if i == 0 {
+ return t1.Type
+ }
+ i--
+ }
+ panic("not enough fields")
+}
+func (t *Type) FieldOff(i int64) int64 {
+ for t1 := t.Type; t1 != nil; t1 = t1.Down {
+ if t1.Etype != TFIELD {
+ panic("non-TFIELD in a TSTRUCT")
+ }
+ if i == 0 {
+ return t1.Width
+ }
+ i--
+ }
+ panic("not enough fields")
+}
+
+func (t *Type) NumElem() int64 {
+ if t.Etype != TARRAY {
+ panic("NumElem on non-TARRAY")
+ }
+ return int64(t.Bound)
+}
+
+func (t *Type) IsMemory() bool { return false }
+func (t *Type) IsFlags() bool { return false }
+func (t *Type) IsVoid() bool { return false }
}
}
+func typecheckslice(l []*Node, top int) {
+ for i := range l {
+ typecheck(&l[i], top)
+ }
+}
+
var _typekind = []string{
TINT: "int",
TUINT: "uint",
return true
}
- return callrecv(n.Left) || callrecv(n.Right) || callrecvlist(n.Ninit) || callrecvlist(n.Nbody) || callrecvlist(n.List) || callrecvlist(n.Rlist)
+ return callrecv(n.Left) || callrecv(n.Right) || callrecvlist(n.Ninit) || callrecvslice(n.Nbody.Slice()) || callrecvlist(n.List) || callrecvlist(n.Rlist)
}
func callrecvlist(l *NodeList) bool {
return false
}
+func callrecvslice(l []*Node) bool {
+ for _, n := range l {
+ if callrecv(n) {
+ return true
+ }
+ }
+ return false
+}
+
// indexlit implements typechecking of untyped values as
// array/slice indexes. It is equivalent to defaultlit
// except for constants of numerical kind, which are acceptable
}
// ideal mixed with non-ideal
- defaultlit2(&l, &r, 0)
+ defaultlit2(&l, &r, false)
n.Left = l
n.Right = r
}
if t.Etype != TIDEAL && !Eqtype(l.Type, r.Type) {
- defaultlit2(&l, &r, 1)
+ defaultlit2(&l, &r, true)
if n.Op == OASOP && n.Implicit {
Yyerror("invalid operation: %v (non-numeric type %v)", n, l.Type)
n.Type = nil
evconst(n)
t = idealbool
if n.Op != OLITERAL {
- defaultlit2(&l, &r, 1)
+ defaultlit2(&l, &r, true)
n.Left = l
n.Right = r
}
n.Type = n.Right.Type
n.Right = nil
if n.Type == nil {
- n.Type = nil
return
}
}
n.Type = nil
return
}
- defaultlit2(&l, &r, 0)
+ defaultlit2(&l, &r, false)
if l.Type == nil || r.Type == nil {
n.Type = nil
return
}
}
typecheck(&n.Right, Etop)
- typechecklist(n.Nbody, Etop)
+ typecheckslice(n.Nbody.Slice(), Etop)
decldepth--
break OpSwitch
Yyerror("non-bool %v used as if condition", Nconv(n.Left, obj.FmtLong))
}
}
- typechecklist(n.Nbody, Etop)
+ typecheckslice(n.Nbody.Slice(), Etop)
typechecklist(n.Rlist, Etop)
break OpSwitch
case OXCASE:
ok |= Etop
typechecklist(n.List, Erv)
- typechecklist(n.Nbody, Etop)
+ typecheckslice(n.Nbody.Slice(), Etop)
break OpSwitch
case ODCLFUNC:
cmp.Right = a.Left
evconst(&cmp)
if cmp.Op == OLITERAL {
- // Sometimes evconst fails. See issue 12536.
+ // Sometimes evconst fails. See issue 12536.
b = cmp.Val().U.(bool)
}
}
Yyerror("implicit assignment of unexported field '%s' in %v literal", s.Name, t)
}
- // No pushtype allowed here. Must name fields for that.
+ // No pushtype allowed here. Must name fields for that.
ll.N = assignconv(ll.N, f.Type, "field value")
ll.N = Nod(OKEY, newname(f.Sym), ll.N)
}
// Sym might have resolved to name in other top-level
- // package, because of import dot. Redirect to correct sym
+ // package, because of import dot. Redirect to correct sym
// before we do the lookup.
if s.Pkg != localpkg && exportname(s.Name) {
s1 = Lookup(s.Name)
fielddup(newname(s), hash)
r = l.Right
- // No pushtype allowed here. Tried and rejected.
+ // No pushtype allowed here. Tried and rejected.
typecheck(&r, Erv)
l.Right = assignconv(r, f.Type, "field value")
addmethod(n.Func.Shortname.Sym, t, true, n.Func.Nname.Nointerface)
}
- for l := n.Func.Dcl; l != nil; l = l.Next {
- if l.N.Op == ONAME && (l.N.Class == PPARAM || l.N.Class == PPARAMOUT) {
- l.N.Name.Decldepth = 1
+ for _, ln := range n.Func.Dcl {
+ if ln.Op == ONAME && (ln.Class == PPARAM || ln.Class == PPARAMOUT) {
+ ln.Name.Decldepth = 1
}
}
}
// }
// then even though I.M looks like it doesn't care about the
// value of its argument, a specific implementation of I may
- // care. The _ would suppress the assignment to that argument
+ // care. The _ would suppress the assignment to that argument
// while generating a call, so remove it.
for t := getinargx(nt.Type).Type; t != nil; t = t.Down {
if t.Sym != nil && t.Sym.Name == "_" {
markbreak(n.Right, implicit)
markbreaklist(n.Ninit, implicit)
- markbreaklist(n.Nbody, implicit)
+ markbreakslice(n.Nbody.Slice(), implicit)
markbreaklist(n.List, implicit)
markbreaklist(n.Rlist, implicit)
}
}
}
-func isterminating(l *NodeList, top int) bool {
- if l == nil {
- return false
- }
- if top != 0 {
- for l.Next != nil && l.N.Op != OLABEL {
- l = l.Next
+func markbreakslice(l []*Node, implicit *Node) {
+ for i := 0; i < len(l); i++ {
+ n := l[i]
+ if n.Op == OLABEL && i+1 < len(l) && n.Name.Defn == l[i+1] {
+ switch n.Name.Defn.Op {
+ case OFOR, OSWITCH, OTYPESW, OSELECT, ORANGE:
+ lab := new(Label)
+ lab.Def = n.Name.Defn
+ n.Left.Sym.Label = lab
+ markbreak(n.Name.Defn, n.Name.Defn)
+ n.Left.Sym.Label = nil
+ i++
+ continue
+ }
}
- markbreaklist(l, nil)
+
+ markbreak(n, implicit)
}
+}
+
+// isterminating returns whether the NodeList l ends with a
+// terminating statement.
+func (l *NodeList) isterminating() bool {
+ if l == nil {
+ return false
+ }
for l.Next != nil {
l = l.Next
}
- n := l.N
+ return l.N.isterminating()
+}
- if n == nil {
+
+// isterminating returns whether the Nodes list ends with a
+// terminating statement.
+func (l Nodes) isterminating() bool {
+ c := len(l.Slice())
+ if c == 0 {
return false
}
+ return l.Slice()[c-1].isterminating()
+}
+
+// isterminating returns whether the node n, the last one in a
+// statement list, is a terminating statement.
+func (n *Node) isterminating() bool {
switch n.Op {
// NOTE: OLABEL is treated as a separate statement,
// not a separate prefix, so skipping to the last statement
// skipping over the label. No case OLABEL here.
case OBLOCK:
- return isterminating(n.List, 0)
+ return n.List.isterminating()
case OGOTO,
ORETURN,
return true
case OIF:
- return isterminating(n.Nbody, 0) && isterminating(n.Rlist, 0)
+ return n.Nbody.isterminating() && n.Rlist.isterminating()
case OSWITCH, OTYPESW, OSELECT:
if n.Hasbreak {
return false
}
def := 0
- for l = n.List; l != nil; l = l.Next {
- if !isterminating(l.N.Nbody, 0) {
+ for l := n.List; l != nil; l = l.Next {
+ if !l.N.Nbody.isterminating() {
return false
}
if l.N.List == nil { // default
}
func checkreturn(fn *Node) {
- if fn.Type.Outtuple != 0 && fn.Nbody != nil {
- if !isterminating(fn.Nbody, 1) {
+ if fn.Type.Outtuple != 0 && len(fn.Nbody.Slice()) != 0 {
+ markbreakslice(fn.Nbody.Slice(), nil)
+ if !fn.Nbody.isterminating() {
yyerrorl(int(fn.Func.Endlineno), "missing return at end of function")
}
}
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
package gc
import (
if Debug['W'] != 0 {
s := fmt.Sprintf("\nbefore %v", Curfn.Func.Nname.Sym)
- dumplist(s, Curfn.Nbody)
+ dumpslice(s, Curfn.Nbody.Slice())
}
lno := int(lineno)
// Final typecheck for any unused variables.
// It's hard to be on the heap when not-used, but best to be consistent about &~PHEAP here and below.
- for l := fn.Func.Dcl; l != nil; l = l.Next {
- if l.N.Op == ONAME && l.N.Class&^PHEAP == PAUTO {
- typecheck(&l.N, Erv|Easgn)
+ for i, ln := range fn.Func.Dcl {
+ if ln.Op == ONAME && ln.Class&^PHEAP == PAUTO {
+ typecheck(&ln, Erv|Easgn)
+ fn.Func.Dcl[i] = ln
}
}
	// Propagate the used flag for typeswitch variables up to the NONAME in its definition.
- for l := fn.Func.Dcl; l != nil; l = l.Next {
- if l.N.Op == ONAME && l.N.Class&^PHEAP == PAUTO && l.N.Name.Defn != nil && l.N.Name.Defn.Op == OTYPESW && l.N.Used {
- l.N.Name.Defn.Left.Used = true
+ for _, ln := range fn.Func.Dcl {
+ if ln.Op == ONAME && ln.Class&^PHEAP == PAUTO && ln.Name.Defn != nil && ln.Name.Defn.Op == OTYPESW && ln.Used {
+ ln.Name.Defn.Left.Used = true
}
}
- for l := fn.Func.Dcl; l != nil; l = l.Next {
- if l.N.Op != ONAME || l.N.Class&^PHEAP != PAUTO || l.N.Sym.Name[0] == '&' || l.N.Used {
+ for _, ln := range fn.Func.Dcl {
+ if ln.Op != ONAME || ln.Class&^PHEAP != PAUTO || ln.Sym.Name[0] == '&' || ln.Used {
continue
}
- if defn := l.N.Name.Defn; defn != nil && defn.Op == OTYPESW {
+ if defn := ln.Name.Defn; defn != nil && defn.Op == OTYPESW {
if defn.Left.Used {
continue
}
lineno = defn.Left.Lineno
- Yyerror("%v declared and not used", l.N.Sym)
+ Yyerror("%v declared and not used", ln.Sym)
defn.Left.Used = true // suppress repeats
} else {
- lineno = l.N.Lineno
- Yyerror("%v declared and not used", l.N.Sym)
+ lineno = ln.Lineno
+ Yyerror("%v declared and not used", ln.Sym)
}
}
if nerrors != 0 {
return
}
- walkstmtlist(Curfn.Nbody)
+ walkstmtslice(Curfn.Nbody.Slice())
if Debug['W'] != 0 {
s := fmt.Sprintf("after walk %v", Curfn.Func.Nname.Sym)
- dumplist(s, Curfn.Nbody)
+ dumpslice(s, Curfn.Nbody.Slice())
}
heapmoves()
- if Debug['W'] != 0 && Curfn.Func.Enter != nil {
+ if Debug['W'] != 0 && len(Curfn.Func.Enter.Slice()) > 0 {
s := fmt.Sprintf("enter %v", Curfn.Func.Nname.Sym)
- dumplist(s, Curfn.Func.Enter)
+ dumpslice(s, Curfn.Func.Enter.Slice())
}
}
}
}
+func walkstmtslice(l []*Node) {
+ for i := range l {
+ walkstmt(&l[i])
+ }
+}
+
func samelist(a *NodeList, b *NodeList) bool {
for ; a != nil && b != nil; a, b = a.Next, b.Next {
if a.N != b.N {
}
func paramoutheap(fn *Node) bool {
- for l := fn.Func.Dcl; l != nil; l = l.Next {
- switch l.N.Class {
+ for _, ln := range fn.Func.Dcl {
+ switch ln.Class {
case PPARAMOUT,
PPARAMOUT | PHEAP:
- return l.N.Addrtaken
+ return ln.Addrtaken
// stop early - parameters are over
case PAUTO,
}
walkstmt(&n.Right)
- walkstmtlist(n.Nbody)
+ walkstmtslice(n.Nbody.Slice())
case OIF:
walkexpr(&n.Left, &n.Ninit)
- walkstmtlist(n.Nbody)
+ walkstmtslice(n.Nbody.Slice())
walkstmtlist(n.Rlist)
case OPROC:
var rl *NodeList
var cl Class
- for ll := Curfn.Func.Dcl; ll != nil; ll = ll.Next {
- cl = ll.N.Class &^ PHEAP
+ for _, ln := range Curfn.Func.Dcl {
+ cl = ln.Class &^ PHEAP
if cl == PAUTO {
break
}
if cl == PPARAMOUT {
- rl = list(rl, ll.N)
+ rl = list(rl, ln)
}
}
ll := ascompatee(n.Op, rl, n.List, &n.Ninit)
n.List = reorder3(ll)
for lr := n.List; lr != nil; lr = lr.Next {
- lr.N = applywritebarrier(lr.N, &n.Ninit)
+ lr.N = applywritebarrier(lr.N)
}
break
}
// transformclosure already did all preparation work.
// Prepend captured variables to argument list.
- n.List = concat(n.Left.Func.Enter, n.List)
+ n.List = concat(n.Left.Func.Enter.NodeList(), n.List)
- n.Left.Func.Enter = nil
+ n.Left.Func.Enter.Set(nil)
// Replace OCLOSURE with ONAME/PFUNC.
n.Left = n.Left.Func.Closure.Func.Nname
r := convas(Nod(OAS, n.Left, n.Right), init)
r.Dodata = n.Dodata
n = r
- n = applywritebarrier(n, init)
+ n = applywritebarrier(n)
}
case OAS2:
ll := ascompatee(OAS, n.List, n.Rlist, init)
ll = reorder3(ll)
for lr := ll; lr != nil; lr = lr.Next {
- lr.N = applywritebarrier(lr.N, init)
+ lr.N = applywritebarrier(lr.N)
}
n = liststmt(ll)
ll := ascompatet(n.Op, n.List, &r.Type, 0, init)
for lr := ll; lr != nil; lr = lr.Next {
- lr.N = applywritebarrier(lr.N, init)
+ lr.N = applywritebarrier(lr.N)
}
n = liststmt(concat(list1(r), ll))
n2 := Nod(OIF, nil, nil)
n2.Left = Nod(OEQ, l, nodnil())
- n2.Nbody = list1(Nod(OAS, l, n1))
+ n2.Nbody.Set([]*Node{Nod(OAS, l, n1)})
n2.Likely = -1
typecheck(&n2, Etop)
*init = list(*init, n2)
// TODO(rsc): Perhaps componentgen should run before this.
-func applywritebarrier(n *Node, init **NodeList) *Node {
+func applywritebarrier(n *Node) *Node {
if n.Left != nil && n.Right != nil && needwritebarrier(n.Left, n.Right) {
if Debug_wb > 1 {
Warnl(int(n.Lineno), "marking %v for barrier", Nconv(n.Left, 0))
// walk through argin parameters.
// generate and return code to allocate
// copies of escaped parameters to the heap.
-func paramstoheap(argin **Type, out int) *NodeList {
+func paramstoheap(argin **Type, out int) []*Node {
var savet Iter
var v *Node
var as *Node
- var nn *NodeList
+ var nn []*Node
for t := Structfirst(&savet, argin); t != nil; t = structnext(&savet) {
v = t.Nname
if v != nil && v.Sym != nil && v.Sym.Name[0] == '~' && v.Sym.Name[1] == 'r' { // unnamed result
// Defer might stop a panic and show the
// return values as they exist at the time of panic.
// Make sure to zero them on entry to the function.
- nn = list(nn, Nod(OAS, nodarg(t, 1), nil))
+ nn = append(nn, Nod(OAS, nodarg(t, -1), nil))
}
if v == nil || v.Class&PHEAP == 0 {
if prealloc[v] == nil {
prealloc[v] = callnew(v.Type)
}
- nn = list(nn, Nod(OAS, v.Name.Heapaddr, prealloc[v]))
+ nn = append(nn, Nod(OAS, v.Name.Heapaddr, prealloc[v]))
if v.Class&^PHEAP != PPARAMOUT {
as = Nod(OAS, v, v.Name.Param.Stackparam)
v.Name.Param.Stackparam.Typecheck = 1
typecheck(&as, Etop)
- as = applywritebarrier(as, &nn)
- nn = list(nn, as)
+ as = applywritebarrier(as)
+ nn = append(nn, as)
}
}
}
// walk through argout parameters copying back to stack
-func returnsfromheap(argin **Type) *NodeList {
+func returnsfromheap(argin **Type) []*Node {
var savet Iter
var v *Node
- var nn *NodeList
+ var nn []*Node
for t := Structfirst(&savet, argin); t != nil; t = structnext(&savet) {
v = t.Nname
if v == nil || v.Class != PHEAP|PPARAMOUT {
continue
}
- nn = list(nn, Nod(OAS, v.Name.Param.Stackparam, v))
+ nn = append(nn, Nod(OAS, v.Name.Param.Stackparam, v))
}
return nn
lno := lineno
lineno = Curfn.Lineno
nn := paramstoheap(getthis(Curfn.Type), 0)
- nn = concat(nn, paramstoheap(getinarg(Curfn.Type), 0))
- nn = concat(nn, paramstoheap(Getoutarg(Curfn.Type), 1))
- Curfn.Func.Enter = concat(Curfn.Func.Enter, nn)
+ nn = append(nn, paramstoheap(getinarg(Curfn.Type), 0)...)
+ nn = append(nn, paramstoheap(Getoutarg(Curfn.Type), 1)...)
+ Curfn.Func.Enter.Append(nn...)
lineno = Curfn.Func.Endlineno
- Curfn.Func.Exit = returnsfromheap(Getoutarg(Curfn.Type))
+ Curfn.Func.Exit.Append(returnsfromheap(Getoutarg(Curfn.Type))...)
lineno = lno
}
if n.Esc == EscNone {
sz := int64(0)
for l := n.List; l != nil; l = l.Next {
- if n.Op == OLITERAL {
- sz += int64(len(n.Val().U.(string)))
+ if l.N.Op == OLITERAL {
+ sz += int64(len(l.N.Val().U.(string)))
}
}
// walkexprlistsafe will leave OINDEX (s[n]) alone if both s
// and n are name or literal, but those may index the slice we're
- // modifying here. Fix explicitly.
+ // modifying here. Fix explicitly.
for l := n.List; l != nil; l = l.Next {
l.N = cheapexpr(l.N, init)
}
substArgTypes(fn, s.Type.Type, s.Type.Type)
// s = growslice_n(T, s, n)
- nif.Nbody = list1(Nod(OAS, s, mkcall1(fn, s.Type, &nif.Ninit, typename(s.Type), s, nt)))
+ nif.Nbody.Set([]*Node{Nod(OAS, s, mkcall1(fn, s.Type, &nif.Ninit, typename(s.Type), s, nt))})
l = list(l, nif)
// walkexprlistsafe will leave OINDEX (s[n]) alone if both s
// and n are name or literal, but those may index the slice we're
- // modifying here. Fix explicitly.
+ // modifying here. Fix explicitly.
// Using cheapexpr also makes sure that the evaluation
// of all arguments (and especially any panics) happen
// before we begin to modify the slice in a visible way.
fn := syslook("growslice", 1) // growslice(<type>, old []T, mincap int) (ret []T)
substArgTypes(fn, ns.Type.Type, ns.Type.Type)
- nx.Nbody = list1(Nod(OAS, ns, mkcall1(fn, ns.Type, &nx.Ninit, typename(ns.Type), ns, Nod(OADD, Nod(OLEN, ns, nil), na))))
+ nx.Nbody.Set([]*Node{Nod(OAS, ns, mkcall1(fn, ns.Type, &nx.Ninit, typename(ns.Type), ns, Nod(OADD, Nod(OLEN, ns, nil), na)))})
l = list(l, nx)
nif := Nod(OIF, nil, nil)
nif.Left = Nod(OGT, nlen, Nod(OLEN, nr, nil))
- nif.Nbody = list(nif.Nbody, Nod(OAS, nlen, Nod(OLEN, nr, nil)))
+ nif.Nbody.Append(Nod(OAS, nlen, Nod(OLEN, nr, nil)))
l = list(l, nif)
// Call memmove.
return
}
+ if t.Etype == TARRAY {
+ // Zero- or single-element array, of any type.
+ switch t.Bound {
+ case 0:
+ finishcompare(np, n, Nodbool(n.Op == OEQ), init)
+ return
+ case 1:
+ l0 := Nod(OINDEX, l, Nodintconst(0))
+ r0 := Nod(OINDEX, r, Nodintconst(0))
+ a := Nod(n.Op, l0, r0)
+ finishcompare(np, n, a, init)
+ return
+ }
+ }
+
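The folded cases above mirror Go's language-level array comparison semantics: a `[0]T` comparison is a constant, and a `[1]T` comparison reduces to comparing the single element. A quick standalone check of that behavior (ordinary Go, not compiler internals):

```go
package main

import "fmt"

func main() {
	// Zero-element arrays of a comparable type always compare equal:
	// there are no elements that could differ.
	var a, b [0]int
	fmt.Println(a == b)

	// One-element arrays compare as their single element.
	x := [1]string{"go"}
	y := [1]string{"go"}
	z := [1]string{"gopher"}
	fmt.Println(x == y, x == z)
}
```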
if t.Etype == TSTRUCT && countfield(t) <= 4 {
// Struct of four or fewer fields.
// Inline comparisons.
return
}
- // Chose not to inline.  Call equality function directly.
+ // Chose not to inline. Call equality function directly.
var needsize int
call := Nod(OCALL, eqfor(t, &needsize), nil)
return true
}
+func candiscardslice(l []*Node) bool {
+ for _, n := range l {
+ if !candiscard(n) {
+ return false
+ }
+ }
+ return true
+}
+
func candiscard(n *Node) bool {
if n == nil {
return true
return false
}
- if !candiscard(n.Left) || !candiscard(n.Right) || !candiscardlist(n.Ninit) || !candiscardlist(n.Nbody) || !candiscardlist(n.List) || !candiscardlist(n.Rlist) {
+ if !candiscard(n.Left) || !candiscard(n.Right) || !candiscardlist(n.Ninit) || !candiscardslice(n.Nbody.Slice()) || !candiscardlist(n.List) || !candiscardlist(n.Rlist) {
return false
}
typecheck(&a, Etop)
walkstmt(&a)
- fn.Nbody = list1(a)
+ fn.Nbody.Set([]*Node{a})
funcbody(fn)
typecheck(&fn, Etop)
- typechecklist(fn.Nbody, Etop)
+ typecheckslice(fn.Nbody.Slice(), Etop)
xtop = list(xtop, fn)
Curfn = oldfn
// TODO: Instead of generating ADDV $-8,R8; ADDV
// $-8,R7; n*(MOVV 8(R8),R9; ADDV $8,R8; MOVV R9,8(R7);
// ADDV $8,R7;) just generate the offsets directly and
- // eliminate the ADDs.  That will produce shorter, more
+ // eliminate the ADDs. That will produce shorter, more
// pipeline-able code.
var p *obj.Prog
for ; c > 0; c-- {
)
func defframe(ptxt *obj.Prog) {
- var n *gc.Node
-
// fill in argument size, stack size
ptxt.To.Type = obj.TYPE_TEXTSIZE
lo := hi
// iterate through declarations - they are sorted in decreasing xoffset order.
- for l := gc.Curfn.Func.Dcl; l != nil; l = l.Next {
- n = l.N
+ for _, n := range gc.Curfn.Func.Dcl {
if !n.Name.Needzero {
continue
}
// distinguish between moves that moves that *must*
// sign/zero extend and moves that don't care so they
// can eliminate moves that don't care without
- // breaking moves that do care.  This might let us
+ // breaking moves that do care. This might let us
// simplify or remove the next peep loop, too.
if p.As == mips.AMOVV || p.As == mips.AMOVF || p.As == mips.AMOVD {
if regtyp(&p.To) {
// copyas returns 1 if a and v address the same register.
//
// If a is the from operand, this means this operation reads the
-// register in v.  If a is the to operand, this means this operation
+// register in v. If a is the to operand, this means this operation
// writes the register in v.
func copyas(a *obj.Addr, v *obj.Addr) bool {
if regtyp(v) {
// same register as v.
//
// If a is the from operand, this means this operation reads the
-// register in v.  If a is the to operand, this means the operation
+// register in v. If a is the to operand, this means the operation
// either reads or writes the register in v (if !copyas(a, v), then
// the operation reads the register in v).
func copyau(a *obj.Addr, v *obj.Addr) bool {
-// Copyright 2014 The Go Authors.  All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// TODO(austin): Instead of generating ADD $-8,R8; ADD
// $-8,R7; n*(MOVDU 8(R8),R9; MOVDU R9,8(R7);) just
// generate the offsets directly and eliminate the
- // ADDs.  That will produce shorter, more
+ // ADDs. That will produce shorter, more
// pipeline-able code.
var p *obj.Prog
for ; c > 0; c-- {
)
func defframe(ptxt *obj.Prog) {
- var n *gc.Node
-
// fill in argument size, stack size
ptxt.To.Type = obj.TYPE_TEXTSIZE
lo := hi
// iterate through declarations - they are sorted in decreasing xoffset order.
- for l := gc.Curfn.Func.Dcl; l != nil; l = l.Next {
- n = l.N
+ for _, n := range gc.Curfn.Func.Dcl {
if !n.Name.Needzero {
continue
}
// The hardware will generate undefined result.
// Also need to explicitly trap on division on zero,
// the hardware will silently generate undefined result.
- // DIVW will leave unpredicable result in higher 32-bit,
+ // DIVW will leave unpredictable result in higher 32-bit,
// so always use DIVD/DIVDU.
t := nl.Type
ppc64.REGZERO,
ppc64.REGSP, // reserved for SP
// We need to preserve the C ABI TLS pointer because sigtramp
- // may happen during C code and needs to access the g.  C
+ // may happen during C code and needs to access the g. C
// clobbers REGG, so if Go were to clobber REGTLS, sigtramp
- // won't know which convention to use.  By preserving REGTLS,
+ // won't know which convention to use. By preserving REGTLS,
// we can just retrieve g from TLS when we aren't sure.
ppc64.REGTLS,
-// Copyright 2014 The Go Authors.  All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ppc64
// Many Power ISA arithmetic and logical instructions come in four
-// standard variants.  These bits let us map between variants.
+// standard variants. These bits let us map between variants.
const (
V_CC = 1 << 0 // xCC (affect CR field 0 flags)
V_V = 1 << 1 // xV (affect SO and OV flags)
// distinguish between moves that moves that *must*
// sign/zero extend and moves that don't care so they
// can eliminate moves that don't care without
- // breaking moves that do care.  This might let us
+ // breaking moves that do care. This might let us
// simplify or remove the next peep loop, too.
if p.As == ppc64.AMOVD || p.As == ppc64.AFMOVD {
if regtyp(&p.To) {
// copyas returns 1 if a and v address the same register.
//
// If a is the from operand, this means this operation reads the
-// register in v.  If a is the to operand, this means this operation
+// register in v. If a is the to operand, this means this operation
// writes the register in v.
func copyas(a *obj.Addr, v *obj.Addr) bool {
if regtyp(v) {
// same register as v.
//
// If a is the from operand, this means this operation reads the
-// register in v.  If a is the to operand, this means the operation
+// register in v. If a is the to operand, this means the operation
// either reads or writes the register in v (if !copyas(a, v), then
// the operation reads the register in v).
func copyau(a *obj.Addr, v *obj.Addr) bool {
-// Copyright 2014 The Go Authors.  All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
}
-// Instruction variants table.  Initially this contains entries only
-// for the "base" form of each instruction.  On the first call to
+// Instruction variants table. Initially this contains entries only
+// for the "base" form of each instruction. On the first call to
// as2variant or variant2as, we'll add the variants to the table.
var varianttable = [ppc64.ALAST][4]int{
ppc64.AADD: {ppc64.AADD, ppc64.AADDCC, ppc64.AADDV, ppc64.AADDVCC},
--- /dev/null
+This is a list of things that need to be worked on. It will hopefully
+be complete soon.
+
+Correctness
+-----------
+- Debugging info (check & fix as much as we can)
+
+Optimizations (better compiled code)
+------------------------------------
+- Reduce register pressure in scheduler
+- More strength reduction: multiply -> shift/add combos (Worth doing?)
+- Add a value range propagation pass (for bounds elim & bitwidth reduction)
+- Make dead store pass inter-block
+- If there are a lot of MOVQ $0, ..., then load
+ 0 into a register and use the register as the source instead.
+- Allow arrays of length 1 (or longer, with all constant indexes?) to be SSAable.
+- If strings are being passed around without being interpreted (ptr
+ and len fields being accessed) pass them in xmm registers?
+ Same for interfaces?
+- Non-constant rotate detection.
+- Do 0 <= x && x < n with one unsigned compare
+- nil-check removal in indexed load/store case:
+ lea (%rdx,%rax,1),%rcx
+ test %al,(%rcx) // nil check
+ mov (%rdx,%rax,1),%cl // load to same address
+- any pointer generated by unsafe arithmetic must be non-nil?
+ (Of course that may not be true in general, but it is for all uses
+ in the runtime, and we can play games with unsafe.)
+
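One item above, "Do 0 <= x && x < n with one unsigned compare", can be sketched in ordinary Go: for signed x and non-negative n, converting both sides to unsigned folds the two signed comparisons into one, because a negative x wraps to a value larger than any valid n.

```go
package main

import "fmt"

// boundsCheckSigned is the form as written in source: two comparisons.
func boundsCheckSigned(x, n int) bool { return 0 <= x && x < n }

// boundsCheckUnsigned is the optimized form: one unsigned comparison.
// Correct whenever n >= 0, since a negative x converts to a huge uint.
func boundsCheckUnsigned(x, n int) bool { return uint(x) < uint(n) }

func main() {
	for _, tc := range [][2]int{{-1, 5}, {0, 5}, {4, 5}, {5, 5}} {
		fmt.Println(boundsCheckSigned(tc[0], tc[1]) == boundsCheckUnsigned(tc[0], tc[1]))
	}
}
```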
+Optimizations (better compiler)
+-------------------------------
+- OpStore uses 3 args. Increase the size of Value.argstorage to 3?
+- Use a constant cache for OpConstNil, OpConstInterface, OpConstSlice, maybe OpConstString
+- Handle signed division overflow and sign extension earlier
+
+Regalloc
+--------
+- Make less arch-dependent
+- Handle 2-address instructions
+- Make liveness analysis non-quadratic
+
+Future/other
+------------
+- Start another architecture (arm?)
+- 64-bit ops on 32-bit machines
+- Investigate type equality. During SSA generation, should we use n.Type or (say) TypeBool?
+- Should we get rid of named types in favor of underlying types during SSA generation?
+- Should we introduce a new type equality routine that is less strict than the frontend's?
+- Infrastructure for enabling/disabling/configuring passes
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import "fmt"
+
+// Block represents a basic block in the control flow graph of a function.
+type Block struct {
+ // A unique identifier for the block. The system will attempt to allocate
+ // these IDs densely, but no guarantees.
+ ID ID
+
+ // The kind of block this is.
+ Kind BlockKind
+
+ // Subsequent blocks, if any. The number and order depend on the block kind.
+ // All successors must be distinct (to make phi values in successors unambiguous).
+ Succs []*Block
+
+ // Inverse of successors.
+ // The order is significant to Phi nodes in the block.
+ Preds []*Block
+ // TODO: predecessors is a pain to maintain. Can we somehow order phi
+ // arguments by block id and have this field computed explicitly when needed?
+
+ // A value that determines how the block is exited. Its value depends on the kind
+ // of the block. For instance, a BlockIf has a boolean control value and BlockExit
+ // has a memory control value.
+ Control *Value
+
+ // Auxiliary info for the block. Its value depends on the Kind.
+ Aux interface{}
+
+ // The unordered set of Values that define the operation of this block.
+ // The list must include the control value, if any. (TODO: need this last condition?)
+ // After the scheduling pass, this list is ordered.
+ Values []*Value
+
+ // The containing function
+ Func *Func
+
+ // Line number for block's control operation
+ Line int32
+
+ // Likely direction for branches.
+ // If BranchLikely, Succs[0] is the most likely branch taken.
+ // If BranchUnlikely, Succs[1] is the most likely branch taken.
+ // Ignored if len(Succs) < 2.
+ // Fatal if not BranchUnknown and len(Succs) > 2.
+ Likely BranchPrediction
+
+ // After flagalloc, records whether flags are live at the end of the block.
+ FlagsLiveAtEnd bool
+
+ // Storage for Succs, Preds, and Values
+ succstorage [2]*Block
+ predstorage [4]*Block
+ valstorage [8]*Value
+}
+
+// kind control successors
+// ------------------------------------------
+// Exit return mem []
+// Plain nil [next]
+// If a boolean Value [then, else]
+// Call mem [nopanic, panic] (control opcode should be OpCall or OpStaticCall)
+type BlockKind int32
+
+// short form print
+func (b *Block) String() string {
+ return fmt.Sprintf("b%d", b.ID)
+}
+
+// long form print
+func (b *Block) LongString() string {
+ s := b.Kind.String()
+ if b.Aux != nil {
+ s += fmt.Sprintf(" %s", b.Aux)
+ }
+ if b.Control != nil {
+ s += fmt.Sprintf(" %s", b.Control)
+ }
+ if len(b.Succs) > 0 {
+ s += " ->"
+ for _, c := range b.Succs {
+ s += " " + c.String()
+ }
+ }
+ switch b.Likely {
+ case BranchUnlikely:
+ s += " (unlikely)"
+ case BranchLikely:
+ s += " (likely)"
+ }
+ return s
+}
+
+// AddEdgeTo adds an edge from block b to block c. Used during building of the
+// SSA graph; do not use on an already-completed SSA graph.
+func (b *Block) AddEdgeTo(c *Block) {
+ b.Succs = append(b.Succs, c)
+ c.Preds = append(c.Preds, b)
+}
+
+func (b *Block) Logf(msg string, args ...interface{}) { b.Func.Logf(msg, args...) }
+func (b *Block) Log() bool { return b.Func.Log() }
+func (b *Block) Fatalf(msg string, args ...interface{}) { b.Func.Fatalf(msg, args...) }
+func (b *Block) Unimplementedf(msg string, args ...interface{}) { b.Func.Unimplementedf(msg, args...) }
+
+type BranchPrediction int8
+
+const (
+ BranchUnlikely = BranchPrediction(-1)
+ BranchUnknown = BranchPrediction(0)
+ BranchLikely = BranchPrediction(+1)
+)
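The key invariant AddEdgeTo maintains is that Succs and Preds stay mirror images: every edge is appended to both endpoints at once. A stripped-down standalone sketch of that pattern (hypothetical minimal types, not the real ssa package):

```go
package main

import "fmt"

// Block is a minimal basic block holding only its CFG links.
type Block struct {
	ID           int
	Succs, Preds []*Block
}

// AddEdgeTo records an edge b -> c on both sides, as in the ssa package.
func (b *Block) AddEdgeTo(c *Block) {
	b.Succs = append(b.Succs, c)
	c.Preds = append(c.Preds, b)
}

func main() {
	entry := &Block{ID: 1}
	body := &Block{ID: 2}
	exit := &Block{ID: 3}
	entry.AddEdgeTo(body)
	body.AddEdgeTo(exit)
	fmt.Println(len(entry.Succs), len(body.Preds), len(exit.Preds))
}
```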
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// checkFunc checks invariants of f.
+func checkFunc(f *Func) {
+ blockMark := make([]bool, f.NumBlocks())
+ valueMark := make([]bool, f.NumValues())
+
+ for _, b := range f.Blocks {
+ if blockMark[b.ID] {
+ f.Fatalf("block %s appears twice in %s!", b, f.Name)
+ }
+ blockMark[b.ID] = true
+ if b.Func != f {
+ f.Fatalf("%s.Func=%s, want %s", b, b.Func.Name, f.Name)
+ }
+
+ if f.RegAlloc == nil {
+ for i, c := range b.Succs {
+ for j, d := range b.Succs {
+ if i != j && c == d {
+ f.Fatalf("%s.Succs has duplicate block %s", b, c)
+ }
+ }
+ }
+ }
+ // Note: duplicate successors are hard in the following case:
+ // if(...) goto x else goto x
+ // x: v = phi(a, b)
+ // If the conditional is true, does v get the value of a or b?
+ // We could solve this other ways, but the easiest is just to
+ // require (by possibly adding empty control-flow blocks) that
+ // all successors are distinct. They will need to be distinct
+ // anyway for register allocation (duplicate successors implies
+ // the existence of critical edges).
+ // After regalloc we can allow non-distinct predecessors.
+
+ for _, p := range b.Preds {
+ var found bool
+ for _, c := range p.Succs {
+ if c == b {
+ found = true
+ break
+ }
+ }
+ if !found {
+ f.Fatalf("block %s is not a succ of its pred block %s", b, p)
+ }
+ }
+
+ switch b.Kind {
+ case BlockExit:
+ if len(b.Succs) != 0 {
+ f.Fatalf("exit block %s has successors", b)
+ }
+ if b.Control == nil {
+ f.Fatalf("exit block %s has no control value", b)
+ }
+ if !b.Control.Type.IsMemory() {
+ f.Fatalf("exit block %s has non-memory control value %s", b, b.Control.LongString())
+ }
+ case BlockRet:
+ if len(b.Succs) != 0 {
+ f.Fatalf("ret block %s has successors", b)
+ }
+ if b.Control == nil {
+ f.Fatalf("ret block %s has nil control", b)
+ }
+ if !b.Control.Type.IsMemory() {
+ f.Fatalf("ret block %s has non-memory control value %s", b, b.Control.LongString())
+ }
+ case BlockRetJmp:
+ if len(b.Succs) != 0 {
+ f.Fatalf("retjmp block %s len(Succs)==%d, want 0", b, len(b.Succs))
+ }
+ if b.Control == nil {
+ f.Fatalf("retjmp block %s has nil control", b)
+ }
+ if !b.Control.Type.IsMemory() {
+ f.Fatalf("retjmp block %s has non-memory control value %s", b, b.Control.LongString())
+ }
+ if b.Aux == nil {
+ f.Fatalf("retjmp block %s has nil Aux field", b)
+ }
+ case BlockDead:
+ if len(b.Succs) != 0 {
+ f.Fatalf("dead block %s has successors", b)
+ }
+ if len(b.Preds) != 0 {
+ f.Fatalf("dead block %s has predecessors", b)
+ }
+ if len(b.Values) != 0 {
+ f.Fatalf("dead block %s has values", b)
+ }
+ if b.Control != nil {
+ f.Fatalf("dead block %s has a control value", b)
+ }
+ case BlockPlain:
+ if len(b.Succs) != 1 {
+ f.Fatalf("plain block %s len(Succs)==%d, want 1", b, len(b.Succs))
+ }
+ if b.Control != nil {
+ f.Fatalf("plain block %s has non-nil control %s", b, b.Control.LongString())
+ }
+ case BlockIf:
+ if len(b.Succs) != 2 {
+ f.Fatalf("if block %s len(Succs)==%d, want 2", b, len(b.Succs))
+ }
+ if b.Control == nil {
+ f.Fatalf("if block %s has no control value", b)
+ }
+ if !b.Control.Type.IsBoolean() {
+ f.Fatalf("if block %s has non-bool control value %s", b, b.Control.LongString())
+ }
+ case BlockCall:
+ if len(b.Succs) != 1 {
+ f.Fatalf("call block %s len(Succs)==%d, want 1", b, len(b.Succs))
+ }
+ if b.Control == nil {
+ f.Fatalf("call block %s has no control value", b)
+ }
+ if !b.Control.Type.IsMemory() {
+ f.Fatalf("call block %s has non-memory control value %s", b, b.Control.LongString())
+ }
+ case BlockCheck:
+ if len(b.Succs) != 1 {
+ f.Fatalf("check block %s len(Succs)==%d, want 1", b, len(b.Succs))
+ }
+ if b.Control == nil {
+ f.Fatalf("check block %s has no control value", b)
+ }
+ if !b.Control.Type.IsVoid() {
+ f.Fatalf("check block %s has non-void control value %s", b, b.Control.LongString())
+ }
+ case BlockFirst:
+ if len(b.Succs) != 2 {
+ f.Fatalf("plain/dead block %s len(Succs)==%d, want 2", b, len(b.Succs))
+ }
+ if b.Control != nil {
+ f.Fatalf("plain/dead block %s has a control value", b)
+ }
+ }
+ if len(b.Succs) > 2 && b.Likely != BranchUnknown {
+ f.Fatalf("likeliness prediction %d for block %s with %d successors", b.Likely, b, len(b.Succs))
+ }
+
+ for _, v := range b.Values {
+ // Check to make sure argument count makes sense (argLen of -1 indicates
+ // variable length args)
+ nArgs := opcodeTable[v.Op].argLen
+ if nArgs != -1 && int32(len(v.Args)) != nArgs {
+ f.Fatalf("value %v has %d args, expected %d", v.LongString(),
+ len(v.Args), nArgs)
+ }
+
+ // Check to make sure aux values make sense.
+ canHaveAux := false
+ canHaveAuxInt := false
+ switch opcodeTable[v.Op].auxType {
+ case auxNone:
+ case auxBool, auxInt8, auxInt16, auxInt32, auxInt64, auxFloat:
+ canHaveAuxInt = true
+ case auxString, auxSym:
+ canHaveAux = true
+ case auxSymOff, auxSymValAndOff:
+ canHaveAuxInt = true
+ canHaveAux = true
+ default:
+ f.Fatalf("unknown aux type for %s", v.Op)
+ }
+ if !canHaveAux && v.Aux != nil {
+ f.Fatalf("value %v has an Aux value %v but shouldn't", v.LongString(), v.Aux)
+ }
+ if !canHaveAuxInt && v.AuxInt != 0 {
+ f.Fatalf("value %v has an AuxInt value %d but shouldn't", v.LongString(), v.AuxInt)
+ }
+
+ for _, arg := range v.Args {
+ if arg == nil {
+ f.Fatalf("value %v has nil arg", v.LongString())
+ }
+ }
+
+ if valueMark[v.ID] {
+ f.Fatalf("value %s appears twice!", v.LongString())
+ }
+ valueMark[v.ID] = true
+
+ if v.Block != b {
+ f.Fatalf("%s.block != %s", v, b)
+ }
+ if v.Op == OpPhi && len(v.Args) != len(b.Preds) {
+ f.Fatalf("phi length %s does not match pred length %d for block %s", v.LongString(), len(b.Preds), b)
+ }
+
+ if v.Op == OpAddr {
+ if len(v.Args) == 0 {
+ f.Fatalf("no args for OpAddr %s", v.LongString())
+ }
+ if v.Args[0].Op != OpSP && v.Args[0].Op != OpSB {
+ f.Fatalf("bad arg to OpAddr %v", v)
+ }
+ }
+
+ // TODO: check for cycles in values
+ // TODO: check type
+ }
+ }
+
+ // Check to make sure all Blocks referenced are in the function.
+ if !blockMark[f.Entry.ID] {
+ f.Fatalf("entry block %v is missing", f.Entry)
+ }
+ for _, b := range f.Blocks {
+ for _, c := range b.Preds {
+ if !blockMark[c.ID] {
+ f.Fatalf("predecessor block %v for %v is missing", c, b)
+ }
+ }
+ for _, c := range b.Succs {
+ if !blockMark[c.ID] {
+ f.Fatalf("successor block %v for %v is missing", c, b)
+ }
+ }
+ }
+
+ if len(f.Entry.Preds) > 0 {
+ f.Fatalf("entry block %s of %s has predecessor(s) %v", f.Entry, f.Name, f.Entry.Preds)
+ }
+
+ // Check to make sure all Values referenced are in the function.
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ for i, a := range v.Args {
+ if !valueMark[a.ID] {
+ f.Fatalf("%v, arg %d of %v, is missing", a, i, v)
+ }
+ }
+ }
+ if b.Control != nil && !valueMark[b.Control.ID] {
+ f.Fatalf("control value for %s is missing: %v", b, b.Control)
+ }
+ }
+ for b := f.freeBlocks; b != nil; b = b.succstorage[0] {
+ if blockMark[b.ID] {
+ f.Fatalf("used block b%d in free list", b.ID)
+ }
+ }
+ for v := f.freeValues; v != nil; v = v.argstorage[0] {
+ if valueMark[v.ID] {
+ f.Fatalf("used value v%d in free list", v.ID)
+ }
+ }
+
+ // Check to make sure all args dominate uses.
+ if f.RegAlloc == nil {
+ // Note: regalloc introduces non-dominating args.
+ // See TODO in regalloc.go.
+ idom := dominators(f)
+ sdom := newSparseTree(f, idom)
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ for i, arg := range v.Args {
+ x := arg.Block
+ y := b
+ if v.Op == OpPhi {
+ y = b.Preds[i]
+ }
+ if !domCheck(f, sdom, x, y) {
+ f.Fatalf("arg %d of value %s does not dominate, arg=%s", i, v.LongString(), arg.LongString())
+ }
+ }
+ }
+ if b.Control != nil && !domCheck(f, sdom, b.Control.Block, b) {
+ f.Fatalf("control value %s for %s doesn't dominate", b.Control, b)
+ }
+ }
+ }
+}
+
+// domCheck reports whether x dominates y (including x==y).
+func domCheck(f *Func, sdom sparseTree, x, y *Block) bool {
+ if !sdom.isAncestorEq(y, f.Entry) {
+ // unreachable - ignore
+ return true
+ }
+ return sdom.isAncestorEq(x, y)
+}
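checkFunc leans repeatedly on one small idiom: since block and value IDs are allocated densely, a boolean slice indexed by ID detects duplicates and missing references in linear time. A standalone sketch of that mark-array pattern (hypothetical minimal type, not the real ssa package):

```go
package main

import "fmt"

// block carries only the dense ID that the mark array is indexed by.
type block struct{ id int }

// firstDuplicate walks blocks once, marking each ID as seen; the first
// ID seen twice is reported, mirroring checkFunc's blockMark check.
func firstDuplicate(blocks []*block, numIDs int) (int, bool) {
	mark := make([]bool, numIDs)
	for _, b := range blocks {
		if mark[b.id] {
			return b.id, true
		}
		mark[b.id] = true
	}
	return 0, false
}

func main() {
	b1, b2 := &block{id: 1}, &block{id: 2}
	_, dup := firstDuplicate([]*block{b1, b2}, 3)
	fmt.Println(dup)
	_, dup = firstDuplicate([]*block{b1, b2, b1}, 3)
	fmt.Println(dup)
}
```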
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "fmt"
+ "log"
+ "runtime"
+ "strings"
+ "time"
+)
+
+// Compile is the main entry point for this package.
+// Compile modifies f so that on return:
+// · all Values in f map to 0 or 1 assembly instructions of the target architecture
+// · the order of f.Blocks is the order to emit the Blocks
+// · the order of b.Values is the order to emit the Values in each Block
+// · f has a non-nil regAlloc field
+func Compile(f *Func) {
+ // TODO: debugging - set flags to control verbosity of compiler,
+ // which phases to dump IR before/after, etc.
+ if f.Log() {
+ f.Logf("compiling %s\n", f.Name)
+ }
+
+ // hook to print function & phase if panic happens
+ phaseName := "init"
+ defer func() {
+ if phaseName != "" {
+ err := recover()
+ stack := make([]byte, 16384)
+ n := runtime.Stack(stack, false)
+ stack = stack[:n]
+ f.Fatalf("panic during %s while compiling %s:\n\n%v\n\n%s\n", phaseName, f.Name, err, stack)
+ }
+ }()
+
+ // Run all the passes
+ printFunc(f)
+ f.Config.HTML.WriteFunc("start", f)
+ checkFunc(f)
+ const logMemStats = false
+ for _, p := range passes {
+ if !f.Config.optimize && !p.required {
+ continue
+ }
+ f.pass = &p
+ phaseName = p.name
+ if f.Log() {
+ f.Logf(" pass %s begin\n", p.name)
+ }
+ // TODO: capture logging during this pass, add it to the HTML
+ var mStart runtime.MemStats
+ if logMemStats || p.mem {
+ runtime.ReadMemStats(&mStart)
+ }
+
+ tStart := time.Now()
+ p.fn(f)
+ tEnd := time.Now()
+
+ // Need something less crude than "Log the whole intermediate result".
+ if f.Log() || f.Config.HTML != nil {
+ time := tEnd.Sub(tStart).Nanoseconds()
+ var stats string
+ if logMemStats {
+ var mEnd runtime.MemStats
+ runtime.ReadMemStats(&mEnd)
+ nBytes := mEnd.TotalAlloc - mStart.TotalAlloc
+ nAllocs := mEnd.Mallocs - mStart.Mallocs
+ stats = fmt.Sprintf("[%d ns %d allocs %d bytes]", time, nAllocs, nBytes)
+ } else {
+ stats = fmt.Sprintf("[%d ns]", time)
+ }
+
+ f.Logf(" pass %s end %s\n", p.name, stats)
+ printFunc(f)
+ f.Config.HTML.WriteFunc(fmt.Sprintf("after %s <span class=\"stats\">%s</span>", phaseName, stats), f)
+ }
+ if p.time || p.mem {
+ // Surround timing information w/ enough context to allow comparisons.
+ time := tEnd.Sub(tStart).Nanoseconds()
+ if p.time {
+ f.logStat("TIME(ns)", time)
+ }
+ if p.mem {
+ var mEnd runtime.MemStats
+ runtime.ReadMemStats(&mEnd)
+ nBytes := mEnd.TotalAlloc - mStart.TotalAlloc
+ nAllocs := mEnd.Mallocs - mStart.Mallocs
+ f.logStat("TIME(ns):BYTES:ALLOCS", time, nBytes, nAllocs)
+ }
+ }
+ checkFunc(f)
+ }
+
+ // Squash error printing defer
+ phaseName = ""
+}
+
+type pass struct {
+ name string
+ fn func(*Func)
+ required bool
+ disabled bool
+ time bool // report time to run pass
+ mem bool // report mem stats to run pass
+ stats int // pass reports own "stats" (e.g., branches removed)
+ debug int // pass performs some debugging. =1 should be in error-testing-friendly Warnl format.
+ test int // pass-specific ad-hoc option, perhaps useful in development
+}
+
+// PhaseOption sets the specified flag in the specified ssa phase,
+// returning empty string if this was successful or a string explaining
+// the error if it was not. A version of the phase name with "_"
+// replaced by " " is also checked for a match.
+// See gc/lex.go for dissection of the option string. Example use:
+// GO_GCFLAGS=-d=ssa/generic_cse/time,ssa/generic_cse/stats,ssa/generic_cse/debug=3 ./make.bash ...
+//
+func PhaseOption(phase, flag string, val int) string {
+ underphase := strings.Replace(phase, "_", " ", -1)
+ for i, p := range passes {
+ if p.name == phase || p.name == underphase {
+ switch flag {
+ case "on":
+ p.disabled = val == 0
+ case "off":
+ p.disabled = val != 0
+ case "time":
+ p.time = val != 0
+ case "mem":
+ p.mem = val != 0
+ case "debug":
+ p.debug = val
+ case "stats":
+ p.stats = val
+ case "test":
+ p.test = val
+ default:
+ return fmt.Sprintf("Did not find a flag matching %s in -d=ssa/%s debug option", flag, phase)
+ }
+ if p.disabled && p.required {
+ return fmt.Sprintf("Cannot disable required SSA phase %s using -d=ssa/%s debug option", phase, phase)
+ }
+ passes[i] = p
+ return ""
+ }
+ }
+ return fmt.Sprintf("Did not find a phase matching %s in -d=ssa/... debug option", phase)
+}
+
+// list of passes for the compiler
+var passes = [...]pass{
+ // TODO: combine phielim and copyelim into a single pass?
+ {name: "early phielim", fn: phielim},
+ {name: "early copyelim", fn: copyelim},
+ {name: "early deadcode", fn: deadcode}, // remove generated dead code to avoid doing pointless work during opt
+ {name: "short circuit", fn: shortcircuit},
+ {name: "decompose user", fn: decomposeUser, required: true},
+ {name: "decompose builtin", fn: decomposeBuiltIn, required: true},
+ {name: "opt", fn: opt, required: true}, // TODO: split required rules and optimizing rules
+ {name: "zero arg cse", fn: zcse, required: true}, // required to merge OpSB values
+ {name: "opt deadcode", fn: deadcode}, // remove any blocks orphaned during opt
+ {name: "generic cse", fn: cse},
+ {name: "phiopt", fn: phiopt},
+ {name: "nilcheckelim", fn: nilcheckelim},
+ {name: "prove", fn: prove},
+ {name: "generic deadcode", fn: deadcode},
+ {name: "fuse", fn: fuse},
+ {name: "dse", fn: dse},
+ {name: "tighten", fn: tighten}, // move values closer to their uses
+ {name: "lower", fn: lower, required: true},
+ {name: "lowered cse", fn: cse},
+ {name: "lowered deadcode", fn: deadcode, required: true},
+ {name: "checkLower", fn: checkLower, required: true},
+ {name: "late phielim", fn: phielim},
+ {name: "late copyelim", fn: copyelim},
+ {name: "late deadcode", fn: deadcode},
+ {name: "critical", fn: critical, required: true}, // remove critical edges
+ {name: "likelyadjust", fn: likelyadjust},
+ {name: "layout", fn: layout, required: true}, // schedule blocks
+ {name: "schedule", fn: schedule, required: true}, // schedule values
+ {name: "flagalloc", fn: flagalloc, required: true}, // allocate flags register
+ {name: "regalloc", fn: regalloc, required: true}, // allocate int & float registers + stack slots
+ {name: "trim", fn: trim}, // remove empty blocks
+}
+
+// Double-check phase ordering constraints.
+// This code is intended to document the ordering requirements
+// between different phases. It does not override the passes
+// list above.
+type constraint struct {
+ a, b string // a must come before b
+}
+
+var passOrder = [...]constraint{
+ // prove relies on common-subexpression elimination for maximum benefit.
+ {"generic cse", "prove"},
+ // deadcode after prove to eliminate all new dead blocks.
+ {"prove", "generic deadcode"},
+ // common-subexpression before dead-store elim, so that we recognize
+ // when two address expressions are the same.
+ {"generic cse", "dse"},
+ // cse substantially improves nilcheckelim efficacy
+ {"generic cse", "nilcheckelim"},
+ // allow deadcode to clean up after nilcheckelim
+ {"nilcheckelim", "generic deadcode"},
+ // nilcheckelim generates sequences of plain basic blocks
+ {"nilcheckelim", "fuse"},
+ // nilcheckelim relies on opt to rewrite user nil checks
+ {"opt", "nilcheckelim"},
+ // tighten should happen before lowering to avoid splitting naturally paired instructions such as CMP/SET
+ {"tighten", "lower"},
+ // tighten will be most effective when as many values have been removed as possible
+ {"generic deadcode", "tighten"},
+ {"generic cse", "tighten"},
+ // don't run optimization pass until we've decomposed builtin objects
+ {"decompose builtin", "opt"},
+ // don't layout blocks until critical edges have been removed
+ {"critical", "layout"},
+ // regalloc requires the removal of all critical edges
+ {"critical", "regalloc"},
+ // regalloc requires all the values in a block to be scheduled
+ {"schedule", "regalloc"},
+ // checkLower must run after lowering & subsequent dead code elim
+ {"lower", "checkLower"},
+ {"lowered deadcode", "checkLower"},
+ // flagalloc needs instructions to be scheduled.
+ {"schedule", "flagalloc"},
+ // regalloc needs flags to be allocated first.
+ {"flagalloc", "regalloc"},
+ // trim needs regalloc to be done first.
+ {"regalloc", "trim"},
+}
+
+func init() {
+ for _, c := range passOrder {
+ a, b := c.a, c.b
+ i := -1
+ j := -1
+ for k, p := range passes {
+ if p.name == a {
+ i = k
+ }
+ if p.name == b {
+ j = k
+ }
+ }
+ if i < 0 {
+ log.Panicf("pass %s not found", a)
+ }
+ if j < 0 {
+ log.Panicf("pass %s not found", b)
+ }
+ if i >= j {
+ log.Panicf("passes %s and %s out of order", a, b)
+ }
+ }
+}
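The init function above is a compile-time sanity check: each (a, b) constraint in passOrder must have a appearing before b in the passes list. The same check can be sketched standalone (hypothetical helper, not the real compiler code, returning an error instead of panicking):

```go
package main

import "fmt"

// ordered verifies that for every constraint {a, b}, pass a appears
// before pass b in the pass list, as the ssa init check does.
func ordered(passes []string, constraints [][2]string) error {
	idx := make(map[string]int)
	for i, p := range passes {
		idx[p] = i
	}
	for _, c := range constraints {
		i, ok1 := idx[c[0]]
		j, ok2 := idx[c[1]]
		if !ok1 || !ok2 {
			return fmt.Errorf("pass %s or %s not found", c[0], c[1])
		}
		if i >= j {
			return fmt.Errorf("passes %s and %s out of order", c[0], c[1])
		}
	}
	return nil
}

func main() {
	passes := []string{"generic cse", "prove", "generic deadcode"}
	constraints := [][2]string{{"generic cse", "prove"}, {"prove", "generic deadcode"}}
	fmt.Println(ordered(passes, constraints))
}
```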
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "cmd/internal/obj"
+ "crypto/sha1"
+ "fmt"
+ "os"
+ "strings"
+)
+
+type Config struct {
+ arch string // "amd64", etc.
+ IntSize int64 // 4 or 8
+ PtrSize int64 // 4 or 8
+ lowerBlock func(*Block) bool // lowering function
+ lowerValue func(*Value, *Config) bool // lowering function
+ fe Frontend // callbacks into compiler frontend
+ HTML *HTMLWriter // html writer, for debugging
+ ctxt *obj.Link // Generic arch information
+ optimize bool // Do optimization
+ curFunc *Func
+
+ // TODO: more stuff. Compiler flags of interest, ...
+
+ // Given an environment variable used for debug hash match,
+ // what file (if any) receives the yes/no logging?
+ logfiles map[string]*os.File
+
+ // Storage for low-numbered values and blocks.
+ values [2000]Value
+ blocks [200]Block
+
+ domblockstore []ID // scratch space for computing dominators
+ scrSparse []*sparseSet // scratch sparse sets to be re-used.
+}
+
+type TypeSource interface {
+ TypeBool() Type
+ TypeInt8() Type
+ TypeInt16() Type
+ TypeInt32() Type
+ TypeInt64() Type
+ TypeUInt8() Type
+ TypeUInt16() Type
+ TypeUInt32() Type
+ TypeUInt64() Type
+ TypeInt() Type
+ TypeFloat32() Type
+ TypeFloat64() Type
+ TypeUintptr() Type
+ TypeString() Type
+ TypeBytePtr() Type // TODO: use unsafe.Pointer instead?
+
+ CanSSA(t Type) bool
+}
+
+type Logger interface {
+ // Logf logs a message from the compiler.
+ Logf(string, ...interface{})
+
+ // Log returns true if logging is not a no-op;
+ // some logging calls account for more than a few heap allocations.
+ Log() bool
+
+ // Fatalf reports a compiler error and exits.
+ Fatalf(line int32, msg string, args ...interface{})
+
+ // Unimplementedf reports that the function cannot be compiled.
+ // It will be removed once SSA work is complete.
+ Unimplementedf(line int32, msg string, args ...interface{})
+
+ // Warnl writes compiler messages in the form expected by "errorcheck" tests.
+ Warnl(line int, fmt_ string, args ...interface{})
+
+ // Debug_checknil forwards the Debug_checknil flag from gc.
+ Debug_checknil() bool
+}
+
+type Frontend interface {
+ TypeSource
+ Logger
+
+ // StringData returns a symbol pointing to the given string's contents.
+ StringData(string) interface{} // returns *gc.Sym
+
+ // Auto returns a Node for an auto variable of the given type.
+ // The SSA compiler uses this function to allocate space for spills.
+ Auto(Type) GCNode
+
+ // Line returns a string describing the given line number.
+ Line(int32) string
+}
+
+// GCNode is an interface used to hold a *gc.Node. We'd use *gc.Node
+// directly, but that would lead to an import cycle.
+type GCNode interface {
+ Typ() Type
+ String() string
+}
+
+// NewConfig returns a new configuration object for the given architecture.
+func NewConfig(arch string, fe Frontend, ctxt *obj.Link, optimize bool) *Config {
+ c := &Config{arch: arch, fe: fe}
+ switch arch {
+ case "amd64":
+ c.IntSize = 8
+ c.PtrSize = 8
+ c.lowerBlock = rewriteBlockAMD64
+ c.lowerValue = rewriteValueAMD64
+ case "386":
+ c.IntSize = 4
+ c.PtrSize = 4
+ c.lowerBlock = rewriteBlockAMD64
+ c.lowerValue = rewriteValueAMD64 // TODO(khr): full 32-bit support
+ default:
+ fe.Unimplementedf(0, "arch %s not implemented", arch)
+ }
+ c.ctxt = ctxt
+ c.optimize = optimize
+
+ // Assign IDs to preallocated values/blocks.
+ for i := range c.values {
+ c.values[i].ID = ID(i)
+ }
+ for i := range c.blocks {
+ c.blocks[i].ID = ID(i)
+ }
+
+ c.logfiles = make(map[string]*os.File)
+
+ return c
+}
+
+func (c *Config) Frontend() Frontend { return c.fe }
+
+// NewFunc returns a new, empty function object.
+// Caller must call f.Free() before calling NewFunc again.
+func (c *Config) NewFunc() *Func {
+ // TODO(khr): should this function take name, type, etc. as arguments?
+ if c.curFunc != nil {
+ c.Fatalf(0, "NewFunc called without previous Free")
+ }
+ f := &Func{Config: c, NamedValues: map[LocalSlot][]*Value{}}
+ c.curFunc = f
+ return f
+}
+
+func (c *Config) Logf(msg string, args ...interface{}) { c.fe.Logf(msg, args...) }
+func (c *Config) Log() bool { return c.fe.Log() }
+func (c *Config) Fatalf(line int32, msg string, args ...interface{}) { c.fe.Fatalf(line, msg, args...) }
+func (c *Config) Unimplementedf(line int32, msg string, args ...interface{}) {
+ c.fe.Unimplementedf(line, msg, args...)
+}
+func (c *Config) Warnl(line int, msg string, args ...interface{}) { c.fe.Warnl(line, msg, args...) }
+func (c *Config) Debug_checknil() bool { return c.fe.Debug_checknil() }
+
+func (c *Config) logDebugHashMatch(evname, name string) {
+ file := c.logfiles[evname]
+ if file == nil {
+ file = os.Stdout
+ tmpfile := os.Getenv("GSHS_LOGFILE")
+ if tmpfile != "" {
+ var err error
+ file, err = os.Create(tmpfile)
+ if err != nil {
+ c.Fatalf(0, "Could not open hash-testing logfile %s", tmpfile)
+ }
+ }
+ c.logfiles[evname] = file
+ }
+ s := fmt.Sprintf("%s triggered %s\n", evname, name)
+ file.WriteString(s)
+ file.Sync()
+}
+
+// DebugHashMatch returns true if environment variable evname
+// 1) is empty (a quick special case of 3), or
+// 2) is "y" or "Y", or
+// 3) is a suffix of the sha1 hash of name, or
+// 4) is a suffix of the value of the environment variable
+//    fmt.Sprintf("%s%d", evname, i)
+//    for some i >= 0, provided all such variables up to i are nonempty.
+// Otherwise it returns false.
+// When true is returned, the message
+//    "%s triggered %s\n", evname, name
+// is printed to the file named in environment variable GSHS_LOGFILE,
+// or to standard output if that variable is empty or the file cannot
+// be opened.
+func (c *Config) DebugHashMatch(evname, name string) bool {
+ evhash := os.Getenv(evname)
+ if evhash == "" {
+ return true // default behavior with no EV is "on"
+ }
+ if evhash == "y" || evhash == "Y" {
+ c.logDebugHashMatch(evname, name)
+ return true
+ }
+ if evhash == "n" || evhash == "N" {
+ return false
+ }
+ // Check the hash of the name against a partial input hash.
+ // We use this feature to do a binary search to
+ // find a function that is incorrectly compiled.
+ hstr := ""
+ for _, b := range sha1.Sum([]byte(name)) {
+ hstr += fmt.Sprintf("%08b", b)
+ }
+
+ if strings.HasSuffix(hstr, evhash) {
+ c.logDebugHashMatch(evname, name)
+ return true
+ }
+
+ // Iteratively try additional hashes to allow tests for multi-point
+ // failure.
+ for i := 0; true; i++ {
+ ev := fmt.Sprintf("%s%d", evname, i)
+ evv := os.Getenv(ev)
+ if evv == "" {
+ break
+ }
+ if strings.HasSuffix(hstr, evv) {
+ c.logDebugHashMatch(ev, name)
+ return true
+ }
+ }
+ return false
+}
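DebugHashMatch's suffix test operates on a binary rendering of the SHA1 hash, which is what makes the bit-at-a-time binary search for a miscompiled function possible. A minimal standalone sketch of that encoding (the helper name `hashString` is ours, not the compiler's):

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"strings"
)

// hashString mirrors the encoding DebugHashMatch compares suffixes
// against: each of the 20 SHA1 bytes rendered as 8 binary digits,
// 160 characters in total. Growing the suffix one bit at a time
// roughly halves the set of matching function names.
func hashString(name string) string {
	h := ""
	for _, b := range sha1.Sum([]byte(name)) {
		h += fmt.Sprintf("%08b", b)
	}
	return h
}

func main() {
	h := hashString("myFunc")
	fmt.Println(len(h))                        // 160
	fmt.Println(strings.HasSuffix(h, h[150:])) // true
}
```

Each extra bit appended to the environment variable's value narrows the set of functions whose hash ends in that suffix, which is how the binary search isolates a single bad compilation.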
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// copyelim removes all copies from f.
+func copyelim(f *Func) {
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ copyelimValue(v)
+ }
+ v := b.Control
+ if v != nil {
+ for v.Op == OpCopy {
+ v = v.Args[0]
+ }
+ b.Control = v
+ }
+ }
+
+ // Update named values.
+ for _, name := range f.Names {
+ values := f.NamedValues[name]
+ for i, v := range values {
+ x := v
+ for x.Op == OpCopy {
+ x = x.Args[0]
+ }
+ if x != v {
+ values[i] = x
+ }
+ }
+ }
+}
+
+func copyelimValue(v *Value) {
+ // elide any copies generated during rewriting
+ for i, a := range v.Args {
+ if a.Op != OpCopy {
+ continue
+ }
+ // Rewriting can generate OpCopy loops.
+ // They are harmless (see removePred),
+ // but take care to stop if we find a cycle.
+ slow := a // advances every other iteration
+ var advance bool
+ for a.Op == OpCopy {
+ a = a.Args[0]
+ if slow == a {
+ break
+ }
+ if advance {
+ slow = slow.Args[0]
+ }
+ advance = !advance
+ }
+ v.Args[i] = a
+ }
+}
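The slow/fast pointer dance in copyelimValue is Floyd's cycle detection in miniature. A toy version over a plain linked list (the `node` type and `follow` name are ours) shows why the walk terminates even when rewriting has produced a self-referential copy loop:

```go
package main

import "fmt"

type node struct{ next *node }

// follow walks a chain to its end, advancing a slow pointer every
// other step exactly as copyelimValue does, so a cycle is detected
// when the fast pointer laps the slow one instead of looping forever.
func follow(a *node) *node {
	slow := a // advances every other iteration
	var advance bool
	for a.next != nil {
		a = a.next
		if slow == a {
			break // cycle detected
		}
		if advance {
			slow = slow.next
		}
		advance = !advance
	}
	return a
}

func main() {
	// Straight chain n1 -> n2 -> n3 resolves to the tail.
	n3 := &node{}
	n2 := &node{next: n3}
	n1 := &node{next: n2}
	fmt.Println(follow(n1) == n3) // true

	// A self-loop stops instead of spinning.
	c := &node{}
	c.next = c
	fmt.Println(follow(c) == c) // true
}
```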
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// critical splits critical edges (those that go from a block with
+// more than one outedge to a block with more than one inedge).
+// Regalloc wants a critical-edge-free CFG so it can implement phi values.
+func critical(f *Func) {
+ for _, b := range f.Blocks {
+ if len(b.Preds) <= 1 {
+ continue
+ }
+
+ // split input edges coming from multi-output blocks.
+ for i, c := range b.Preds {
+ if c.Kind == BlockPlain {
+ continue // only single output block
+ }
+
+ // allocate a new block to place on the edge
+ d := f.NewBlock(BlockPlain)
+ d.Line = c.Line
+
+ // splice it in
+ d.Preds = append(d.Preds, c)
+ d.Succs = append(d.Succs, b)
+ b.Preds[i] = d
+ // replace b with d in c's successor list.
+ for j, b2 := range c.Succs {
+ if b2 == b {
+ c.Succs[j] = d
+ break
+ }
+ }
+ }
+ }
+}
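The splicing in critical() is just edge surgery on adjacency lists: a fresh empty block takes over one predecessor/successor pair. A sketch of the same splice on a toy string-keyed graph (names and representation ours, not the compiler's Block type):

```go
package main

import "fmt"

// splitEdge places a new empty node d on the edge u->v, mirroring
// what critical() does with a fresh BlockPlain: u now branches to d,
// d unconditionally continues to v.
func splitEdge(succs, preds map[string][]string, u, v, d string) {
	for i, s := range succs[u] {
		if s == v {
			succs[u][i] = d // u's successor v becomes d
		}
	}
	for i, p := range preds[v] {
		if p == u {
			preds[v][i] = d // v's predecessor u becomes d
		}
	}
	succs[d] = []string{v}
	preds[d] = []string{u}
}

func main() {
	// u has two successors and v has two predecessors: u->v is critical.
	succs := map[string][]string{"u": {"v", "w"}}
	preds := map[string][]string{"v": {"u", "x"}}
	splitEdge(succs, preds, "u", "v", "d")
	fmt.Println(succs["u"], succs["d"], preds["v"]) // [d w] [v] [d x]
}
```

After the split, any phi-related move regalloc must place on the former u->v edge has a dedicated block to live in.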
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "fmt"
+ "sort"
+)
+
+const (
+ cmpDepth = 4
+)
+
+// cse does common-subexpression elimination on the Function.
+// Values are just relinked, nothing is deleted. A subsequent deadcode
+// pass is required to actually remove duplicate expressions.
+func cse(f *Func) {
+ // Two values are equivalent if they satisfy the following definition:
+ // equivalent(v, w):
+ // v.op == w.op
+ // v.type == w.type
+ // v.aux == w.aux
+ // v.auxint == w.auxint
+ // len(v.args) == len(w.args)
+ // v.block == w.block if v.op == OpPhi
+ // equivalent(v.args[i], w.args[i]) for i in 0..len(v.args)-1
+
+ // The algorithm searches for a partition of f's values into
+ // equivalence classes using the above definition.
+ // It starts with a coarse partition and iteratively refines it
+ // until it reaches a fixed point.
+
+ // Make initial coarse partitions by using a subset of the conditions above.
+ a := make([]*Value, 0, f.NumValues())
+ auxIDs := auxmap{}
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ if auxIDs[v.Aux] == 0 {
+ auxIDs[v.Aux] = int32(len(auxIDs)) + 1
+ }
+ if v.Type.IsMemory() {
+ continue // memory values can never cse
+ }
+ if opcodeTable[v.Op].commutative && len(v.Args) == 2 && v.Args[1].ID < v.Args[0].ID {
+ // Order the arguments of binary commutative operations.
+ v.Args[0], v.Args[1] = v.Args[1], v.Args[0]
+ }
+ a = append(a, v)
+ }
+ }
+ partition := partitionValues(a, auxIDs)
+
+ // map from value id back to eqclass id
+ valueEqClass := make([]ID, f.NumValues())
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ // Use negative equivalence class #s for unique values.
+ valueEqClass[v.ID] = -v.ID
+ }
+ }
+ for i, e := range partition {
+ if f.pass.debug > 1 && len(e) > 500 {
+ fmt.Printf("CSE.large partition (%d): ", len(e))
+ for j := 0; j < 3; j++ {
+ fmt.Printf("%s ", e[j].LongString())
+ }
+ fmt.Println()
+ }
+
+ for _, v := range e {
+ valueEqClass[v.ID] = ID(i)
+ }
+ if f.pass.debug > 2 && len(e) > 1 {
+ fmt.Printf("CSE.partition #%d:", i)
+ for _, v := range e {
+ fmt.Printf(" %s", v.String())
+ }
+ fmt.Printf("\n")
+ }
+ }
+
+ // Find an equivalence class where some members of the class have
+ // non-equivalent arguments. Split the equivalence class appropriately.
+ // Repeat until we can't find any more splits.
+ for {
+ changed := false
+
+ // partition can grow in the loop. By not using a range loop here,
+ // we process new additions as they arrive, avoiding O(n^2) behavior.
+ for i := 0; i < len(partition); i++ {
+ e := partition[i]
+ v := e[0]
+ // all values in this equiv class that are not equivalent to v get moved
+ // into another equiv class.
+ // To avoid allocating while building that equivalence class,
+ // move the values equivalent to v to the beginning of e
+ // and other values to the end of e.
+ allvals := e
+ eqloop:
+ for j := 1; j < len(e); {
+ w := e[j]
+ equivalent := true
+ for i := 0; i < len(v.Args); i++ {
+ if valueEqClass[v.Args[i].ID] != valueEqClass[w.Args[i].ID] {
+ equivalent = false
+ break
+ }
+ }
+ if !equivalent || !v.Type.Equal(w.Type) {
+ // w is not equivalent to v.
+ // move it to the end and shrink e.
+ e[j], e[len(e)-1] = e[len(e)-1], e[j]
+ e = e[:len(e)-1]
+ valueEqClass[w.ID] = ID(len(partition))
+ changed = true
+ continue eqloop
+ }
+ // v and w are equivalent. Keep w in e.
+ j++
+ }
+ partition[i] = e
+ if len(e) < len(allvals) {
+ partition = append(partition, allvals[len(e):])
+ }
+ }
+
+ if !changed {
+ break
+ }
+ }
+
+ // Compute dominator tree
+ idom := dominators(f)
+ sdom := newSparseTree(f, idom)
+
+ // Compute substitutions we would like to do. We substitute v for w
+ // if v and w are in the same equivalence class and v dominates w.
+ rewrite := make([]*Value, f.NumValues())
+ for _, e := range partition {
+ for len(e) > 1 {
+ // Find a maximal dominant element in e
+ v := e[0]
+ for _, w := range e[1:] {
+ if sdom.isAncestorEq(w.Block, v.Block) {
+ v = w
+ }
+ }
+
+ // Replace all elements of e which v dominates
+ for i := 0; i < len(e); {
+ w := e[i]
+ if w == v {
+ e, e[i] = e[:len(e)-1], e[len(e)-1]
+ } else if sdom.isAncestorEq(v.Block, w.Block) {
+ rewrite[w.ID] = v
+ e, e[i] = e[:len(e)-1], e[len(e)-1]
+ } else {
+ i++
+ }
+ }
+ }
+ }
+
+ rewrites := int64(0)
+
+ // Apply substitutions
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ for i, w := range v.Args {
+ if x := rewrite[w.ID]; x != nil {
+ v.SetArg(i, x)
+ rewrites++
+ }
+ }
+ }
+ if v := b.Control; v != nil {
+ if x := rewrite[v.ID]; x != nil {
+ if v.Op == OpNilCheck {
+ // nilcheck pass will remove the nil checks and log
+ // them appropriately, so don't mess with them here.
+ continue
+ }
+ b.Control = x
+ }
+ }
+ }
+ if f.pass.stats > 0 {
+ f.logStat("CSE REWRITES", rewrites)
+ }
+}
+
+// An eqclass approximates an equivalence class. During the
+// algorithm it may represent the union of several of the
+// final equivalence classes.
+type eqclass []*Value
+
+// partitionValues partitions the values into equivalence classes
+// based on having all the following features match:
+// - opcode
+// - type
+// - auxint
+// - aux
+// - nargs
+// - block # if a phi op
+// - first two args' opcodes and auxint
+// - NOT first two args' aux; that can break CSE.
+// partitionValues returns a list of equivalence classes, each
+// being a sorted by ID list of *Values. The eqclass slices are
+// backed by the same storage as the input slice.
+// Equivalence classes of size 1 are ignored.
+func partitionValues(a []*Value, auxIDs auxmap) []eqclass {
+ sort.Sort(sortvalues{a, auxIDs})
+
+ var partition []eqclass
+ for len(a) > 0 {
+ v := a[0]
+ j := 1
+ for ; j < len(a); j++ {
+ w := a[j]
+ if cmpVal(v, w, auxIDs, cmpDepth) != CMPeq {
+ break
+ }
+ }
+ if j > 1 {
+ partition = append(partition, a[:j])
+ }
+ a = a[j:]
+ }
+
+ return partition
+}
+
+func lt2Cmp(isLt bool) Cmp {
+ if isLt {
+ return CMPlt
+ }
+ return CMPgt
+}
+
+type auxmap map[interface{}]int32
+
+func cmpVal(v, w *Value, auxIDs auxmap, depth int) Cmp {
+ // Try to order these comparisons by cost (cheaper first).
+ if v.Op != w.Op {
+ return lt2Cmp(v.Op < w.Op)
+ }
+ if v.AuxInt != w.AuxInt {
+ return lt2Cmp(v.AuxInt < w.AuxInt)
+ }
+ if len(v.Args) != len(w.Args) {
+ return lt2Cmp(len(v.Args) < len(w.Args))
+ }
+ if v.Op == OpPhi && v.Block != w.Block {
+ return lt2Cmp(v.Block.ID < w.Block.ID)
+ }
+
+ if tc := v.Type.Compare(w.Type); tc != CMPeq {
+ return tc
+ }
+
+ if v.Aux != w.Aux {
+ if v.Aux == nil {
+ return CMPlt
+ }
+ if w.Aux == nil {
+ return CMPgt
+ }
+ return lt2Cmp(auxIDs[v.Aux] < auxIDs[w.Aux])
+ }
+
+ if depth > 0 {
+ for i := range v.Args {
+ if v.Args[i] == w.Args[i] {
+ // skip comparing equal args
+ continue
+ }
+ if ac := cmpVal(v.Args[i], w.Args[i], auxIDs, depth-1); ac != CMPeq {
+ return ac
+ }
+ }
+ }
+
+ return CMPeq
+}
+
+// Sort values to make the initial partition.
+type sortvalues struct {
+ a []*Value // array of values
+ auxIDs auxmap // aux -> aux ID map
+}
+
+func (sv sortvalues) Len() int { return len(sv.a) }
+func (sv sortvalues) Swap(i, j int) { sv.a[i], sv.a[j] = sv.a[j], sv.a[i] }
+func (sv sortvalues) Less(i, j int) bool {
+ v := sv.a[i]
+ w := sv.a[j]
+ if cmp := cmpVal(v, w, sv.auxIDs, cmpDepth); cmp != CMPeq {
+ return cmp == CMPlt
+ }
+
+ // Sort by value ID last to keep the sort result deterministic.
+ return v.ID < w.ID
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import "testing"
+
+type tstAux struct {
+ s string
+}
+
+// This tests for a bug found when partitioning, but not sorting, by the Aux value.
+func TestCSEAuxPartitionBug(t *testing.T) {
+ c := testConfig(t)
+ arg1Aux := &tstAux{"arg1-aux"}
+ arg2Aux := &tstAux{"arg2-aux"}
+ arg3Aux := &tstAux{"arg3-aux"}
+
+ // construct lots of values with args that have aux values and place
+ // them in an order that triggers the bug
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("start", OpInitMem, TypeMem, 0, nil),
+ Valu("sp", OpSP, TypeBytePtr, 0, nil),
+ Valu("r7", OpAdd64, TypeInt64, 0, nil, "arg3", "arg1"),
+ Valu("r1", OpAdd64, TypeInt64, 0, nil, "arg1", "arg2"),
+ Valu("arg1", OpArg, TypeInt64, 0, arg1Aux),
+ Valu("arg2", OpArg, TypeInt64, 0, arg2Aux),
+ Valu("arg3", OpArg, TypeInt64, 0, arg3Aux),
+ Valu("r9", OpAdd64, TypeInt64, 0, nil, "r7", "r8"),
+ Valu("r4", OpAdd64, TypeInt64, 0, nil, "r1", "r2"),
+ Valu("r8", OpAdd64, TypeInt64, 0, nil, "arg3", "arg2"),
+ Valu("r2", OpAdd64, TypeInt64, 0, nil, "arg1", "arg2"),
+ Valu("raddr", OpAddr, TypeInt64Ptr, 0, nil, "sp"),
+ Valu("raddrdef", OpVarDef, TypeMem, 0, nil, "start"),
+ Valu("r6", OpAdd64, TypeInt64, 0, nil, "r4", "r5"),
+ Valu("r3", OpAdd64, TypeInt64, 0, nil, "arg1", "arg2"),
+ Valu("r5", OpAdd64, TypeInt64, 0, nil, "r2", "r3"),
+ Valu("r10", OpAdd64, TypeInt64, 0, nil, "r6", "r9"),
+ Valu("rstore", OpStore, TypeMem, 8, nil, "raddr", "r10", "raddrdef"),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("rstore")))
+
+ CheckFunc(fun.f)
+ cse(fun.f)
+ deadcode(fun.f)
+ CheckFunc(fun.f)
+
+ s1Cnt := 2
+ // r1 == r2 == r3, needs to remove two of this set
+ s2Cnt := 1
+ // r4 == r5, needs to remove one of these
+ for k, v := range fun.values {
+ if v.Op == OpInvalid {
+ switch k {
+ case "r1":
+ fallthrough
+ case "r2":
+ fallthrough
+ case "r3":
+ if s1Cnt == 0 {
+ t.Errorf("cse removed all of r1,r2,r3")
+ }
+ s1Cnt--
+
+ case "r4":
+ fallthrough
+ case "r5":
+ if s2Cnt == 0 {
+ t.Errorf("cse removed all of r4,r5")
+ }
+ s2Cnt--
+ default:
+ t.Errorf("cse removed %s, but shouldn't have", k)
+ }
+ }
+ }
+
+ if s1Cnt != 0 || s2Cnt != 0 {
+ t.Errorf("%d values missed during cse", s1Cnt+s2Cnt)
+ }
+}
+
+// TestZCSE tests the zero arg cse.
+func TestZCSE(t *testing.T) {
+ c := testConfig(t)
+
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("start", OpInitMem, TypeMem, 0, nil),
+ Valu("sp", OpSP, TypeBytePtr, 0, nil),
+ Valu("sb1", OpSB, TypeBytePtr, 0, nil),
+ Valu("sb2", OpSB, TypeBytePtr, 0, nil),
+ Valu("addr1", OpAddr, TypeInt64Ptr, 0, nil, "sb1"),
+ Valu("addr2", OpAddr, TypeInt64Ptr, 0, nil, "sb2"),
+ Valu("a1ld", OpLoad, TypeInt64, 0, nil, "addr1", "start"),
+ Valu("a2ld", OpLoad, TypeInt64, 0, nil, "addr2", "start"),
+ Valu("c1", OpConst64, TypeInt64, 1, nil),
+ Valu("r1", OpAdd64, TypeInt64, 0, nil, "a1ld", "c1"),
+ Valu("c2", OpConst64, TypeInt64, 1, nil),
+ Valu("r2", OpAdd64, TypeInt64, 0, nil, "a2ld", "c2"),
+ Valu("r3", OpAdd64, TypeInt64, 0, nil, "r1", "r2"),
+ Valu("raddr", OpAddr, TypeInt64Ptr, 0, nil, "sp"),
+ Valu("raddrdef", OpVarDef, TypeMem, 0, nil, "start"),
+ Valu("rstore", OpStore, TypeMem, 8, nil, "raddr", "r3", "raddrdef"),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("rstore")))
+
+ CheckFunc(fun.f)
+ zcse(fun.f)
+ deadcode(fun.f)
+ CheckFunc(fun.f)
+
+ if fun.values["c1"].Op != OpInvalid && fun.values["c2"].Op != OpInvalid {
+ t.Errorf("zcse should have removed c1 or c2")
+ }
+ if fun.values["sb1"].Op != OpInvalid && fun.values["sb2"].Op != OpInvalid {
+ t.Errorf("zcse should have removed sb1 or sb2")
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// findlive returns the reachable blocks and live values in f.
+func findlive(f *Func) (reachable []bool, live []bool) {
+ reachable = reachableBlocks(f)
+ live = liveValues(f, reachable)
+ return
+}
+
+// reachableBlocks returns the reachable blocks in f.
+func reachableBlocks(f *Func) []bool {
+ reachable := make([]bool, f.NumBlocks())
+ reachable[f.Entry.ID] = true
+ p := []*Block{f.Entry} // stack-like worklist
+ for len(p) > 0 {
+ // Pop a reachable block
+ b := p[len(p)-1]
+ p = p[:len(p)-1]
+ // Mark successors as reachable
+ s := b.Succs
+ if b.Kind == BlockFirst {
+ s = s[:1]
+ }
+ for _, c := range s {
+ if !reachable[c.ID] {
+ reachable[c.ID] = true
+ p = append(p, c) // push
+ }
+ }
+ }
+ return reachable
+}
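reachableBlocks is a standard depth-first worklist closure: pop a block, push any unseen successors. The same pattern on a bare adjacency map (names ours) makes the shape easy to see in isolation:

```go
package main

import "fmt"

// reach computes the set of nodes reachable from start using a
// stack-like worklist, as reachableBlocks does over f.Blocks.
func reach(succs map[int][]int, start int) map[int]bool {
	seen := map[int]bool{start: true}
	work := []int{start}
	for len(work) > 0 {
		// Pop a reachable node.
		n := work[len(work)-1]
		work = work[:len(work)-1]
		// Mark its successors reachable and queue unseen ones.
		for _, s := range succs[n] {
			if !seen[s] {
				seen[s] = true
				work = append(work, s)
			}
		}
	}
	return seen
}

func main() {
	// Node 4 points into the graph but nothing reaches it from 1.
	succs := map[int][]int{1: {2}, 2: {3}, 4: {1}}
	seen := reach(succs, 1)
	fmt.Println(seen[3], seen[4]) // true false
}
```

Marking before pushing (rather than when popping) guarantees each node enters the worklist at most once, keeping the walk linear in edges.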
+
+// liveValues returns the live values in f.
+// reachable is a map from block ID to whether the block is reachable.
+func liveValues(f *Func, reachable []bool) []bool {
+ live := make([]bool, f.NumValues())
+
+ // After regalloc, consider all values to be live.
+ // See the comment at the top of regalloc.go and in deadcode for details.
+ if f.RegAlloc != nil {
+ for i := range live {
+ live[i] = true
+ }
+ return live
+ }
+
+ // Find all live values
+ var q []*Value // stack-like worklist of unscanned values
+
+ // Starting set: all control values of reachable blocks are live.
+ for _, b := range f.Blocks {
+ if !reachable[b.ID] {
+ continue
+ }
+ if v := b.Control; v != nil && !live[v.ID] {
+ live[v.ID] = true
+ q = append(q, v)
+ }
+ }
+
+ // Compute transitive closure of live values.
+ for len(q) > 0 {
+ // pop a reachable value
+ v := q[len(q)-1]
+ q = q[:len(q)-1]
+ for i, x := range v.Args {
+ if v.Op == OpPhi && !reachable[v.Block.Preds[i].ID] {
+ continue
+ }
+ if !live[x.ID] {
+ live[x.ID] = true
+ q = append(q, x) // push
+ }
+ }
+ }
+
+ return live
+}
+
+// deadcode removes dead code from f.
+func deadcode(f *Func) {
+ // deadcode after regalloc is forbidden for now. Regalloc
+ // doesn't quite generate legal SSA which will lead to some
+ // required moves being eliminated. See the comment at the
+ // top of regalloc.go for details.
+ if f.RegAlloc != nil {
+ f.Fatalf("deadcode after regalloc")
+ }
+
+ // Find reachable blocks.
+ reachable := reachableBlocks(f)
+
+ // Get rid of edges from dead to live code.
+ for _, b := range f.Blocks {
+ if reachable[b.ID] {
+ continue
+ }
+ for _, c := range b.Succs {
+ if reachable[c.ID] {
+ c.removePred(b)
+ }
+ }
+ }
+
+ // Get rid of dead edges from live code.
+ for _, b := range f.Blocks {
+ if !reachable[b.ID] {
+ continue
+ }
+ if b.Kind != BlockFirst {
+ continue
+ }
+ c := b.Succs[1]
+ b.Succs[1] = nil
+ b.Succs = b.Succs[:1]
+ b.Kind = BlockPlain
+ b.Likely = BranchUnknown
+
+ if reachable[c.ID] {
+ // Note: c must be reachable through some other edge.
+ c.removePred(b)
+ }
+ }
+
+ // Splice out any copies introduced during dead block removal.
+ copyelim(f)
+
+ // Find live values.
+ live := liveValues(f, reachable)
+
+ // Remove dead & duplicate entries from namedValues map.
+ s := f.newSparseSet(f.NumValues())
+ defer f.retSparseSet(s)
+ i := 0
+ for _, name := range f.Names {
+ j := 0
+ s.clear()
+ values := f.NamedValues[name]
+ for _, v := range values {
+ if live[v.ID] && !s.contains(v.ID) {
+ values[j] = v
+ j++
+ s.add(v.ID)
+ }
+ }
+ if j == 0 {
+ delete(f.NamedValues, name)
+ } else {
+ f.Names[i] = name
+ i++
+ for k := len(values) - 1; k >= j; k-- {
+ values[k] = nil
+ }
+ f.NamedValues[name] = values[:j]
+ }
+ }
+ for k := len(f.Names) - 1; k >= i; k-- {
+ f.Names[k] = LocalSlot{}
+ }
+ f.Names = f.Names[:i]
+
+ // Remove dead values from blocks' value list. Return dead
+ // values to the allocator.
+ for _, b := range f.Blocks {
+ i := 0
+ for _, v := range b.Values {
+ if live[v.ID] {
+ b.Values[i] = v
+ i++
+ } else {
+ f.freeValue(v)
+ }
+ }
+ // aid GC
+ tail := b.Values[i:]
+ for j := range tail {
+ tail[j] = nil
+ }
+ b.Values = b.Values[:i]
+ }
+
+ // Remove unreachable blocks. Return dead blocks to allocator.
+ i = 0
+ for _, b := range f.Blocks {
+ if reachable[b.ID] {
+ f.Blocks[i] = b
+ i++
+ } else {
+ if len(b.Values) > 0 {
+ b.Fatalf("live values in unreachable block %v: %v", b, b.Values)
+ }
+ f.freeBlock(b)
+ }
+ }
+ // zero remainder to help GC
+ tail := f.Blocks[i:]
+ for j := range tail {
+ tail[j] = nil
+ }
+ f.Blocks = f.Blocks[:i]
+}
+
+// removePred removes the predecessor p from b's predecessor list.
+func (b *Block) removePred(p *Block) {
+ var i int
+ found := false
+ for j, q := range b.Preds {
+ if q == p {
+ i = j
+ found = true
+ break
+ }
+ }
+ // TODO: the above loop could make the deadcode pass take quadratic time
+ if !found {
+ b.Fatalf("can't find predecessor %v of %v\n", p, b)
+ }
+
+ n := len(b.Preds) - 1
+ b.Preds[i] = b.Preds[n]
+ b.Preds[n] = nil // aid GC
+ b.Preds = b.Preds[:n]
+
+ // rewrite phi ops to match the new predecessor list
+ for _, v := range b.Values {
+ if v.Op != OpPhi {
+ continue
+ }
+ v.Args[i] = v.Args[n]
+ v.Args[n] = nil // aid GC
+ v.Args = v.Args[:n]
+ phielimValue(v)
+ // Note: this is trickier than it looks. Replacing
+ // a Phi with a Copy can in general cause problems because
+ // Phi and Copy don't have exactly the same semantics.
+ // Phi arguments always come from a predecessor block,
+ // whereas copies don't. This matters in loops like:
+ // 1: x = (Phi y)
+ // y = (Add x 1)
+ // goto 1
+ // If we replace Phi->Copy, we get
+ // 1: x = (Copy y)
+ // y = (Add x 1)
+ // goto 1
+ // (Phi y) refers to the *previous* value of y, whereas
+ // (Copy y) refers to the *current* value of y.
+ // The modified code has a cycle and the scheduler
+ // will barf on it.
+ //
+ // Fortunately, this situation can only happen for dead
+ // code loops. We know the code we're working with is
+ // not dead, so we're ok.
+ // Proof: If we have a potential bad cycle, we have a
+ // situation like this:
+ // x = (Phi z)
+ // y = (op1 x ...)
+ // z = (op2 y ...)
+ // Where opX are not Phi ops. But such a situation
+ // implies a cycle in the dominator graph. In the
+ // example, x.Block dominates y.Block, y.Block dominates
+ // z.Block, and z.Block dominates x.Block (treating
+ // "dominates" as reflexive). Cycles in the dominator
+ // graph can only happen in an unreachable cycle.
+ }
+}
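removePred deletes an element from an unordered slice by swapping in the last entry, and then applies the identical swap to every phi's argument list so argument i still corresponds to predecessor i. A sketch of that O(1) unordered delete on a plain slice (function name ours):

```go
package main

import "fmt"

// removeAt deletes s[i] by moving the last element into its place,
// the same constant-time trick removePred uses on b.Preds and on each
// phi's Args. Order is not preserved, which is fine for these lists.
func removeAt(s []int, i int) []int {
	n := len(s) - 1
	s[i] = s[n]
	s[n] = 0 // zero the vacated slot (aids GC when elements are pointers)
	return s[:n]
}

func main() {
	s := []int{10, 20, 30, 40}
	s = removeAt(s, 1)
	fmt.Println(s) // [10 40 30]
}
```

Because the predecessor list and every phi argument list get the same swap, their index correspondence survives the deletion without any shifting.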
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import "testing"
+
+func TestDeadLoop(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")),
+ // dead loop
+ Bloc("deadblock",
+ // dead value in dead block
+ Valu("deadval", OpConstBool, TypeBool, 1, nil),
+ If("deadval", "deadblock", "exit")))
+
+ CheckFunc(fun.f)
+ Deadcode(fun.f)
+ CheckFunc(fun.f)
+
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["deadblock"] {
+ t.Errorf("dead block not removed")
+ }
+ for _, v := range b.Values {
+ if v == fun.values["deadval"] {
+ t.Errorf("control value of dead block not removed")
+ }
+ }
+ }
+}
+
+func TestDeadValue(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("deadval", OpConst64, TypeInt64, 37, nil),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ Deadcode(fun.f)
+ CheckFunc(fun.f)
+
+ for _, b := range fun.f.Blocks {
+ for _, v := range b.Values {
+ if v == fun.values["deadval"] {
+ t.Errorf("dead value not removed")
+ }
+ }
+ }
+}
+
+func TestNeverTaken(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("cond", OpConstBool, TypeBool, 0, nil),
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ If("cond", "then", "else")),
+ Bloc("then",
+ Goto("exit")),
+ Bloc("else",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ Opt(fun.f)
+ Deadcode(fun.f)
+ CheckFunc(fun.f)
+
+ if fun.blocks["entry"].Kind != BlockPlain {
+ t.Errorf("if(false) not simplified")
+ }
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["then"] {
+ t.Errorf("then block still present")
+ }
+ for _, v := range b.Values {
+ if v == fun.values["cond"] {
+ t.Errorf("constant condition still present")
+ }
+ }
+ }
+}
+
+func TestNestedDeadBlocks(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("cond", OpConstBool, TypeBool, 0, nil),
+ If("cond", "b2", "b4")),
+ Bloc("b2",
+ If("cond", "b3", "b4")),
+ Bloc("b3",
+ If("cond", "b3", "b4")),
+ Bloc("b4",
+ If("cond", "b3", "exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ Opt(fun.f)
+ CheckFunc(fun.f)
+ Deadcode(fun.f)
+ CheckFunc(fun.f)
+ if fun.blocks["entry"].Kind != BlockPlain {
+ t.Errorf("if(false) not simplified")
+ }
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["b2"] {
+ t.Errorf("b2 block still present")
+ }
+ if b == fun.blocks["b3"] {
+ t.Errorf("b3 block still present")
+ }
+ for _, v := range b.Values {
+ if v == fun.values["cond"] {
+ t.Errorf("constant condition still present")
+ }
+ }
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// dse does dead-store elimination on the Function.
+// Dead stores are those which are unconditionally followed by
+// another store to the same location, with no intervening load.
+// This implementation only works within a basic block. TODO: use something more global.
+func dse(f *Func) {
+ var stores []*Value
+ loadUse := f.newSparseSet(f.NumValues())
+ defer f.retSparseSet(loadUse)
+ storeUse := f.newSparseSet(f.NumValues())
+ defer f.retSparseSet(storeUse)
+ shadowed := f.newSparseSet(f.NumValues())
+ defer f.retSparseSet(shadowed)
+ for _, b := range f.Blocks {
+ // Find all the stores in this block. Categorize their uses:
+ // loadUse contains stores which are used by a subsequent load.
+ // storeUse contains stores which are used by a subsequent store.
+ loadUse.clear()
+ storeUse.clear()
+ stores = stores[:0]
+ for _, v := range b.Values {
+ if v.Op == OpPhi {
+ // Ignore phis - they will always be first and can't be eliminated
+ continue
+ }
+ if v.Type.IsMemory() {
+ stores = append(stores, v)
+ for _, a := range v.Args {
+ if a.Block == b && a.Type.IsMemory() {
+ storeUse.add(a.ID)
+ if v.Op != OpStore && v.Op != OpZero && v.Op != OpVarDef && v.Op != OpVarKill {
+ // CALL, DUFFCOPY, etc. are both
+ // reads and writes.
+ loadUse.add(a.ID)
+ }
+ }
+ }
+ } else {
+ for _, a := range v.Args {
+ if a.Block == b && a.Type.IsMemory() {
+ loadUse.add(a.ID)
+ }
+ }
+ }
+ }
+ if len(stores) == 0 {
+ continue
+ }
+
+ // find last store in the block
+ var last *Value
+ for _, v := range stores {
+ if storeUse.contains(v.ID) {
+ continue
+ }
+ if last != nil {
+ b.Fatalf("two final stores - simultaneous live stores %s %s", last, v)
+ }
+ last = v
+ }
+ if last == nil {
+ b.Fatalf("no last store found - cycle?")
+ }
+
+ // Walk backwards looking for dead stores. Keep track of shadowed addresses.
+ // An "address" is an SSA Value which encodes both the address and size of
+ // the write. This code will not remove dead stores to the same address
+ // of different types.
+ shadowed.clear()
+ v := last
+
+ walkloop:
+ if loadUse.contains(v.ID) {
+ // Someone might be reading this memory state.
+ // Clear all shadowed addresses.
+ shadowed.clear()
+ }
+ if v.Op == OpStore || v.Op == OpZero {
+ if shadowed.contains(v.Args[0].ID) {
+ // Modify store into a copy
+ if v.Op == OpStore {
+ // store addr value mem
+ v.SetArgs1(v.Args[2])
+ } else {
+ // zero addr mem
+ sz := v.Args[0].Type.Elem().Size()
+ if v.AuxInt != sz {
+ f.Fatalf("mismatched zero/store sizes: %d and %d [%s]",
+ v.AuxInt, sz, v.LongString())
+ }
+ v.SetArgs1(v.Args[1])
+ }
+ v.Aux = nil
+ v.AuxInt = 0
+ v.Op = OpCopy
+ } else {
+ shadowed.add(v.Args[0].ID)
+ }
+ }
+ // walk to previous store
+ if v.Op == OpPhi {
+ continue // At start of block. Move on to next block.
+ }
+ for _, a := range v.Args {
+ if a.Block == b && a.Type.IsMemory() {
+ v = a
+ goto walkloop
+ }
+ }
+ }
+}
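The core of dse is a backward walk with a "shadowed" set: a store is dead if a later store to the same address happens before any load of that memory. A simplified sketch over a bare list of write addresses (names ours; the real pass also tracks loads, sizes, and memory chains):

```go
package main

import "fmt"

// lastWins scans writes back-to-front and keeps only the last write
// to each address, the shadowing idea behind dse stripped of loads
// and block structure.
func lastWins(addrs []string) []bool {
	shadowed := map[string]bool{}
	live := make([]bool, len(addrs))
	for i := len(addrs) - 1; i >= 0; i-- {
		if !shadowed[addrs[i]] {
			live[i] = true // last write to this address: keep it
			shadowed[addrs[i]] = true
		}
		// An earlier write to an already-shadowed address stays dead.
	}
	return live
}

func main() {
	// Writes in program order: a1, a2, a1. The first a1 write is dead.
	fmt.Println(lastWins([]string{"a1", "a2", "a1"})) // [false true true]
}
```

The real pass must also clear the shadowed set whenever a load might observe the memory state, which is exactly what the loadUse check in the walkloop does.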
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import "testing"
+
+func TestDeadStore(t *testing.T) {
+ c := testConfig(t)
+ elemType := &TypeImpl{Size_: 8, Name: "testtype"}
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr", Elem_: elemType} // dummy for testing
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("start", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("v", OpConstBool, TypeBool, 1, nil),
+ Valu("addr1", OpAddr, ptrType, 0, nil, "sb"),
+ Valu("addr2", OpAddr, ptrType, 0, nil, "sb"),
+ Valu("addr3", OpAddr, ptrType, 0, nil, "sb"),
+ Valu("zero1", OpZero, TypeMem, 8, nil, "addr3", "start"),
+ Valu("store1", OpStore, TypeMem, 1, nil, "addr1", "v", "zero1"),
+ Valu("store2", OpStore, TypeMem, 1, nil, "addr2", "v", "store1"),
+ Valu("store3", OpStore, TypeMem, 1, nil, "addr1", "v", "store2"),
+ Valu("store4", OpStore, TypeMem, 1, nil, "addr3", "v", "store3"),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("store3")))
+
+ CheckFunc(fun.f)
+ dse(fun.f)
+ CheckFunc(fun.f)
+
+ v1 := fun.values["store1"]
+ if v1.Op != OpCopy {
+ t.Errorf("dead store not removed")
+ }
+
+ v2 := fun.values["zero1"]
+ if v2.Op != OpCopy {
+ t.Errorf("dead store (zero) not removed")
+ }
+}
+
+func TestDeadStorePhi(t *testing.T) {
+ // make sure we don't get into an infinite loop with phi values.
+ c := testConfig(t)
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("start", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("v", OpConstBool, TypeBool, 1, nil),
+ Valu("addr", OpAddr, ptrType, 0, nil, "sb"),
+ Goto("loop")),
+ Bloc("loop",
+ Valu("phi", OpPhi, TypeMem, 0, nil, "start", "store"),
+ Valu("store", OpStore, TypeMem, 1, nil, "addr", "v", "phi"),
+ If("v", "loop", "exit")),
+ Bloc("exit",
+ Exit("store")))
+
+ CheckFunc(fun.f)
+ dse(fun.f)
+ CheckFunc(fun.f)
+}
+
+func TestDeadStoreTypes(t *testing.T) {
+ // Make sure a narrow store can't shadow a wider one. We test an even
+ // stronger restriction, that one store can't shadow another unless the
+ // types of the address fields are identical (where identicalness is
+ // decided by the CSE pass).
+ c := testConfig(t)
+ t1 := &TypeImpl{Size_: 8, Ptr: true, Name: "t1"}
+ t2 := &TypeImpl{Size_: 4, Ptr: true, Name: "t2"}
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("start", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("v", OpConstBool, TypeBool, 1, nil),
+ Valu("addr1", OpAddr, t1, 0, nil, "sb"),
+ Valu("addr2", OpAddr, t2, 0, nil, "sb"),
+ Valu("store1", OpStore, TypeMem, 1, nil, "addr1", "v", "start"),
+ Valu("store2", OpStore, TypeMem, 1, nil, "addr2", "v", "store1"),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("store2")))
+
+ CheckFunc(fun.f)
+ cse(fun.f)
+ dse(fun.f)
+ CheckFunc(fun.f)
+
+ v := fun.values["store1"]
+ if v.Op == OpCopy {
+ t.Errorf("store %s incorrectly removed", v)
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// decomposeBuiltIn converts phi ops on compound builtin types into phi
+// ops on simple types.
+// (The remaining compound ops are decomposed with rewrite rules.)
+func decomposeBuiltIn(f *Func) {
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ if v.Op != OpPhi {
+ continue
+ }
+ decomposeBuiltInPhi(v)
+ }
+ }
+
+ // Split up named values into their components.
+ // NOTE: the component values we are making are dead at this point.
+ // We must do the opt pass before any deadcode elimination or we will
+ // lose the name->value correspondence.
+ for _, name := range f.Names {
+ t := name.Type
+ switch {
+ case t.IsComplex():
+ var elemType Type
+ if t.Size() == 16 {
+ elemType = f.Config.fe.TypeFloat64()
+ } else {
+ elemType = f.Config.fe.TypeFloat32()
+ }
+ rName := LocalSlot{name.N, elemType, name.Off}
+ iName := LocalSlot{name.N, elemType, name.Off + elemType.Size()}
+ f.Names = append(f.Names, rName, iName)
+ for _, v := range f.NamedValues[name] {
+ r := v.Block.NewValue1(v.Line, OpComplexReal, elemType, v)
+ i := v.Block.NewValue1(v.Line, OpComplexImag, elemType, v)
+ f.NamedValues[rName] = append(f.NamedValues[rName], r)
+ f.NamedValues[iName] = append(f.NamedValues[iName], i)
+ }
+ case t.IsString():
+ ptrType := f.Config.fe.TypeBytePtr()
+ lenType := f.Config.fe.TypeInt()
+ ptrName := LocalSlot{name.N, ptrType, name.Off}
+ lenName := LocalSlot{name.N, lenType, name.Off + f.Config.PtrSize}
+ f.Names = append(f.Names, ptrName, lenName)
+ for _, v := range f.NamedValues[name] {
+ ptr := v.Block.NewValue1(v.Line, OpStringPtr, ptrType, v)
+ len := v.Block.NewValue1(v.Line, OpStringLen, lenType, v)
+ f.NamedValues[ptrName] = append(f.NamedValues[ptrName], ptr)
+ f.NamedValues[lenName] = append(f.NamedValues[lenName], len)
+ }
+ case t.IsSlice():
+ ptrType := f.Config.fe.TypeBytePtr()
+ lenType := f.Config.fe.TypeInt()
+ ptrName := LocalSlot{name.N, ptrType, name.Off}
+ lenName := LocalSlot{name.N, lenType, name.Off + f.Config.PtrSize}
+ capName := LocalSlot{name.N, lenType, name.Off + 2*f.Config.PtrSize}
+ f.Names = append(f.Names, ptrName, lenName, capName)
+ for _, v := range f.NamedValues[name] {
+ ptr := v.Block.NewValue1(v.Line, OpSlicePtr, ptrType, v)
+ len := v.Block.NewValue1(v.Line, OpSliceLen, lenType, v)
+ cap := v.Block.NewValue1(v.Line, OpSliceCap, lenType, v)
+ f.NamedValues[ptrName] = append(f.NamedValues[ptrName], ptr)
+ f.NamedValues[lenName] = append(f.NamedValues[lenName], len)
+ f.NamedValues[capName] = append(f.NamedValues[capName], cap)
+ }
+ case t.IsInterface():
+ ptrType := f.Config.fe.TypeBytePtr()
+ typeName := LocalSlot{name.N, ptrType, name.Off}
+ dataName := LocalSlot{name.N, ptrType, name.Off + f.Config.PtrSize}
+ f.Names = append(f.Names, typeName, dataName)
+ for _, v := range f.NamedValues[name] {
+ typ := v.Block.NewValue1(v.Line, OpITab, ptrType, v)
+ data := v.Block.NewValue1(v.Line, OpIData, ptrType, v)
+ f.NamedValues[typeName] = append(f.NamedValues[typeName], typ)
+ f.NamedValues[dataName] = append(f.NamedValues[dataName], data)
+ }
+ case t.Size() > f.Config.IntSize:
+ f.Unimplementedf("undecomposed named type %s", t)
+ }
+ }
+}
+
+func decomposeBuiltInPhi(v *Value) {
+ // TODO: decompose 64-bit ops on 32-bit archs?
+ switch {
+ case v.Type.IsComplex():
+ decomposeComplexPhi(v)
+ case v.Type.IsString():
+ decomposeStringPhi(v)
+ case v.Type.IsSlice():
+ decomposeSlicePhi(v)
+ case v.Type.IsInterface():
+ decomposeInterfacePhi(v)
+ case v.Type.Size() > v.Block.Func.Config.IntSize:
+ v.Unimplementedf("undecomposed type %s", v.Type)
+ }
+}
+
+func decomposeStringPhi(v *Value) {
+ fe := v.Block.Func.Config.fe
+ ptrType := fe.TypeBytePtr()
+ lenType := fe.TypeInt()
+
+ ptr := v.Block.NewValue0(v.Line, OpPhi, ptrType)
+ len := v.Block.NewValue0(v.Line, OpPhi, lenType)
+ for _, a := range v.Args {
+ ptr.AddArg(a.Block.NewValue1(v.Line, OpStringPtr, ptrType, a))
+ len.AddArg(a.Block.NewValue1(v.Line, OpStringLen, lenType, a))
+ }
+ v.reset(OpStringMake)
+ v.AddArg(ptr)
+ v.AddArg(len)
+}
+
+func decomposeSlicePhi(v *Value) {
+ fe := v.Block.Func.Config.fe
+ ptrType := fe.TypeBytePtr()
+ lenType := fe.TypeInt()
+
+ ptr := v.Block.NewValue0(v.Line, OpPhi, ptrType)
+ len := v.Block.NewValue0(v.Line, OpPhi, lenType)
+ cap := v.Block.NewValue0(v.Line, OpPhi, lenType)
+ for _, a := range v.Args {
+ ptr.AddArg(a.Block.NewValue1(v.Line, OpSlicePtr, ptrType, a))
+ len.AddArg(a.Block.NewValue1(v.Line, OpSliceLen, lenType, a))
+ cap.AddArg(a.Block.NewValue1(v.Line, OpSliceCap, lenType, a))
+ }
+ v.reset(OpSliceMake)
+ v.AddArg(ptr)
+ v.AddArg(len)
+ v.AddArg(cap)
+}
+
+func decomposeComplexPhi(v *Value) {
+ fe := v.Block.Func.Config.fe
+ var partType Type
+ switch z := v.Type.Size(); z {
+ case 8:
+ partType = fe.TypeFloat32()
+ case 16:
+ partType = fe.TypeFloat64()
+ default:
+ v.Fatalf("decomposeComplexPhi: bad complex size %d", z)
+ }
+
+ real := v.Block.NewValue0(v.Line, OpPhi, partType)
+ imag := v.Block.NewValue0(v.Line, OpPhi, partType)
+ for _, a := range v.Args {
+ real.AddArg(a.Block.NewValue1(v.Line, OpComplexReal, partType, a))
+ imag.AddArg(a.Block.NewValue1(v.Line, OpComplexImag, partType, a))
+ }
+ v.reset(OpComplexMake)
+ v.AddArg(real)
+ v.AddArg(imag)
+}
+
+func decomposeInterfacePhi(v *Value) {
+ ptrType := v.Block.Func.Config.fe.TypeBytePtr()
+
+ itab := v.Block.NewValue0(v.Line, OpPhi, ptrType)
+ data := v.Block.NewValue0(v.Line, OpPhi, ptrType)
+ for _, a := range v.Args {
+ itab.AddArg(a.Block.NewValue1(v.Line, OpITab, ptrType, a))
+ data.AddArg(a.Block.NewValue1(v.Line, OpIData, ptrType, a))
+ }
+ v.reset(OpIMake)
+ v.AddArg(itab)
+ v.AddArg(data)
+}
+
+func decomposeUser(f *Func) {
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ if v.Op != OpPhi {
+ continue
+ }
+ decomposeUserPhi(v)
+ }
+ }
+ // Split up named values into their components.
+ // NOTE: the component values we are making are dead at this point.
+ // We must do the opt pass before any deadcode elimination or we will
+ // lose the name->value correspondence.
+ i := 0
+ for _, name := range f.Names {
+ t := name.Type
+ switch {
+ case t.IsStruct():
+ n := t.NumFields()
+ for _, v := range f.NamedValues[name] {
+ for i := int64(0); i < n; i++ {
+ fname := LocalSlot{name.N, t.FieldType(i), name.Off + t.FieldOff(i)} // TODO: use actual field name?
+ x := v.Block.NewValue1I(v.Line, OpStructSelect, t.FieldType(i), i, v)
+ f.NamedValues[fname] = append(f.NamedValues[fname], x)
+ }
+ }
+ delete(f.NamedValues, name)
+ default:
+ f.Names[i] = name
+ i++
+ }
+ }
+ f.Names = f.Names[:i]
+}
+
+func decomposeUserPhi(v *Value) {
+ switch {
+ case v.Type.IsStruct():
+ decomposeStructPhi(v)
+ }
+ // TODO: Arrays of length 1?
+}
+
+func decomposeStructPhi(v *Value) {
+ t := v.Type
+ n := t.NumFields()
+ var fields [MaxStruct]*Value
+ for i := int64(0); i < n; i++ {
+ fields[i] = v.Block.NewValue0(v.Line, OpPhi, t.FieldType(i))
+ }
+ for _, a := range v.Args {
+ for i := int64(0); i < n; i++ {
+ fields[i].AddArg(a.Block.NewValue1I(v.Line, OpStructSelect, t.FieldType(i), i, a))
+ }
+ }
+ v.reset(StructMakeOp(n))
+ v.AddArgs(fields[:n]...)
+
+ // Recursively decompose phis for each field.
+ for _, f := range fields[:n] {
+ if f.Type.IsStruct() {
+ decomposeStructPhi(f)
+ }
+ }
+}
+
+// MaxStruct is the maximum number of fields a struct
+// can have and still be SSAable.
+const MaxStruct = 4
+
+// StructMakeOp returns the opcode to construct a struct with the
+// given number of fields.
+func StructMakeOp(nf int64) Op {
+ switch nf {
+ case 0:
+ return OpStructMake0
+ case 1:
+ return OpStructMake1
+ case 2:
+ return OpStructMake2
+ case 3:
+ return OpStructMake3
+ case 4:
+ return OpStructMake4
+ }
+ panic("too many fields in an SSAable struct")
+}
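The per-component layout used by the pass above can be summarized in a standalone sketch. It assumes a 64-bit target (so `PtrSize` is 8, matching the offsets the code computes from `f.Config.PtrSize`); this is plain Go, not the ssa package API:

```go
package main

import "fmt"

const ptrSize = 8 // assuming a 64-bit target, as decomposeBuiltIn reads from f.Config.PtrSize

// componentOffsets returns the offsets, relative to the value's own
// slot, at which decomposeBuiltIn places the parts of a compound type.
func componentOffsets(kind string) []int64 {
	switch kind {
	case "complex128":
		return []int64{0, 8} // real, imag (two float64s)
	case "string":
		return []int64{0, ptrSize} // ptr, len
	case "slice":
		return []int64{0, ptrSize, 2 * ptrSize} // ptr, len, cap
	case "interface":
		return []int64{0, ptrSize} // itab, data
	}
	return nil
}

func main() {
	fmt.Println(componentOffsets("slice")) // prints [0 8 16]
}
```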
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// mark values
+const (
+ notFound = 0 // block has not been discovered yet
+ notExplored = 1 // discovered and in queue, outedges not processed yet
+ explored = 2 // discovered and in queue, outedges processed
+ done = 3 // all done, in output ordering
+)
+
+// This file contains code to compute the dominator tree
+// of a control-flow graph.
+
+// postorder computes a postorder traversal ordering for the
+// basic blocks in f. Unreachable blocks will not appear.
+func postorder(f *Func) []*Block {
+ mark := make([]byte, f.NumBlocks())
+
+ // result ordering
+ var order []*Block
+
+ // stack of blocks
+ var s []*Block
+ s = append(s, f.Entry)
+ mark[f.Entry.ID] = notExplored
+ for len(s) > 0 {
+ b := s[len(s)-1]
+ switch mark[b.ID] {
+ case explored:
+ // Children have all been visited. Pop & output block.
+ s = s[:len(s)-1]
+ mark[b.ID] = done
+ order = append(order, b)
+ case notExplored:
+ // Children have not been visited yet. Mark as explored
+ // and queue any children we haven't seen yet.
+ mark[b.ID] = explored
+ for _, c := range b.Succs {
+ if mark[c.ID] == notFound {
+ mark[c.ID] = notExplored
+ s = append(s, c)
+ }
+ }
+ default:
+ b.Fatalf("bad stack state %v %d", b, mark[b.ID])
+ }
+ }
+ return order
+}
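The same three-state, explicit-stack traversal can be run on a plain adjacency-list graph. This is a standalone sketch, not the ssa package's `Block` type; as in the pass, unreachable nodes never appear, and the entry lands last in the ordering:

```go
package main

import "fmt"

// Mark states, mirroring the pass above.
const (
	notFound    = 0 // not discovered yet
	notExplored = 1 // discovered, successors not processed
	explored    = 2 // successors processed
	done        = 3 // emitted into the ordering
)

// postorder computes a postorder over a graph given as adjacency
// lists, using the same explicit stack and mark states as the pass.
func postorder(succs [][]int, entry int) []int {
	mark := make([]byte, len(succs))
	var order, s []int
	s = append(s, entry)
	mark[entry] = notExplored
	for len(s) > 0 {
		b := s[len(s)-1]
		switch mark[b] {
		case explored:
			// Children all visited: pop and emit.
			s = s[:len(s)-1]
			mark[b] = done
			order = append(order, b)
		case notExplored:
			mark[b] = explored
			for _, c := range succs[b] {
				if mark[c] == notFound {
					mark[c] = notExplored
					s = append(s, c)
				}
			}
		}
	}
	return order
}

func main() {
	// 0 -> 1 -> 2, 0 -> 2; node 3 is unreachable and never appears.
	succs := [][]int{{1, 2}, {2}, {}, {0}}
	fmt.Println(postorder(succs, 0)) // prints [2 1 0]
}
```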
+
+type linkedBlocks func(*Block) []*Block
+
+const nscratchslices = 8
+
+// experimentally, functions with 512 or fewer blocks account
+// for 75% of memory (size) allocation for dominator computation
+// in make.bash.
+const minscratchblocks = 512
+
+func (cfg *Config) scratchBlocksForDom(maxBlockID int) (a, b, c, d, e, f, g, h []ID) {
+ tot := maxBlockID * nscratchslices
+ scratch := cfg.domblockstore
+ if len(scratch) < tot {
+		// req = max(1.5*tot, nscratchslices*minscratchblocks)
+ // 50% padding allows for graph growth in later phases.
+ req := (tot * 3) >> 1
+ if req < nscratchslices*minscratchblocks {
+ req = nscratchslices * minscratchblocks
+ }
+ scratch = make([]ID, req)
+ cfg.domblockstore = scratch
+ } else {
+ // Clear as much of scratch as we will (re)use
+ scratch = scratch[0:tot]
+ for i := range scratch {
+ scratch[i] = 0
+ }
+ }
+
+ a = scratch[0*maxBlockID : 1*maxBlockID]
+ b = scratch[1*maxBlockID : 2*maxBlockID]
+ c = scratch[2*maxBlockID : 3*maxBlockID]
+ d = scratch[3*maxBlockID : 4*maxBlockID]
+ e = scratch[4*maxBlockID : 5*maxBlockID]
+ f = scratch[5*maxBlockID : 6*maxBlockID]
+ g = scratch[6*maxBlockID : 7*maxBlockID]
+ h = scratch[7*maxBlockID : 8*maxBlockID]
+
+ return
+}
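The scratch-slice trick above, one allocation carved into several equal views, can be shown in isolation (a simplified sketch): a single `make` replaces eight separate ones, and all views share the backing array, which is also why the code must clear the reused prefix.

```go
package main

import "fmt"

// carve splits one backing slice into n equal views, the same
// allocation-saving trick as scratchBlocksForDom: one make call
// instead of n.
func carve(scratch []int, n, size int) [][]int {
	views := make([][]int, n)
	for i := range views {
		views[i] = scratch[i*size : (i+1)*size]
	}
	return views
}

func main() {
	scratch := make([]int, 3*4)
	views := carve(scratch, 3, 4)
	views[2][0] = 7 // writes land in the shared backing array
	fmt.Println(scratch[8]) // prints 7
}
```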
+
+// dfs performs a depth first search over the blocks starting at the set of
+// blocks in the entries list (in arbitrary order). dfnum contains a mapping
+// from block id to an int indicating the order the block was reached or
+// notFound if the block was not reached. order contains a mapping from dfnum
+// to block.
+func (f *Func) dfs(entries []*Block, succFn linkedBlocks, dfnum, order, parent []ID) (fromID []*Block) {
+ maxBlockID := entries[0].Func.NumBlocks()
+
+ fromID = make([]*Block, maxBlockID)
+
+ for _, entry := range entries[0].Func.Blocks {
+ eid := entry.ID
+ if fromID[eid] != nil {
+ panic("Colliding entry IDs")
+ }
+ fromID[eid] = entry
+ }
+
+ n := ID(0)
+ s := make([]*Block, 0, 256)
+ for _, entry := range entries {
+ if dfnum[entry.ID] != notFound {
+ continue // already found from a previous entry
+ }
+ s = append(s, entry)
+ parent[entry.ID] = entry.ID
+ for len(s) > 0 {
+ node := s[len(s)-1]
+ s = s[:len(s)-1]
+
+ n++
+ for _, w := range succFn(node) {
+ // if it has a dfnum, we've already visited it
+ if dfnum[w.ID] == notFound {
+ s = append(s, w)
+ parent[w.ID] = node.ID
+ dfnum[w.ID] = notExplored
+ }
+ }
+ dfnum[node.ID] = n
+ order[n] = node.ID
+ }
+ }
+
+ return
+}
+
+// dominators computes the dominator tree for f. It returns a slice
+// which maps block ID to the immediate dominator of that block.
+// Unreachable blocks map to nil. The entry block maps to nil.
+func dominators(f *Func) []*Block {
+ preds := func(b *Block) []*Block { return b.Preds }
+ succs := func(b *Block) []*Block { return b.Succs }
+
+ //TODO: benchmark and try to find criteria for swapping between
+ // dominatorsSimple and dominatorsLT
+ return f.dominatorsLT([]*Block{f.Entry}, preds, succs)
+}
+
+// postDominators computes the post-dominator tree for f.
+func postDominators(f *Func) []*Block {
+ preds := func(b *Block) []*Block { return b.Preds }
+ succs := func(b *Block) []*Block { return b.Succs }
+
+ if len(f.Blocks) == 0 {
+ return nil
+ }
+
+ // find the exit blocks
+ var exits []*Block
+ for i := len(f.Blocks) - 1; i >= 0; i-- {
+ switch f.Blocks[i].Kind {
+ case BlockExit, BlockRet, BlockRetJmp, BlockCall, BlockCheck:
+ exits = append(exits, f.Blocks[i])
+ }
+ }
+
+ // infinite loop with no exit
+ if exits == nil {
+ return make([]*Block, f.NumBlocks())
+ }
+ return f.dominatorsLT(exits, succs, preds)
+}
+
+// dominatorsLT runs Lengauer-Tarjan to compute a dominator tree starting at
+// entry and using predFn/succFn to find predecessors/successors to allow
+// computing both dominator and post-dominator trees.
+func (f *Func) dominatorsLT(entries []*Block, predFn linkedBlocks, succFn linkedBlocks) []*Block {
+ // Based on Lengauer-Tarjan from Modern Compiler Implementation in C -
+ // Appel with optimizations from Finding Dominators in Practice -
+ // Georgiadis
+
+ maxBlockID := entries[0].Func.NumBlocks()
+
+ dfnum, vertex, parent, semi, samedom, ancestor, best, bucket := f.Config.scratchBlocksForDom(maxBlockID)
+
+ // dfnum := make([]ID, maxBlockID) // conceptually int32, but punning for allocation purposes.
+ // vertex := make([]ID, maxBlockID)
+ // parent := make([]ID, maxBlockID)
+
+ // semi := make([]ID, maxBlockID)
+ // samedom := make([]ID, maxBlockID)
+ // ancestor := make([]ID, maxBlockID)
+ // best := make([]ID, maxBlockID)
+ // bucket := make([]ID, maxBlockID)
+
+ // Step 1. Carry out a depth first search of the problem graph. Number
+ // the vertices from 1 to n as they are reached during the search.
+ fromID := f.dfs(entries, succFn, dfnum, vertex, parent)
+
+ idom := make([]*Block, maxBlockID)
+
+ // Step 2. Compute the semidominators of all vertices by applying
+ // Theorem 4. Carry out the computation vertex by vertex in decreasing
+ // order by number.
+ for i := maxBlockID - 1; i > 0; i-- {
+ w := vertex[i]
+ if w == 0 {
+ continue
+ }
+
+ if dfnum[w] == notFound {
+ // skip unreachable node
+ continue
+ }
+
+ // Step 3. Implicitly define the immediate dominator of each
+ // vertex by applying Corollary 1. (reordered)
+ for v := bucket[w]; v != 0; v = bucket[v] {
+ u := eval(v, ancestor, semi, dfnum, best)
+ if semi[u] == semi[v] {
+ idom[v] = fromID[w] // true dominator
+ } else {
+ samedom[v] = u // v has same dominator as u
+ }
+ }
+
+ p := parent[w]
+ s := p // semidominator
+
+ var sp ID
+ // calculate the semidominator of w
+ for _, v := range predFn(fromID[w]) {
+ if dfnum[v.ID] == notFound {
+ // skip unreachable predecessor
+ continue
+ }
+
+ if dfnum[v.ID] <= dfnum[w] {
+ sp = v.ID
+ } else {
+ sp = semi[eval(v.ID, ancestor, semi, dfnum, best)]
+ }
+
+ if dfnum[sp] < dfnum[s] {
+ s = sp
+ }
+ }
+
+ // link
+ ancestor[w] = p
+ best[w] = w
+
+ semi[w] = s
+ if semi[s] != parent[s] {
+ bucket[w] = bucket[s]
+ bucket[s] = w
+ }
+ }
+
+ // Final pass of step 3
+ for v := bucket[0]; v != 0; v = bucket[v] {
+ idom[v] = fromID[bucket[0]]
+ }
+
+	// Step 4. Explicitly define the immediate dominator of each vertex,
+ // carrying out the computation vertex by vertex in increasing order by
+ // number.
+ for i := 1; i < maxBlockID-1; i++ {
+ w := vertex[i]
+ if w == 0 {
+ continue
+ }
+ // w has the same dominator as samedom[w]
+ if samedom[w] != 0 {
+ idom[w] = idom[samedom[w]]
+ }
+ }
+ return idom
+}
+
+// eval is the EVAL function from the Lengauer-Tarjan paper, with path compression.
+func eval(v ID, ancestor []ID, semi []ID, dfnum []ID, best []ID) ID {
+ a := ancestor[v]
+ if ancestor[a] != 0 {
+ bid := eval(a, ancestor, semi, dfnum, best)
+ ancestor[v] = ancestor[a]
+ if dfnum[semi[bid]] < dfnum[semi[best[v]]] {
+ best[v] = bid
+ }
+ }
+ return best[v]
+}
+
+// dominatorsSimple computes the dominator tree for f. It returns a slice
+// which maps block ID to the immediate dominator of that block.
+// Unreachable blocks map to nil. The entry block maps to nil.
+func dominatorsSimple(f *Func) []*Block {
+ // A simple algorithm for now
+ // Cooper, Harvey, Kennedy
+ idom := make([]*Block, f.NumBlocks())
+
+ // Compute postorder walk
+ post := postorder(f)
+
+ // Make map from block id to order index (for intersect call)
+ postnum := make([]int, f.NumBlocks())
+ for i, b := range post {
+ postnum[b.ID] = i
+ }
+
+ // Make the entry block a self-loop
+ idom[f.Entry.ID] = f.Entry
+ if postnum[f.Entry.ID] != len(post)-1 {
+ f.Fatalf("entry block %v not last in postorder", f.Entry)
+ }
+
+ // Compute relaxation of idom entries
+ for {
+ changed := false
+
+ for i := len(post) - 2; i >= 0; i-- {
+ b := post[i]
+ var d *Block
+ for _, p := range b.Preds {
+ if idom[p.ID] == nil {
+ continue
+ }
+ if d == nil {
+ d = p
+ continue
+ }
+ d = intersect(d, p, postnum, idom)
+ }
+ if d != idom[b.ID] {
+ idom[b.ID] = d
+ changed = true
+ }
+ }
+ if !changed {
+ break
+ }
+ }
+ // Set idom of entry block to nil instead of itself.
+ idom[f.Entry.ID] = nil
+ return idom
+}
+
+// intersect finds the closest dominator of both b and c.
+// It requires a postorder numbering of all the blocks.
+func intersect(b, c *Block, postnum []int, idom []*Block) *Block {
+ // TODO: This loop is O(n^2). See BenchmarkNilCheckDeep*.
+ for b != c {
+ if postnum[b.ID] < postnum[c.ID] {
+ b = idom[b.ID]
+ } else {
+ c = idom[c.ID]
+ }
+ }
+ return b
+}
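The two-pointer walk in intersect can be exercised on a small diamond CFG using plain ints for block IDs (a standalone sketch of the Cooper-Harvey-Kennedy step, not the ssa types). As in dominatorsSimple, the entry is a self-loop in idom and carries the highest postorder number:

```go
package main

import "fmt"

// intersect walks the two candidate dominators up the (partial)
// dominator tree until they meet, using postorder numbers to decide
// which side to advance: the same two-pointer walk as the pass.
func intersect(b, c int, postnum []int, idom []int) int {
	for b != c {
		if postnum[b] < postnum[c] {
			b = idom[b]
		} else {
			c = idom[c]
		}
	}
	return b
}

func main() {
	// Diamond CFG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
	// A valid postorder is 3, 1, 2, 0, so entry block 0 is numbered last.
	postnum := []int{3, 1, 2, 0}
	idom := []int{0, 0, 0, -1} // entry self-loop; block 3 not yet computed
	// Block 3's predecessors are 1 and 2; their dominators meet at 0.
	fmt.Println(intersect(1, 2, postnum, idom)) // prints 0
}
```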
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import "testing"
+
+func BenchmarkDominatorsLinear(b *testing.B) { benchmarkDominators(b, 10000, genLinear) }
+func BenchmarkDominatorsFwdBack(b *testing.B) { benchmarkDominators(b, 10000, genFwdBack) }
+func BenchmarkDominatorsManyPred(b *testing.B) { benchmarkDominators(b, 10000, genManyPred) }
+func BenchmarkDominatorsMaxPred(b *testing.B) { benchmarkDominators(b, 10000, genMaxPred) }
+func BenchmarkDominatorsMaxPredVal(b *testing.B) { benchmarkDominators(b, 10000, genMaxPredValue) }
+
+type blockGen func(size int) []bloc
+
+// genLinear creates an array of blocks that succeed one another
+// b_n -> [b_n+1].
+func genLinear(size int) []bloc {
+ var blocs []bloc
+ blocs = append(blocs,
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Goto(blockn(0)),
+ ),
+ )
+ for i := 0; i < size; i++ {
+ blocs = append(blocs, Bloc(blockn(i),
+ Goto(blockn(i+1))))
+ }
+
+ blocs = append(blocs,
+ Bloc(blockn(size), Goto("exit")),
+ Bloc("exit", Exit("mem")),
+ )
+
+ return blocs
+}
+
+// genFwdBack creates an array of blocks that alternate between
+// b_n -> [b_n+1], b_n -> [b_n+1, b_n-1], b_n -> [b_n+1, b_n+2].
+func genFwdBack(size int) []bloc {
+ var blocs []bloc
+ blocs = append(blocs,
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("p", OpConstBool, TypeBool, 1, nil),
+ Goto(blockn(0)),
+ ),
+ )
+ for i := 0; i < size; i++ {
+ switch i % 2 {
+ case 0:
+ blocs = append(blocs, Bloc(blockn(i),
+ If("p", blockn(i+1), blockn(i+2))))
+ case 1:
+ blocs = append(blocs, Bloc(blockn(i),
+ If("p", blockn(i+1), blockn(i-1))))
+ }
+ }
+
+ blocs = append(blocs,
+ Bloc(blockn(size), Goto("exit")),
+ Bloc("exit", Exit("mem")),
+ )
+
+ return blocs
+}
+
+// genManyPred creates an array of blocks where 1/3rd have a successor of the
+// first block, 1/3rd the last block, and the remaining third are plain.
+func genManyPred(size int) []bloc {
+ var blocs []bloc
+ blocs = append(blocs,
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("p", OpConstBool, TypeBool, 1, nil),
+ Goto(blockn(0)),
+ ),
+ )
+
+ // We want predecessor lists to be long, so 2/3rds of the blocks have a
+	// successor of the first or last block.
+ for i := 0; i < size; i++ {
+ switch i % 3 {
+ case 0:
+ blocs = append(blocs, Bloc(blockn(i),
+ Valu("a", OpConstBool, TypeBool, 1, nil),
+ Goto(blockn(i+1))))
+ case 1:
+ blocs = append(blocs, Bloc(blockn(i),
+ Valu("a", OpConstBool, TypeBool, 1, nil),
+ If("p", blockn(i+1), blockn(0))))
+ case 2:
+ blocs = append(blocs, Bloc(blockn(i),
+ Valu("a", OpConstBool, TypeBool, 1, nil),
+ If("p", blockn(i+1), blockn(size))))
+ }
+ }
+
+ blocs = append(blocs,
+ Bloc(blockn(size), Goto("exit")),
+ Bloc("exit", Exit("mem")),
+ )
+
+ return blocs
+}
+
+// genMaxPred maximizes the size of the 'exit' predecessor list.
+func genMaxPred(size int) []bloc {
+ var blocs []bloc
+ blocs = append(blocs,
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("p", OpConstBool, TypeBool, 1, nil),
+ Goto(blockn(0)),
+ ),
+ )
+
+ for i := 0; i < size; i++ {
+ blocs = append(blocs, Bloc(blockn(i),
+ If("p", blockn(i+1), "exit")))
+ }
+
+ blocs = append(blocs,
+ Bloc(blockn(size), Goto("exit")),
+ Bloc("exit", Exit("mem")),
+ )
+
+ return blocs
+}
+
+// genMaxPredValue is identical to genMaxPred but contains an
+// additional value.
+func genMaxPredValue(size int) []bloc {
+ var blocs []bloc
+ blocs = append(blocs,
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("p", OpConstBool, TypeBool, 1, nil),
+ Goto(blockn(0)),
+ ),
+ )
+
+ for i := 0; i < size; i++ {
+ blocs = append(blocs, Bloc(blockn(i),
+ Valu("a", OpConstBool, TypeBool, 1, nil),
+ If("p", blockn(i+1), "exit")))
+ }
+
+ blocs = append(blocs,
+ Bloc(blockn(size), Goto("exit")),
+ Bloc("exit", Exit("mem")),
+ )
+
+ return blocs
+}
+
+// sink for benchmark
+var domBenchRes []*Block
+
+func benchmarkDominators(b *testing.B, size int, bg blockGen) {
+ c := NewConfig("amd64", DummyFrontend{b}, nil, true)
+ fun := Fun(c, "entry", bg(size)...)
+
+ CheckFunc(fun.f)
+ b.SetBytes(int64(size))
+ b.ResetTimer()
+ for i := 0; i < b.N; i++ {
+ domBenchRes = dominators(fun.f)
+ }
+}
+
+type domFunc func(f *Func) []*Block
+
+// verifyDominators verifies that the dominators of fut (the function under
+// test), as determined by domFn, match the expected node -> dominator map.
+func verifyDominators(t *testing.T, fut fun, domFn domFunc, doms map[string]string) {
+ blockNames := map[*Block]string{}
+ for n, b := range fut.blocks {
+ blockNames[b] = n
+ }
+
+ calcDom := domFn(fut.f)
+
+ for n, d := range doms {
+ nblk, ok := fut.blocks[n]
+ if !ok {
+ t.Errorf("invalid block name %s", n)
+ }
+ dblk, ok := fut.blocks[d]
+ if !ok {
+ t.Errorf("invalid block name %s", d)
+ }
+
+ domNode := calcDom[nblk.ID]
+ switch {
+ case calcDom[nblk.ID] == dblk:
+ calcDom[nblk.ID] = nil
+ continue
+ case calcDom[nblk.ID] != dblk:
+ t.Errorf("expected %s as dominator of %s, found %s", d, n, blockNames[domNode])
+ default:
+ t.Fatal("unexpected dominator condition")
+ }
+ }
+
+ for id, d := range calcDom {
+ // If nil, we've already verified it
+ if d == nil {
+ continue
+ }
+ for _, b := range fut.blocks {
+ if int(b.ID) == id {
+ t.Errorf("unexpected dominator of %s for %s", blockNames[d], blockNames[b])
+ }
+ }
+ }
+}
+
+func TestDominatorsSingleBlock(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Exit("mem")))
+
+ doms := map[string]string{}
+
+ CheckFunc(fun.f)
+ verifyDominators(t, fun, dominators, doms)
+ verifyDominators(t, fun, dominatorsSimple, doms)
+}
+
+func TestDominatorsSimple(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Goto("a")),
+ Bloc("a",
+ Goto("b")),
+ Bloc("b",
+ Goto("c")),
+ Bloc("c",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ doms := map[string]string{
+ "a": "entry",
+ "b": "a",
+ "c": "b",
+ "exit": "c",
+ }
+
+ CheckFunc(fun.f)
+ verifyDominators(t, fun, dominators, doms)
+ verifyDominators(t, fun, dominatorsSimple, doms)
+}
+
+func TestDominatorsMultPredFwd(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("p", OpConstBool, TypeBool, 1, nil),
+ If("p", "a", "c")),
+ Bloc("a",
+ If("p", "b", "c")),
+ Bloc("b",
+ Goto("c")),
+ Bloc("c",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ doms := map[string]string{
+ "a": "entry",
+ "b": "a",
+ "c": "entry",
+ "exit": "c",
+ }
+
+ CheckFunc(fun.f)
+ verifyDominators(t, fun, dominators, doms)
+ verifyDominators(t, fun, dominatorsSimple, doms)
+}
+
+func TestDominatorsDeadCode(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("p", OpConstBool, TypeBool, 0, nil),
+ If("p", "b3", "b5")),
+ Bloc("b2", Exit("mem")),
+ Bloc("b3", Goto("b2")),
+ Bloc("b4", Goto("b2")),
+ Bloc("b5", Goto("b2")))
+
+ doms := map[string]string{
+ "b2": "entry",
+ "b3": "entry",
+ "b5": "entry",
+ }
+
+ CheckFunc(fun.f)
+ verifyDominators(t, fun, dominators, doms)
+ verifyDominators(t, fun, dominatorsSimple, doms)
+}
+
+func TestDominatorsMultPredRev(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Goto("first")),
+ Bloc("first",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("p", OpConstBool, TypeBool, 1, nil),
+ Goto("a")),
+ Bloc("a",
+ If("p", "b", "first")),
+ Bloc("b",
+ Goto("c")),
+ Bloc("c",
+ If("p", "exit", "b")),
+ Bloc("exit",
+ Exit("mem")))
+
+ doms := map[string]string{
+ "first": "entry",
+ "a": "first",
+ "b": "a",
+ "c": "b",
+ "exit": "c",
+ }
+
+ CheckFunc(fun.f)
+ verifyDominators(t, fun, dominators, doms)
+ verifyDominators(t, fun, dominatorsSimple, doms)
+}
+
+func TestDominatorsMultPred(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("p", OpConstBool, TypeBool, 1, nil),
+ If("p", "a", "c")),
+ Bloc("a",
+ If("p", "b", "c")),
+ Bloc("b",
+ Goto("c")),
+ Bloc("c",
+ If("p", "b", "exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ doms := map[string]string{
+ "a": "entry",
+ "b": "entry",
+ "c": "entry",
+ "exit": "c",
+ }
+
+ CheckFunc(fun.f)
+ verifyDominators(t, fun, dominators, doms)
+ verifyDominators(t, fun, dominatorsSimple, doms)
+}
+
+func TestPostDominators(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("p", OpConstBool, TypeBool, 1, nil),
+ If("p", "a", "c")),
+ Bloc("a",
+ If("p", "b", "c")),
+ Bloc("b",
+ Goto("c")),
+ Bloc("c",
+ If("p", "b", "exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ doms := map[string]string{"entry": "c",
+ "a": "c",
+ "b": "c",
+ "c": "exit",
+ }
+
+ CheckFunc(fun.f)
+ verifyDominators(t, fun, postDominators, doms)
+}
+
+func TestInfiniteLoop(t *testing.T) {
+ c := testConfig(t)
+ // note lack of an exit block
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("p", OpConstBool, TypeBool, 1, nil),
+ Goto("a")),
+ Bloc("a",
+ Goto("b")),
+ Bloc("b",
+ Goto("a")))
+
+ CheckFunc(fun.f)
+ doms := map[string]string{"a": "entry",
+ "b": "a"}
+ verifyDominators(t, fun, dominators, doms)
+
+ // no exit block, so there are no post-dominators
+ postDoms := map[string]string{}
+ verifyDominators(t, fun, postDominators, postDoms)
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "cmd/internal/obj"
+ "testing"
+)
+
+var CheckFunc = checkFunc
+var PrintFunc = printFunc
+var Opt = opt
+var Deadcode = deadcode
+
+func testConfig(t *testing.T) *Config {
+ testCtxt := &obj.Link{}
+ return NewConfig("amd64", DummyFrontend{t}, testCtxt, true)
+}
+
+// DummyFrontend is a test-only frontend.
+// It assumes 64-bit integers and pointers.
+type DummyFrontend struct {
+ t testing.TB
+}
+
+func (DummyFrontend) StringData(s string) interface{} {
+ return nil
+}
+func (DummyFrontend) Auto(t Type) GCNode {
+ return nil
+}
+func (DummyFrontend) Line(line int32) string {
+ return "unknown.go:0"
+}
+
+func (d DummyFrontend) Logf(msg string, args ...interface{}) { d.t.Logf(msg, args...) }
+func (d DummyFrontend) Log() bool { return true }
+
+func (d DummyFrontend) Fatalf(line int32, msg string, args ...interface{}) { d.t.Fatalf(msg, args...) }
+func (d DummyFrontend) Unimplementedf(line int32, msg string, args ...interface{}) {
+ d.t.Fatalf(msg, args...)
+}
+func (d DummyFrontend) Warnl(line int, msg string, args ...interface{}) { d.t.Logf(msg, args...) }
+func (d DummyFrontend) Debug_checknil() bool { return false }
+
+func (d DummyFrontend) TypeBool() Type { return TypeBool }
+func (d DummyFrontend) TypeInt8() Type { return TypeInt8 }
+func (d DummyFrontend) TypeInt16() Type { return TypeInt16 }
+func (d DummyFrontend) TypeInt32() Type { return TypeInt32 }
+func (d DummyFrontend) TypeInt64() Type { return TypeInt64 }
+func (d DummyFrontend) TypeUInt8() Type { return TypeUInt8 }
+func (d DummyFrontend) TypeUInt16() Type { return TypeUInt16 }
+func (d DummyFrontend) TypeUInt32() Type { return TypeUInt32 }
+func (d DummyFrontend) TypeUInt64() Type { return TypeUInt64 }
+func (d DummyFrontend) TypeFloat32() Type { return TypeFloat32 }
+func (d DummyFrontend) TypeFloat64() Type { return TypeFloat64 }
+func (d DummyFrontend) TypeInt() Type { return TypeInt64 }
+func (d DummyFrontend) TypeUintptr() Type { return TypeUInt64 }
+func (d DummyFrontend) TypeString() Type { panic("unimplemented") }
+func (d DummyFrontend) TypeBytePtr() Type { return TypeBytePtr }
+
+func (d DummyFrontend) CanSSA(t Type) bool {
+ // There are no un-SSAable types in dummy land.
+ return true
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+const flagRegMask = regMask(1) << 33 // TODO: arch-specific
+
+// flagalloc allocates the flag register among all the flag-generating
+// instructions. Flag values are recomputed if they need to be
+// spilled/restored.
+func flagalloc(f *Func) {
+ // Compute the in-register flag value we want at the end of
+ // each block. This is basically a best-effort live variable
+ // analysis, so it can be much simpler than a full analysis.
+ // TODO: do we really need to keep flag values live across blocks?
+ // Could we force the flags register to be unused at basic block
+ // boundaries? Then we wouldn't need this computation.
+ end := make([]*Value, f.NumBlocks())
+ for n := 0; n < 2; n++ {
+ // Walk blocks backwards. Poor-man's postorder traversal.
+ for i := len(f.Blocks) - 1; i >= 0; i-- {
+ b := f.Blocks[i]
+ // Walk values backwards to figure out what flag
+ // value we want in the flag register at the start
+ // of the block.
+ flag := end[b.ID]
+ if b.Control != nil && b.Control.Type.IsFlags() {
+ flag = b.Control
+ }
+ for j := len(b.Values) - 1; j >= 0; j-- {
+ v := b.Values[j]
+ if v == flag {
+ flag = nil
+ }
+ if opcodeTable[v.Op].reg.clobbers&flagRegMask != 0 {
+ flag = nil
+ }
+ for _, a := range v.Args {
+ if a.Type.IsFlags() {
+ flag = a
+ }
+ }
+ }
+ if flag != nil {
+ for _, p := range b.Preds {
+ end[p.ID] = flag
+ }
+ }
+ }
+ }
+
+ // For blocks which have a flags control value, that's the only value
+ // we can leave in the flags register at the end of the block. (There
+ // is no place to put a flag regeneration instruction.)
+ for _, b := range f.Blocks {
+ v := b.Control
+ if v != nil && v.Type.IsFlags() && end[b.ID] != v {
+ end[b.ID] = nil
+ }
+ }
+
+ // Add flag recomputations where they are needed.
+ // TODO: Remove original instructions if they are never used.
+ var oldSched []*Value
+ for _, b := range f.Blocks {
+ oldSched = append(oldSched[:0], b.Values...)
+ b.Values = b.Values[:0]
+ // The current live flag value (the pre-flagalloc copy).
+ var flag *Value
+ if len(b.Preds) > 0 {
+ flag = end[b.Preds[0].ID]
+ // Note: the following condition depends on the lack of critical edges.
+ for _, p := range b.Preds[1:] {
+ if end[p.ID] != flag {
+ f.Fatalf("live flag in %s's predecessors not consistent", b)
+ }
+ }
+ }
+ for _, v := range oldSched {
+ if v.Op == OpPhi && v.Type.IsFlags() {
+ f.Fatalf("phi of flags not supported: %s", v.LongString())
+ }
+ // Make sure any flag arg of v is in the flags register.
+ // If not, recompute it.
+ for i, a := range v.Args {
+ if !a.Type.IsFlags() {
+ continue
+ }
+ if a == flag {
+ continue
+ }
+ // Recalculate a
+ c := a.copyInto(b)
+ // Update v.
+ v.SetArg(i, c)
+ // Remember the most-recently computed flag value.
+ flag = a
+ }
+ // Issue v.
+ b.Values = append(b.Values, v)
+ if opcodeTable[v.Op].reg.clobbers&flagRegMask != 0 {
+ flag = nil
+ }
+ if v.Type.IsFlags() {
+ flag = v
+ }
+ }
+ if v := b.Control; v != nil && v != flag && v.Type.IsFlags() {
+ // Recalculate control value.
+ c := v.copyInto(b)
+ b.Control = c
+ flag = v
+ }
+ if v := end[b.ID]; v != nil && v != flag {
+ // Need to reissue flag generator for use by
+ // subsequent blocks.
+ _ = v.copyInto(b)
+ // Note: this flag generator is not properly linked up
+ // with the flag users. This breaks the SSA representation.
+ // We could fix up the users with another pass, but for now
+ // we'll just leave it. (Regalloc has the same issue for
+ // standard regs, and it runs next.)
+ }
+ }
+
+ // Save live flag state for later.
+ for _, b := range f.Blocks {
+ b.FlagsLiveAtEnd = end[b.ID] != nil
+ }
+}
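The pass above prefers recomputing flags to spilling them, since saving and restoring the flags register is expensive on x86. A toy straight-line version of that rematerialize-on-demand policy, with invented instruction types (not the compiler's, and emitting a placeholder compare rather than copying the original producer as `copyInto` does):

```go
package main

import "fmt"

// instr is a toy instruction: an opcode plus how it interacts with
// the single flags register.
type instr struct {
	op       string
	setsFlag bool // produces a flag value (e.g. a compare)
	clobbers bool // destroys the flags (e.g. most arithmetic)
	usesFlag bool // consumes the current flag value (e.g. a SETcc)
}

// insertRecomputes walks a straight-line schedule and, before any
// instruction that consumes flags when none are live, re-issues a
// compare — the rematerialize-instead-of-spill policy of flagalloc.
func insertRecomputes(sched []instr) []instr {
	var out []instr
	flagLive := false
	for _, v := range sched {
		if v.usesFlag && !flagLive {
			// Flags were clobbered since they were produced:
			// recompute rather than spill/restore.
			out = append(out, instr{op: "CMPQ(recomputed)", setsFlag: true})
			flagLive = true
		}
		out = append(out, v)
		if v.clobbers {
			flagLive = false
		}
		if v.setsFlag {
			flagLive = true
		}
	}
	return out
}

func main() {
	sched := []instr{
		{op: "CMPQ", setsFlag: true},
		{op: "ADDQ", clobbers: true}, // clobbers the flags
		{op: "SETEQ", usesFlag: true},
	}
	for _, v := range insertRecomputes(sched) {
		fmt.Println(v.op)
	}
}
```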
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "fmt"
+ "math"
+)
+
+// A Func represents a Go func declaration (or function literal) and
+// its body. This package compiles each Func independently.
+type Func struct {
+ Config *Config // architecture information
+ pass *pass // current pass information (name, options, etc.)
+ Name string // e.g. bytes·Compare
+ Type Type // type signature of the function.
+ StaticData interface{} // associated static data, untouched by the ssa package
+ Blocks []*Block // unordered set of all basic blocks (note: not indexable by ID)
+ Entry *Block // the entry basic block
+ bid idAlloc // block ID allocator
+ vid idAlloc // value ID allocator
+
+ scheduled bool // Values in Blocks are in final order
+
+ // when register allocation is done, maps value ids to locations
+ RegAlloc []Location
+
+ // map from LocalSlot to set of Values that we want to store in that slot.
+ NamedValues map[LocalSlot][]*Value
+ // Names is a copy of NamedValues.Keys. We keep a separate list
+ // of keys to make iteration order deterministic.
+ Names []LocalSlot
+
+ freeValues *Value // free Values linked by argstorage[0]. All other fields except ID are 0/nil.
+ freeBlocks *Block // free Blocks linked by succstorage[0]. All other fields except ID are 0/nil.
+
+ constants map[int64][]*Value // constants cache, keyed by constant value; users must check value's Op and Type
+}
+
+// NumBlocks returns an integer larger than the id of any Block in the Func.
+func (f *Func) NumBlocks() int {
+ return f.bid.num()
+}
+
+// NumValues returns an integer larger than the id of any Value in the Func.
+func (f *Func) NumValues() int {
+ return f.vid.num()
+}
+
+// newSparseSet returns a sparse set that can store integers in the range [0, n).
+func (f *Func) newSparseSet(n int) *sparseSet {
+ for i, scr := range f.Config.scrSparse {
+ if scr != nil && scr.cap() >= n {
+ f.Config.scrSparse[i] = nil
+ scr.clear()
+ return scr
+ }
+ }
+ return newSparseSet(n)
+}
+
+// retSparseSet returns a sparse set to the config's cache of sparse sets to be reused by f.newSparseSet.
+func (f *Func) retSparseSet(ss *sparseSet) {
+ for i, scr := range f.Config.scrSparse {
+ if scr == nil {
+ f.Config.scrSparse[i] = ss
+ return
+ }
+ }
+ f.Config.scrSparse = append(f.Config.scrSparse, ss)
+}
+
+// newValue allocates a new Value with the given fields and places it at the end of b.Values.
+func (f *Func) newValue(op Op, t Type, b *Block, line int32) *Value {
+ var v *Value
+ if f.freeValues != nil {
+ v = f.freeValues
+ f.freeValues = v.argstorage[0]
+ v.argstorage[0] = nil
+ } else {
+ ID := f.vid.get()
+ if int(ID) < len(f.Config.values) {
+ v = &f.Config.values[ID]
+ } else {
+ v = &Value{ID: ID}
+ }
+ }
+ v.Op = op
+ v.Type = t
+ v.Block = b
+ v.Line = line
+ b.Values = append(b.Values, v)
+ return v
+}
+
+// logStat writes a string key and int value as a warning in a
+// tab-separated format easily handled by spreadsheets or awk.
+// File names, lines, and function names are included to provide enough (?)
+// context to allow item-by-item comparisons across runs.
+// For example:
+// awk 'BEGIN {FS="\t"} $3~/TIME/{sum+=$4} END{print "t(ns)=",sum}' t.log
+func (f *Func) logStat(key string, args ...interface{}) {
+ value := ""
+ for _, a := range args {
+ value += fmt.Sprintf("\t%v", a)
+ }
+ f.Config.Warnl(int(f.Entry.Line), "\t%s\t%s%s\t%s", f.pass.name, key, value, f.Name)
+}
+
+// freeValue frees a value. It must no longer be referenced.
+func (f *Func) freeValue(v *Value) {
+ if v.Block == nil {
+ f.Fatalf("trying to free an already freed value")
+ }
+ // Clear everything but ID (which we reuse).
+ id := v.ID
+ *v = Value{}
+ v.ID = id
+ v.argstorage[0] = f.freeValues
+ f.freeValues = v
+}
+
+// NewBlock allocates a new Block of the given kind and places it at the end of f.Blocks.
+func (f *Func) NewBlock(kind BlockKind) *Block {
+ var b *Block
+ if f.freeBlocks != nil {
+ b = f.freeBlocks
+ f.freeBlocks = b.succstorage[0]
+ b.succstorage[0] = nil
+ } else {
+ ID := f.bid.get()
+ if int(ID) < len(f.Config.blocks) {
+ b = &f.Config.blocks[ID]
+ } else {
+ b = &Block{ID: ID}
+ }
+ }
+ b.Kind = kind
+ b.Func = f
+ b.Preds = b.predstorage[:0]
+ b.Succs = b.succstorage[:0]
+ b.Values = b.valstorage[:0]
+ f.Blocks = append(f.Blocks, b)
+ return b
+}
+
+func (f *Func) freeBlock(b *Block) {
+ if b.Func == nil {
+ f.Fatalf("trying to free an already freed block")
+ }
+ // Clear everything but ID (which we reuse).
+ id := b.ID
+ *b = Block{}
+ b.ID = id
+ b.succstorage[0] = f.freeBlocks
+ f.freeBlocks = b
+}
+
+// NewValue0 returns a new value in the block with no arguments and zero aux values.
+func (b *Block) NewValue0(line int32, op Op, t Type) *Value {
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = 0
+ v.Args = v.argstorage[:0]
+ return v
+}
+
+// NewValue0I returns a new value in the block with no arguments and an auxint value.
+func (b *Block) NewValue0I(line int32, op Op, t Type, auxint int64) *Value {
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = auxint
+ v.Args = v.argstorage[:0]
+ return v
+}
+
+// NewValue0A returns a new value in the block with no arguments and an aux value.
+func (b *Block) NewValue0A(line int32, op Op, t Type, aux interface{}) *Value {
+ if _, ok := aux.(int64); ok {
+ // Disallow int64 aux values. They should be in the auxint field instead.
+ // Maybe we want to allow this at some point, but for now we disallow it
+ // to prevent errors like using NewValue1A instead of NewValue1I.
+ b.Fatalf("aux field has int64 type op=%s type=%s aux=%v", op, t, aux)
+ }
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = 0
+ v.Aux = aux
+ v.Args = v.argstorage[:0]
+ return v
+}
+
+// NewValue0IA returns a new value in the block with no arguments and both auxint and aux values.
+func (b *Block) NewValue0IA(line int32, op Op, t Type, auxint int64, aux interface{}) *Value {
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = auxint
+ v.Aux = aux
+ v.Args = v.argstorage[:0]
+ return v
+}
+
+// NewValue1 returns a new value in the block with one argument and zero aux values.
+func (b *Block) NewValue1(line int32, op Op, t Type, arg *Value) *Value {
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = 0
+ v.Args = v.argstorage[:1]
+ v.argstorage[0] = arg
+ return v
+}
+
+// NewValue1I returns a new value in the block with one argument and an auxint value.
+func (b *Block) NewValue1I(line int32, op Op, t Type, auxint int64, arg *Value) *Value {
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = auxint
+ v.Args = v.argstorage[:1]
+ v.argstorage[0] = arg
+ return v
+}
+
+// NewValue1A returns a new value in the block with one argument and an aux value.
+func (b *Block) NewValue1A(line int32, op Op, t Type, aux interface{}, arg *Value) *Value {
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = 0
+ v.Aux = aux
+ v.Args = v.argstorage[:1]
+ v.argstorage[0] = arg
+ return v
+}
+
+// NewValue1IA returns a new value in the block with one argument and both an auxint and aux values.
+func (b *Block) NewValue1IA(line int32, op Op, t Type, auxint int64, aux interface{}, arg *Value) *Value {
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = auxint
+ v.Aux = aux
+ v.Args = v.argstorage[:1]
+ v.argstorage[0] = arg
+ return v
+}
+
+// NewValue2 returns a new value in the block with two arguments and zero aux values.
+func (b *Block) NewValue2(line int32, op Op, t Type, arg0, arg1 *Value) *Value {
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = 0
+ v.Args = v.argstorage[:2]
+ v.argstorage[0] = arg0
+ v.argstorage[1] = arg1
+ return v
+}
+
+// NewValue2I returns a new value in the block with two arguments and an auxint value.
+func (b *Block) NewValue2I(line int32, op Op, t Type, auxint int64, arg0, arg1 *Value) *Value {
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = auxint
+ v.Args = v.argstorage[:2]
+ v.argstorage[0] = arg0
+ v.argstorage[1] = arg1
+ return v
+}
+
+// NewValue3 returns a new value in the block with three arguments and zero aux values.
+func (b *Block) NewValue3(line int32, op Op, t Type, arg0, arg1, arg2 *Value) *Value {
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = 0
+ v.Args = []*Value{arg0, arg1, arg2}
+ return v
+}
+
+// NewValue3I returns a new value in the block with three arguments and an auxint value.
+func (b *Block) NewValue3I(line int32, op Op, t Type, auxint int64, arg0, arg1, arg2 *Value) *Value {
+ v := b.Func.newValue(op, t, b, line)
+ v.AuxInt = auxint
+ v.Args = []*Value{arg0, arg1, arg2}
+ return v
+}
+
+// constVal returns a constant value for c.
+func (f *Func) constVal(line int32, op Op, t Type, c int64) *Value {
+ if f.constants == nil {
+ f.constants = make(map[int64][]*Value)
+ }
+ vv := f.constants[c]
+ for _, v := range vv {
+ if v.Op == op && v.Type.Equal(t) {
+ return v
+ }
+ }
+ v := f.Entry.NewValue0I(line, op, t, c)
+ f.constants[c] = append(vv, v)
+ return v
+}
+
+// ConstBool returns a bool constant representing its argument.
+func (f *Func) ConstBool(line int32, t Type, c bool) *Value {
+ i := int64(0)
+ if c {
+ i = 1
+ }
+ return f.constVal(line, OpConstBool, t, i)
+}
+func (f *Func) ConstInt8(line int32, t Type, c int8) *Value {
+ return f.constVal(line, OpConst8, t, int64(c))
+}
+func (f *Func) ConstInt16(line int32, t Type, c int16) *Value {
+ return f.constVal(line, OpConst16, t, int64(c))
+}
+func (f *Func) ConstInt32(line int32, t Type, c int32) *Value {
+ return f.constVal(line, OpConst32, t, int64(c))
+}
+func (f *Func) ConstInt64(line int32, t Type, c int64) *Value {
+ return f.constVal(line, OpConst64, t, c)
+}
+func (f *Func) ConstFloat32(line int32, t Type, c float64) *Value {
+ return f.constVal(line, OpConst32F, t, int64(math.Float64bits(c)))
+}
+func (f *Func) ConstFloat64(line int32, t Type, c float64) *Value {
+ return f.constVal(line, OpConst64F, t, int64(math.Float64bits(c)))
+}
+
+func (f *Func) Logf(msg string, args ...interface{}) { f.Config.Logf(msg, args...) }
+func (f *Func) Log() bool { return f.Config.Log() }
+func (f *Func) Fatalf(msg string, args ...interface{}) { f.Config.Fatalf(f.Entry.Line, msg, args...) }
+func (f *Func) Unimplementedf(msg string, args ...interface{}) {
+ f.Config.Unimplementedf(f.Entry.Line, msg, args...)
+}
+
+func (f *Func) Free() {
+ // Clear values.
+ n := f.vid.num()
+ if n > len(f.Config.values) {
+ n = len(f.Config.values)
+ }
+ for i := 1; i < n; i++ {
+ f.Config.values[i] = Value{}
+ f.Config.values[i].ID = ID(i)
+ }
+
+ // Clear blocks.
+ n = f.bid.num()
+ if n > len(f.Config.blocks) {
+ n = len(f.Config.blocks)
+ }
+ for i := 1; i < n; i++ {
+ f.Config.blocks[i] = Block{}
+ f.Config.blocks[i].ID = ID(i)
+ }
+
+ // Unregister from config.
+ if f.Config.curFunc != f {
+ f.Fatalf("free of function which isn't the last one allocated")
+ }
+ f.Config.curFunc = nil
+ *f = Func{} // just in case
+}
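newValue and freeValue above recycle Values through an intrusive free list threaded via argstorage[0], keeping IDs stable across reuse. The same pattern can be sketched with invented types:

```go
package main

import "fmt"

// node plays the role of Value: it carries an ID that survives reuse,
// and its next pointer doubles as the free-list link (like argstorage[0]).
type node struct {
	id   int
	data string
	next *node // free-list link while the node sits on the free list
}

type allocator struct {
	free   *node
	nextID int
}

// get pops a recycled node if one is available, else allocates a new one.
func (a *allocator) get() *node {
	if a.free != nil {
		n := a.free
		a.free = n.next
		n.next = nil
		return n
	}
	a.nextID++
	return &node{id: a.nextID}
}

// put clears everything but the ID (which is reused) and pushes the
// node onto the free list, mirroring Func.freeValue.
func (a *allocator) put(n *node) {
	id := n.id
	*n = node{}
	n.id = id
	n.next = a.free
	a.free = n
}

func main() {
	var a allocator
	n1 := a.get()
	n1.data = "x"
	a.put(n1)
	n2 := a.get() // reuses n1's storage; data was cleared, ID kept
	fmt.Println(n2.id, n2.data == "")
}
```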
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file contains some utility functions to help define Funcs for testing.
+// As an example, the following func
+//
+// b1:
+// v1 = InitMem <mem>
+// Plain -> b2
+// b2:
+// Exit v1
+// b3:
+// v2 = Const <bool> [true]
+// If v2 -> b3 b2
+//
+// can be defined as
+//
+// fun := Fun("entry",
+// Bloc("entry",
+// Valu("mem", OpInitMem, TypeMem, 0, nil),
+// Goto("exit")),
+// Bloc("exit",
+// Exit("mem")),
+// Bloc("deadblock",
+// Valu("deadval", OpConstBool, TypeBool, 0, true),
+// If("deadval", "deadblock", "exit")))
+//
+// and the Blocks or Values used in the Func can be accessed
+// like this:
+// fun.blocks["entry"] or fun.values["deadval"]
+
+package ssa
+
+// TODO(matloob): Choose better names for Fun, Bloc, Goto, etc.
+// TODO(matloob): Write a parser for the Func disassembly. Maybe
+// the parser can be used instead of Fun.
+
+import (
+ "fmt"
+ "reflect"
+ "testing"
+)
+
+// Equiv reports whether two Funcs are equivalent. Their CFGs must be
+// isomorphic, and their values must correspond.
+// Requires that values and predecessors are in the same order, even
+// though Funcs could be equivalent when they are not.
+// TODO(matloob): Allow values and predecessors to be in different
+// orders if the CFGs are otherwise equivalent.
+func Equiv(f, g *Func) bool {
+ valcor := make(map[*Value]*Value)
+ var checkVal func(fv, gv *Value) bool
+ checkVal = func(fv, gv *Value) bool {
+ if fv == nil && gv == nil {
+ return true
+ }
+ if valcor[fv] == nil && valcor[gv] == nil {
+ valcor[fv] = gv
+ valcor[gv] = fv
+ // Ignore ids. Ops and Types are compared for equality.
+ // TODO(matloob): Make sure types are canonical and can
+ // be compared for equality.
+ if fv.Op != gv.Op || fv.Type != gv.Type || fv.AuxInt != gv.AuxInt {
+ return false
+ }
+ if !reflect.DeepEqual(fv.Aux, gv.Aux) {
+ // This makes the assumption that aux values can be compared
+ // using DeepEqual.
+ // TODO(matloob): Aux values may be *gc.Sym pointers in the near
+ // future. Make sure they are canonical.
+ return false
+ }
+ if len(fv.Args) != len(gv.Args) {
+ return false
+ }
+ for i := range fv.Args {
+ if !checkVal(fv.Args[i], gv.Args[i]) {
+ return false
+ }
+ }
+ }
+ return valcor[fv] == gv && valcor[gv] == fv
+ }
+ blkcor := make(map[*Block]*Block)
+ var checkBlk func(fb, gb *Block) bool
+ checkBlk = func(fb, gb *Block) bool {
+ if blkcor[fb] == nil && blkcor[gb] == nil {
+ blkcor[fb] = gb
+ blkcor[gb] = fb
+ // ignore ids
+ if fb.Kind != gb.Kind {
+ return false
+ }
+ if len(fb.Values) != len(gb.Values) {
+ return false
+ }
+ for i := range fb.Values {
+ if !checkVal(fb.Values[i], gb.Values[i]) {
+ return false
+ }
+ }
+ if len(fb.Succs) != len(gb.Succs) {
+ return false
+ }
+ for i := range fb.Succs {
+ if !checkBlk(fb.Succs[i], gb.Succs[i]) {
+ return false
+ }
+ }
+ if len(fb.Preds) != len(gb.Preds) {
+ return false
+ }
+ for i := range fb.Preds {
+ if !checkBlk(fb.Preds[i], gb.Preds[i]) {
+ return false
+ }
+ }
+ return true
+ }
+ return blkcor[fb] == gb && blkcor[gb] == fb
+ }
+
+ return checkBlk(f.Entry, g.Entry)
+}
+
+// fun is the return type of Fun. It contains the created func
+// itself as well as indexes from block and value names into the
+// corresponding Blocks and Values.
+type fun struct {
+ f *Func
+ blocks map[string]*Block
+ values map[string]*Value
+}
+
+var emptyPass pass = pass{
+ name: "empty pass",
+}
+
+// Fun takes the name of an entry bloc and a series of Bloc calls, and
+// returns a fun containing the composed Func. entry must be a name
+// supplied to one of the Bloc functions. Each of the bloc names and
+// valu names should be unique across the Fun.
+func Fun(c *Config, entry string, blocs ...bloc) fun {
+ f := c.NewFunc()
+ f.pass = &emptyPass
+
+ blocks := make(map[string]*Block)
+ values := make(map[string]*Value)
+ // Create all the blocks and values.
+ for _, bloc := range blocs {
+ b := f.NewBlock(bloc.control.kind)
+ blocks[bloc.name] = b
+ for _, valu := range bloc.valus {
+ // args are filled in the second pass.
+ values[valu.name] = b.NewValue0IA(0, valu.op, valu.t, valu.auxint, valu.aux)
+ }
+ }
+ // Connect the blocks together and specify control values.
+ f.Entry = blocks[entry]
+ for _, bloc := range blocs {
+ b := blocks[bloc.name]
+ c := bloc.control
+ // Specify control values.
+ if c.control != "" {
+ cval, ok := values[c.control]
+ if !ok {
+ f.Fatalf("control value for block %s missing", bloc.name)
+ }
+ b.Control = cval
+ }
+ // Fill in args.
+ for _, valu := range bloc.valus {
+ v := values[valu.name]
+ for _, arg := range valu.args {
+ a, ok := values[arg]
+ if !ok {
+ b.Fatalf("arg %s missing for value %s in block %s",
+ arg, valu.name, bloc.name)
+ }
+ v.AddArg(a)
+ }
+ }
+ // Connect to successors.
+ for _, succ := range c.succs {
+ b.AddEdgeTo(blocks[succ])
+ }
+ }
+ return fun{f, blocks, values}
+}
+
+// Bloc defines a block for Fun. The bloc name should be unique
+// across the containing Fun. entries should consist of calls to Valu,
+// as well as one call to Goto, If, or Exit to specify the block kind.
+func Bloc(name string, entries ...interface{}) bloc {
+ b := bloc{}
+ b.name = name
+ seenCtrl := false
+ for _, e := range entries {
+ switch v := e.(type) {
+ case ctrl:
+ // there should be exactly one Ctrl entry.
+ if seenCtrl {
+ panic(fmt.Sprintf("already seen control for block %s", name))
+ }
+ b.control = v
+ seenCtrl = true
+ case valu:
+ b.valus = append(b.valus, v)
+ }
+ }
+ if !seenCtrl {
+ panic(fmt.Sprintf("block %s doesn't have control", b.name))
+ }
+ return b
+}
+
+// Valu defines a value in a block.
+func Valu(name string, op Op, t Type, auxint int64, aux interface{}, args ...string) valu {
+ return valu{name, op, t, auxint, aux, args}
+}
+
+// Goto specifies that this is a BlockPlain and names the single successor.
+// TODO(matloob): choose a better name.
+func Goto(succ string) ctrl {
+ return ctrl{BlockPlain, "", []string{succ}}
+}
+
+// If specifies a BlockIf.
+func If(cond, sub, alt string) ctrl {
+ return ctrl{BlockIf, cond, []string{sub, alt}}
+}
+
+// Exit specifies a BlockExit.
+func Exit(arg string) ctrl {
+ return ctrl{BlockExit, arg, []string{}}
+}
+
+// Eq specifies a BlockAMD64EQ.
+func Eq(cond, sub, alt string) ctrl {
+ return ctrl{BlockAMD64EQ, cond, []string{sub, alt}}
+}
+
+// bloc, ctrl, and valu are internal structures used by Bloc, Valu, Goto,
+// If, and Exit to help define blocks.
+
+type bloc struct {
+ name string
+ control ctrl
+ valus []valu
+}
+
+type ctrl struct {
+ kind BlockKind
+ control string
+ succs []string
+}
+
+type valu struct {
+ name string
+ op Op
+ t Type
+ auxint int64
+ aux interface{}
+ args []string
+}
+
+func TestArgs(t *testing.T) {
+ c := testConfig(t)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("a", OpConst64, TypeInt64, 14, nil),
+ Valu("b", OpConst64, TypeInt64, 26, nil),
+ Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+ sum := fun.values["sum"]
+ for i, name := range []string{"a", "b"} {
+ if sum.Args[i] != fun.values[name] {
+ t.Errorf("arg %d for sum is incorrect: want %s, got %s",
+ i, sum.Args[i], fun.values[name])
+ }
+ }
+}
+
+func TestEquiv(t *testing.T) {
+ equivalentCases := []struct{ f, g fun }{
+ // simple case
+ {
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("a", OpConst64, TypeInt64, 14, nil),
+ Valu("b", OpConst64, TypeInt64, 26, nil),
+ Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem"))),
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("a", OpConst64, TypeInt64, 14, nil),
+ Valu("b", OpConst64, TypeInt64, 26, nil),
+ Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem"))),
+ },
+ // block order changed
+ {
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("a", OpConst64, TypeInt64, 14, nil),
+ Valu("b", OpConst64, TypeInt64, 26, nil),
+ Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem"))),
+ Fun(testConfig(t), "entry",
+ Bloc("exit",
+ Exit("mem")),
+ Bloc("entry",
+ Valu("a", OpConst64, TypeInt64, 14, nil),
+ Valu("b", OpConst64, TypeInt64, 26, nil),
+ Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Goto("exit"))),
+ },
+ }
+ for _, c := range equivalentCases {
+ if !Equiv(c.f.f, c.g.f) {
+ t.Error("expected equivalence. Func definitions:")
+ t.Error(c.f.f)
+ t.Error(c.g.f)
+ }
+ }
+
+ differentCases := []struct{ f, g fun }{
+ // different shape
+ {
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem"))),
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Exit("mem"))),
+ },
+ // value order changed
+ {
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("b", OpConst64, TypeInt64, 26, nil),
+ Valu("a", OpConst64, TypeInt64, 14, nil),
+ Exit("mem"))),
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, TypeInt64, 14, nil),
+ Valu("b", OpConst64, TypeInt64, 26, nil),
+ Exit("mem"))),
+ },
+ // value auxint different
+ {
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, TypeInt64, 14, nil),
+ Exit("mem"))),
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, TypeInt64, 26, nil),
+ Exit("mem"))),
+ },
+ // value aux different
+ {
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, TypeInt64, 0, 14),
+ Exit("mem"))),
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, TypeInt64, 0, 26),
+ Exit("mem"))),
+ },
+ // value args different
+ {
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, TypeInt64, 14, nil),
+ Valu("b", OpConst64, TypeInt64, 26, nil),
+ Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
+ Exit("mem"))),
+ Fun(testConfig(t), "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, TypeInt64, 0, nil),
+ Valu("b", OpConst64, TypeInt64, 14, nil),
+ Valu("sum", OpAdd64, TypeInt64, 0, nil, "b", "a"),
+ Exit("mem"))),
+ },
+ }
+ for _, c := range differentCases {
+ if Equiv(c.f.f, c.g.f) {
+ t.Error("expected difference. Func definitions:")
+ t.Error(c.f.f)
+ t.Error(c.g.f)
+ }
+ }
+}
+
+// opcodeMap returns a map from opcode to the number of times that opcode
+// appears in the function.
+func opcodeMap(f *Func) map[Op]int {
+ m := map[Op]int{}
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ m[v.Op]++
+ }
+ }
+ return m
+}
+
+// checkOpcodeCounts checks that the number of opcodes listed in m agrees with
+// the number of opcodes that appear in the function.
+func checkOpcodeCounts(t *testing.T, f *Func, m map[Op]int) {
+ n := opcodeMap(f)
+ for op, cnt := range m {
+ if n[op] != cnt {
+ t.Errorf("%s appears %d times, want %d times", op, n[op], cnt)
+ }
+ }
+}
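Equiv's valcor and blkcor maps enforce a bidirectional correspondence: once two nodes are paired, neither may later pair with anything else. That invariant in isolation, with toy string keys:

```go
package main

import "fmt"

// pairUp records a proposed correspondence a<->b and reports whether it
// is consistent with what was recorded before — the invariant Equiv's
// valcor/blkcor maps enforce while walking two Funcs in lockstep.
func pairUp(cor map[string]string, a, b string) bool {
	ca, oka := cor[a]
	cb, okb := cor[b]
	if !oka && !okb {
		// Neither side is paired yet: record the correspondence both ways.
		cor[a], cor[b] = b, a
		return true
	}
	// Otherwise both sides must already point at each other.
	return ca == b && cb == a
}

func main() {
	cor := map[string]string{}
	fmt.Println(pairUp(cor, "v1", "w1")) // new pair: consistent
	fmt.Println(pairUp(cor, "v1", "w1")) // same pair again: consistent
	fmt.Println(pairUp(cor, "v1", "w2")) // v1 already paired: inconsistent
}
```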
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// fuse simplifies control flow by joining basic blocks.
+func fuse(f *Func) {
+ for changed := true; changed; {
+ changed = false
+ for _, b := range f.Blocks {
+ changed = fuseBlockIf(b) || changed
+ changed = fuseBlockPlain(b) || changed
+ }
+ }
+}
+
+// fuseBlockIf handles the following cases where s0 and s1 are empty blocks.
+//
+// b b b b
+// / \ | \ / | | |
+// s0 s1 | s1 s0 | | |
+// \ / | / \ | | |
+// ss ss ss ss
+//
+// If all Phi ops in ss have identical variables for slots corresponding to
+// s0, s1 and b then the branch can be dropped.
+// TODO: If ss doesn't contain any OpPhis, are s0 and s1 dead code anyway?
+func fuseBlockIf(b *Block) bool {
+ if b.Kind != BlockIf {
+ return false
+ }
+
+ var ss0, ss1 *Block
+ s0 := b.Succs[0]
+ if s0.Kind != BlockPlain || len(s0.Preds) != 1 || len(s0.Values) != 0 {
+ s0, ss0 = b, s0
+ } else {
+ ss0 = s0.Succs[0]
+ }
+ s1 := b.Succs[1]
+ if s1.Kind != BlockPlain || len(s1.Preds) != 1 || len(s1.Values) != 0 {
+ s1, ss1 = b, s1
+ } else {
+ ss1 = s1.Succs[0]
+ }
+
+ if ss0 != ss1 {
+ return false
+ }
+ ss := ss0
+
+ // s0 and s1 are set to b itself when the corresponding empty
+ // successor block is missing (2nd, 3rd and 4th case in the figure).
+ i0, i1 := -1, -1
+ for i, p := range ss.Preds {
+ if p == s0 {
+ i0 = i
+ }
+ if p == s1 {
+ i1 = i
+ }
+ }
+ if i0 == -1 || i1 == -1 {
+ b.Fatalf("invalid predecessors")
+ }
+ for _, v := range ss.Values {
+ if v.Op == OpPhi && v.Args[i0] != v.Args[i1] {
+ return false
+ }
+ }
+
+ // Now we have two of the following: b->ss, b->s0->ss, and b->s1->ss,
+ // with s0 and s1 empty if they exist.
+ // We can replace the If with b->ss because all OpPhis in ss
+ // have identical arguments for the slots involved (verified above).
+ // No critical edge is introduced because b will have one successor.
+ if s0 != b && s1 != b {
+ ss.removePred(s0)
+
+ // Replace edge b->s1->ss with b->ss.
+ // We need to keep a slot for Phis corresponding to b.
+ for i := range b.Succs {
+ if b.Succs[i] == s1 {
+ b.Succs[i] = ss
+ }
+ }
+ for i := range ss.Preds {
+ if ss.Preds[i] == s1 {
+ ss.Preds[i] = b
+ }
+ }
+ } else if s0 != b {
+ ss.removePred(s0)
+ } else if s1 != b {
+ ss.removePred(s1)
+ }
+ b.Kind = BlockPlain
+ b.Control = nil
+ b.Succs = append(b.Succs[:0], ss)
+
+ // Trash the empty blocks s0 & s1.
+ if s0 != b {
+ s0.Kind = BlockInvalid
+ s0.Values = nil
+ s0.Succs = nil
+ s0.Preds = nil
+ }
+ if s1 != b {
+ s1.Kind = BlockInvalid
+ s1.Values = nil
+ s1.Succs = nil
+ s1.Preds = nil
+ }
+ return true
+}
+
+func fuseBlockPlain(b *Block) bool {
+ if b.Kind != BlockPlain {
+ return false
+ }
+
+ c := b.Succs[0]
+ if len(c.Preds) != 1 {
+ return false
+ }
+
+ // Move all of b's values to c.
+ for _, v := range b.Values {
+ v.Block = c
+ c.Values = append(c.Values, v)
+ }
+
+ // replace b->c edge with preds(b) -> c
+ c.predstorage[0] = nil
+ if len(b.Preds) > len(b.predstorage) {
+ c.Preds = b.Preds
+ } else {
+ c.Preds = append(c.predstorage[:0], b.Preds...)
+ }
+ for _, p := range c.Preds {
+ for i, q := range p.Succs {
+ if q == b {
+ p.Succs[i] = c
+ }
+ }
+ }
+ if f := b.Func; f.Entry == b {
+ f.Entry = c
+ }
+
+ // trash b, just in case
+ b.Kind = BlockInvalid
+ b.Values = nil
+ b.Preds = nil
+ b.Succs = nil
+ return true
+}
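fuseBlockPlain's edge surgery — redirect every predecessor of b to c, then discard b — can be illustrated on a toy adjacency-map CFG (invented types, not the compiler's):

```go
package main

import "fmt"

// cfg maps block name -> successor names.
type cfg map[string][]string

// preds scans the graph for blocks whose successor list contains x.
func preds(g cfg, x string) []string {
	var ps []string
	for n, ss := range g {
		for _, s := range ss {
			if s == x {
				ps = append(ps, n)
			}
		}
	}
	return ps
}

// fusePlain merges b into its sole successor c when c has no other
// predecessors: every edge into b is redirected to c and b is deleted,
// the same shape as fuseBlockPlain's pointer surgery.
func fusePlain(g cfg, b string) bool {
	ss := g[b]
	if len(ss) != 1 {
		return false
	}
	c := ss[0]
	if len(preds(g, c)) != 1 {
		return false // c has other predecessors; keep b
	}
	for _, p := range preds(g, b) {
		for i, s := range g[p] {
			if s == b {
				g[p][i] = c
			}
		}
	}
	delete(g, b)
	return true
}

func main() {
	g := cfg{"entry": {"z"}, "z": {"exit"}, "exit": nil}
	fusePlain(g, "z")
	fmt.Println(g["entry"]) // entry now jumps straight to exit
}
```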
--- /dev/null
+package ssa
+
+import (
+ "testing"
+)
+
+func TestFuseEliminatesOneBranch(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("checkPtr")),
+ Bloc("checkPtr",
+ Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
+ Valu("nilptr", OpConstNil, ptrType, 0, nil),
+ Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
+ If("bool1", "then", "exit")),
+ Bloc("then",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ fuse(fun.f)
+
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["then"] && b.Kind != BlockInvalid {
+ t.Errorf("then was not eliminated, but should have")
+ }
+ }
+}
+
+func TestFuseEliminatesBothBranches(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("checkPtr")),
+ Bloc("checkPtr",
+ Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
+ Valu("nilptr", OpConstNil, ptrType, 0, nil),
+ Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
+ If("bool1", "then", "else")),
+ Bloc("then",
+ Goto("exit")),
+ Bloc("else",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ fuse(fun.f)
+
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["then"] && b.Kind != BlockInvalid {
+ t.Errorf("then was not eliminated, but should have")
+ }
+ if b == fun.blocks["else"] && b.Kind != BlockInvalid {
+ t.Errorf("then was not eliminated, but should have")
+ }
+ }
+}
+
+func TestFuseHandlesPhis(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("checkPtr")),
+ Bloc("checkPtr",
+ Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
+ Valu("nilptr", OpConstNil, ptrType, 0, nil),
+ Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
+ If("bool1", "then", "else")),
+ Bloc("then",
+ Goto("exit")),
+ Bloc("else",
+ Goto("exit")),
+ Bloc("exit",
+ Valu("phi", OpPhi, ptrType, 0, nil, "ptr1", "ptr1"),
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ fuse(fun.f)
+
+ for _, b := range fun.f.Blocks {
+		if b == fun.blocks["then"] && b.Kind != BlockInvalid {
+			t.Errorf("then was not eliminated, but should have been")
+		}
+		if b == fun.blocks["else"] && b.Kind != BlockInvalid {
+			t.Errorf("else was not eliminated, but should have been")
+		}
+ }
+}
+
+func TestFuseEliminatesEmptyBlocks(t *testing.T) {
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("z0")),
+ Bloc("z1",
+ Goto("z2")),
+ Bloc("z3",
+ Goto("exit")),
+ Bloc("z2",
+ Goto("z3")),
+ Bloc("z0",
+ Goto("z1")),
+ Bloc("exit",
+ Exit("mem"),
+ ))
+
+ CheckFunc(fun.f)
+ fuse(fun.f)
+
+ for k, b := range fun.blocks {
+ if k[:1] == "z" && b.Kind != BlockInvalid {
+			t.Errorf("%s was not eliminated, but should have been", k)
+ }
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// x86 register conventions:
+// - Integer types live in the low portion of registers. Upper portions are junk.
+// - Boolean types use the low-order byte of a register. Upper bytes are junk.
+// - We do not use the AH, BH, CH, or DH registers.
+// - Floating-point types will live in the low natural slot of an sse2 register.
+// Unused portions are junk.
+
+// Lowering arithmetic
+(Add64 x y) -> (ADDQ x y)
+(AddPtr x y) -> (ADDQ x y)
+(Add32 x y) -> (ADDL x y)
+(Add16 x y) -> (ADDW x y)
+(Add8 x y) -> (ADDB x y)
+(Add32F x y) -> (ADDSS x y)
+(Add64F x y) -> (ADDSD x y)
+
+(Sub64 x y) -> (SUBQ x y)
+(SubPtr x y) -> (SUBQ x y)
+(Sub32 x y) -> (SUBL x y)
+(Sub16 x y) -> (SUBW x y)
+(Sub8 x y) -> (SUBB x y)
+(Sub32F x y) -> (SUBSS x y)
+(Sub64F x y) -> (SUBSD x y)
+
+(Mul64 x y) -> (MULQ x y)
+(Mul32 x y) -> (MULL x y)
+(Mul16 x y) -> (MULW x y)
+(Mul8 x y) -> (MULB x y)
+(Mul32F x y) -> (MULSS x y)
+(Mul64F x y) -> (MULSD x y)
+
+(Div32F x y) -> (DIVSS x y)
+(Div64F x y) -> (DIVSD x y)
+
+(Div64 x y) -> (DIVQ x y)
+(Div64u x y) -> (DIVQU x y)
+(Div32 x y) -> (DIVL x y)
+(Div32u x y) -> (DIVLU x y)
+(Div16 x y) -> (DIVW x y)
+(Div16u x y) -> (DIVWU x y)
+(Div8 x y) -> (DIVW (SignExt8to16 x) (SignExt8to16 y))
+(Div8u x y) -> (DIVWU (ZeroExt8to16 x) (ZeroExt8to16 y))
+
+(Hmul64 x y) -> (HMULQ x y)
+(Hmul64u x y) -> (HMULQU x y)
+(Hmul32 x y) -> (HMULL x y)
+(Hmul32u x y) -> (HMULLU x y)
+(Hmul16 x y) -> (HMULW x y)
+(Hmul16u x y) -> (HMULWU x y)
+(Hmul8 x y) -> (HMULB x y)
+(Hmul8u x y) -> (HMULBU x y)
+
+(Avg64u x y) -> (AVGQU x y)
+
+(Mod64 x y) -> (MODQ x y)
+(Mod64u x y) -> (MODQU x y)
+(Mod32 x y) -> (MODL x y)
+(Mod32u x y) -> (MODLU x y)
+(Mod16 x y) -> (MODW x y)
+(Mod16u x y) -> (MODWU x y)
+(Mod8 x y) -> (MODW (SignExt8to16 x) (SignExt8to16 y))
+(Mod8u x y) -> (MODWU (ZeroExt8to16 x) (ZeroExt8to16 y))
+
+(And64 x y) -> (ANDQ x y)
+(And32 x y) -> (ANDL x y)
+(And16 x y) -> (ANDW x y)
+(And8 x y) -> (ANDB x y)
+
+(Or64 x y) -> (ORQ x y)
+(Or32 x y) -> (ORL x y)
+(Or16 x y) -> (ORW x y)
+(Or8 x y) -> (ORB x y)
+
+(Xor64 x y) -> (XORQ x y)
+(Xor32 x y) -> (XORL x y)
+(Xor16 x y) -> (XORW x y)
+(Xor8 x y) -> (XORB x y)
+
+(Neg64 x) -> (NEGQ x)
+(Neg32 x) -> (NEGL x)
+(Neg16 x) -> (NEGW x)
+(Neg8 x) -> (NEGB x)
+(Neg32F x) -> (PXOR x (MOVSSconst <config.Frontend().TypeFloat32()> [f2i(math.Copysign(0, -1))]))
+(Neg64F x) -> (PXOR x (MOVSDconst <config.Frontend().TypeFloat64()> [f2i(math.Copysign(0, -1))]))
+
+(Com64 x) -> (NOTQ x)
+(Com32 x) -> (NOTL x)
+(Com16 x) -> (NOTW x)
+(Com8 x) -> (NOTB x)
+
+(Sqrt x) -> (SQRTSD x)
+
+// Note: we always extend to 64 bits even though some ops don't need that many result bits.
+(SignExt8to16 x) -> (MOVBQSX x)
+(SignExt8to32 x) -> (MOVBQSX x)
+(SignExt8to64 x) -> (MOVBQSX x)
+(SignExt16to32 x) -> (MOVWQSX x)
+(SignExt16to64 x) -> (MOVWQSX x)
+(SignExt32to64 x) -> (MOVLQSX x)
+
+(ZeroExt8to16 x) -> (MOVBQZX x)
+(ZeroExt8to32 x) -> (MOVBQZX x)
+(ZeroExt8to64 x) -> (MOVBQZX x)
+(ZeroExt16to32 x) -> (MOVWQZX x)
+(ZeroExt16to64 x) -> (MOVWQZX x)
+(ZeroExt32to64 x) -> (MOVLQZX x)
+
+(Cvt32to32F x) -> (CVTSL2SS x)
+(Cvt32to64F x) -> (CVTSL2SD x)
+(Cvt64to32F x) -> (CVTSQ2SS x)
+(Cvt64to64F x) -> (CVTSQ2SD x)
+
+(Cvt32Fto32 x) -> (CVTTSS2SL x)
+(Cvt32Fto64 x) -> (CVTTSS2SQ x)
+(Cvt64Fto32 x) -> (CVTTSD2SL x)
+(Cvt64Fto64 x) -> (CVTTSD2SQ x)
+
+(Cvt32Fto64F x) -> (CVTSS2SD x)
+(Cvt64Fto32F x) -> (CVTSD2SS x)
+
+// Because we ignore high parts of registers, truncates are just copies.
+(Trunc16to8 x) -> x
+(Trunc32to8 x) -> x
+(Trunc32to16 x) -> x
+(Trunc64to8 x) -> x
+(Trunc64to16 x) -> x
+(Trunc64to32 x) -> x
+
+// Lowering shifts
+// Unsigned shifts need to return 0 if shift amount is >= width of shifted value.
+// result = (arg << shift) & (shift >= argbits ? 0 : 0xffffffffffffffff)
+// Note: for small shifts we generate 32 bits of mask even when we don't need it all.
+(Lsh64x64 <t> x y) -> (ANDQ (SHLQ <t> x y) (SBBQcarrymask <t> (CMPQconst y [64])))
+(Lsh64x32 <t> x y) -> (ANDQ (SHLQ <t> x y) (SBBQcarrymask <t> (CMPLconst y [64])))
+(Lsh64x16 <t> x y) -> (ANDQ (SHLQ <t> x y) (SBBQcarrymask <t> (CMPWconst y [64])))
+(Lsh64x8 <t> x y) -> (ANDQ (SHLQ <t> x y) (SBBQcarrymask <t> (CMPBconst y [64])))
+
+(Lsh32x64 <t> x y) -> (ANDL (SHLL <t> x y) (SBBLcarrymask <t> (CMPQconst y [32])))
+(Lsh32x32 <t> x y) -> (ANDL (SHLL <t> x y) (SBBLcarrymask <t> (CMPLconst y [32])))
+(Lsh32x16 <t> x y) -> (ANDL (SHLL <t> x y) (SBBLcarrymask <t> (CMPWconst y [32])))
+(Lsh32x8 <t> x y) -> (ANDL (SHLL <t> x y) (SBBLcarrymask <t> (CMPBconst y [32])))
+
+(Lsh16x64 <t> x y) -> (ANDW (SHLW <t> x y) (SBBLcarrymask <t> (CMPQconst y [16])))
+(Lsh16x32 <t> x y) -> (ANDW (SHLW <t> x y) (SBBLcarrymask <t> (CMPLconst y [16])))
+(Lsh16x16 <t> x y) -> (ANDW (SHLW <t> x y) (SBBLcarrymask <t> (CMPWconst y [16])))
+(Lsh16x8 <t> x y) -> (ANDW (SHLW <t> x y) (SBBLcarrymask <t> (CMPBconst y [16])))
+
+(Lsh8x64 <t> x y) -> (ANDB (SHLB <t> x y) (SBBLcarrymask <t> (CMPQconst y [8])))
+(Lsh8x32 <t> x y) -> (ANDB (SHLB <t> x y) (SBBLcarrymask <t> (CMPLconst y [8])))
+(Lsh8x16 <t> x y) -> (ANDB (SHLB <t> x y) (SBBLcarrymask <t> (CMPWconst y [8])))
+(Lsh8x8 <t> x y) -> (ANDB (SHLB <t> x y) (SBBLcarrymask <t> (CMPBconst y [8])))
+
+(Lrot64 <t> x [c]) -> (ROLQconst <t> [c&63] x)
+(Lrot32 <t> x [c]) -> (ROLLconst <t> [c&31] x)
+(Lrot16 <t> x [c]) -> (ROLWconst <t> [c&15] x)
+(Lrot8 <t> x [c]) -> (ROLBconst <t> [c&7] x)
+
+(Rsh64Ux64 <t> x y) -> (ANDQ (SHRQ <t> x y) (SBBQcarrymask <t> (CMPQconst y [64])))
+(Rsh64Ux32 <t> x y) -> (ANDQ (SHRQ <t> x y) (SBBQcarrymask <t> (CMPLconst y [64])))
+(Rsh64Ux16 <t> x y) -> (ANDQ (SHRQ <t> x y) (SBBQcarrymask <t> (CMPWconst y [64])))
+(Rsh64Ux8 <t> x y) -> (ANDQ (SHRQ <t> x y) (SBBQcarrymask <t> (CMPBconst y [64])))
+
+(Rsh32Ux64 <t> x y) -> (ANDL (SHRL <t> x y) (SBBLcarrymask <t> (CMPQconst y [32])))
+(Rsh32Ux32 <t> x y) -> (ANDL (SHRL <t> x y) (SBBLcarrymask <t> (CMPLconst y [32])))
+(Rsh32Ux16 <t> x y) -> (ANDL (SHRL <t> x y) (SBBLcarrymask <t> (CMPWconst y [32])))
+(Rsh32Ux8 <t> x y) -> (ANDL (SHRL <t> x y) (SBBLcarrymask <t> (CMPBconst y [32])))
+
+(Rsh16Ux64 <t> x y) -> (ANDW (SHRW <t> x y) (SBBLcarrymask <t> (CMPQconst y [16])))
+(Rsh16Ux32 <t> x y) -> (ANDW (SHRW <t> x y) (SBBLcarrymask <t> (CMPLconst y [16])))
+(Rsh16Ux16 <t> x y) -> (ANDW (SHRW <t> x y) (SBBLcarrymask <t> (CMPWconst y [16])))
+(Rsh16Ux8 <t> x y) -> (ANDW (SHRW <t> x y) (SBBLcarrymask <t> (CMPBconst y [16])))
+
+(Rsh8Ux64 <t> x y) -> (ANDB (SHRB <t> x y) (SBBLcarrymask <t> (CMPQconst y [8])))
+(Rsh8Ux32 <t> x y) -> (ANDB (SHRB <t> x y) (SBBLcarrymask <t> (CMPLconst y [8])))
+(Rsh8Ux16 <t> x y) -> (ANDB (SHRB <t> x y) (SBBLcarrymask <t> (CMPWconst y [8])))
+(Rsh8Ux8 <t> x y) -> (ANDB (SHRB <t> x y) (SBBLcarrymask <t> (CMPBconst y [8])))
+
+// Signed right shift needs to return 0/-1 if shift amount is >= width of shifted value.
+// We implement this by setting the shift value to -1 (all ones) if the shift value is >= width.
+// Note: for small shift widths we generate 32 bits of mask even when we don't need it all.
+(Rsh64x64 <t> x y) -> (SARQ <t> x (ORQ <y.Type> y (NOTQ <y.Type> (SBBQcarrymask <y.Type> (CMPQconst y [64])))))
+(Rsh64x32 <t> x y) -> (SARQ <t> x (ORL <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPLconst y [64])))))
+(Rsh64x16 <t> x y) -> (SARQ <t> x (ORW <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPWconst y [64])))))
+(Rsh64x8 <t> x y) -> (SARQ <t> x (ORB <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPBconst y [64])))))
+
+(Rsh32x64 <t> x y) -> (SARL <t> x (ORQ <y.Type> y (NOTQ <y.Type> (SBBQcarrymask <y.Type> (CMPQconst y [32])))))
+(Rsh32x32 <t> x y) -> (SARL <t> x (ORL <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPLconst y [32])))))
+(Rsh32x16 <t> x y) -> (SARL <t> x (ORW <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPWconst y [32])))))
+(Rsh32x8 <t> x y) -> (SARL <t> x (ORB <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPBconst y [32])))))
+
+(Rsh16x64 <t> x y) -> (SARW <t> x (ORQ <y.Type> y (NOTQ <y.Type> (SBBQcarrymask <y.Type> (CMPQconst y [16])))))
+(Rsh16x32 <t> x y) -> (SARW <t> x (ORL <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPLconst y [16])))))
+(Rsh16x16 <t> x y) -> (SARW <t> x (ORW <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPWconst y [16])))))
+(Rsh16x8 <t> x y) -> (SARW <t> x (ORB <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPBconst y [16])))))
+
+(Rsh8x64 <t> x y) -> (SARB <t> x (ORQ <y.Type> y (NOTQ <y.Type> (SBBQcarrymask <y.Type> (CMPQconst y [8])))))
+(Rsh8x32 <t> x y) -> (SARB <t> x (ORL <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPLconst y [8])))))
+(Rsh8x16 <t> x y) -> (SARB <t> x (ORW <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPWconst y [8])))))
+(Rsh8x8 <t> x y) -> (SARB <t> x (ORB <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPBconst y [8])))))
+
+(Less64 x y) -> (SETL (CMPQ x y))
+(Less32 x y) -> (SETL (CMPL x y))
+(Less16 x y) -> (SETL (CMPW x y))
+(Less8 x y) -> (SETL (CMPB x y))
+(Less64U x y) -> (SETB (CMPQ x y))
+(Less32U x y) -> (SETB (CMPL x y))
+(Less16U x y) -> (SETB (CMPW x y))
+(Less8U x y) -> (SETB (CMPB x y))
+// Use SETGF with reversed operands to dodge NaN case
+(Less64F x y) -> (SETGF (UCOMISD y x))
+(Less32F x y) -> (SETGF (UCOMISS y x))
+
+(Leq64 x y) -> (SETLE (CMPQ x y))
+(Leq32 x y) -> (SETLE (CMPL x y))
+(Leq16 x y) -> (SETLE (CMPW x y))
+(Leq8 x y) -> (SETLE (CMPB x y))
+(Leq64U x y) -> (SETBE (CMPQ x y))
+(Leq32U x y) -> (SETBE (CMPL x y))
+(Leq16U x y) -> (SETBE (CMPW x y))
+(Leq8U x y) -> (SETBE (CMPB x y))
+// Use SETGEF with reversed operands to dodge NaN case
+(Leq64F x y) -> (SETGEF (UCOMISD y x))
+(Leq32F x y) -> (SETGEF (UCOMISS y x))
+
+(Greater64 x y) -> (SETG (CMPQ x y))
+(Greater32 x y) -> (SETG (CMPL x y))
+(Greater16 x y) -> (SETG (CMPW x y))
+(Greater8 x y) -> (SETG (CMPB x y))
+(Greater64U x y) -> (SETA (CMPQ x y))
+(Greater32U x y) -> (SETA (CMPL x y))
+(Greater16U x y) -> (SETA (CMPW x y))
+(Greater8U x y) -> (SETA (CMPB x y))
+// Note: the Go assembler gets the UCOMISx operand order wrong, but it is right here.
+// The bug is accommodated during generation of assembly language.
+(Greater64F x y) -> (SETGF (UCOMISD x y))
+(Greater32F x y) -> (SETGF (UCOMISS x y))
+
+(Geq64 x y) -> (SETGE (CMPQ x y))
+(Geq32 x y) -> (SETGE (CMPL x y))
+(Geq16 x y) -> (SETGE (CMPW x y))
+(Geq8 x y) -> (SETGE (CMPB x y))
+(Geq64U x y) -> (SETAE (CMPQ x y))
+(Geq32U x y) -> (SETAE (CMPL x y))
+(Geq16U x y) -> (SETAE (CMPW x y))
+(Geq8U x y) -> (SETAE (CMPB x y))
+// Note: the Go assembler gets the UCOMISx operand order wrong, but it is right here.
+// The bug is accommodated during generation of assembly language.
+(Geq64F x y) -> (SETGEF (UCOMISD x y))
+(Geq32F x y) -> (SETGEF (UCOMISS x y))
+
+(Eq64 x y) -> (SETEQ (CMPQ x y))
+(Eq32 x y) -> (SETEQ (CMPL x y))
+(Eq16 x y) -> (SETEQ (CMPW x y))
+(Eq8 x y) -> (SETEQ (CMPB x y))
+(EqPtr x y) -> (SETEQ (CMPQ x y))
+(Eq64F x y) -> (SETEQF (UCOMISD x y))
+(Eq32F x y) -> (SETEQF (UCOMISS x y))
+
+(Neq64 x y) -> (SETNE (CMPQ x y))
+(Neq32 x y) -> (SETNE (CMPL x y))
+(Neq16 x y) -> (SETNE (CMPW x y))
+(Neq8 x y) -> (SETNE (CMPB x y))
+(NeqPtr x y) -> (SETNE (CMPQ x y))
+(Neq64F x y) -> (SETNEF (UCOMISD x y))
+(Neq32F x y) -> (SETNEF (UCOMISS x y))
+
+(Load <t> ptr mem) && (is64BitInt(t) || isPtr(t)) -> (MOVQload ptr mem)
+(Load <t> ptr mem) && is32BitInt(t) -> (MOVLload ptr mem)
+(Load <t> ptr mem) && is16BitInt(t) -> (MOVWload ptr mem)
+(Load <t> ptr mem) && (t.IsBoolean() || is8BitInt(t)) -> (MOVBload ptr mem)
+(Load <t> ptr mem) && is32BitFloat(t) -> (MOVSSload ptr mem)
+(Load <t> ptr mem) && is64BitFloat(t) -> (MOVSDload ptr mem)
+
+// These more specific FP versions of the Store pattern must come first.
+(Store [8] ptr val mem) && is64BitFloat(val.Type) -> (MOVSDstore ptr val mem)
+(Store [4] ptr val mem) && is32BitFloat(val.Type) -> (MOVSSstore ptr val mem)
+
+(Store [8] ptr val mem) -> (MOVQstore ptr val mem)
+(Store [4] ptr val mem) -> (MOVLstore ptr val mem)
+(Store [2] ptr val mem) -> (MOVWstore ptr val mem)
+(Store [1] ptr val mem) -> (MOVBstore ptr val mem)
+
+// We want this to stick out so the to/from ptr conversion is obvious.
+(Convert <t> x mem) -> (MOVQconvert <t> x mem)
+
+// checks
+(IsNonNil p) -> (SETNE (TESTQ p p))
+(IsInBounds idx len) -> (SETB (CMPQ idx len))
+(IsSliceInBounds idx len) -> (SETBE (CMPQ idx len))
+(NilCheck ptr mem) -> (LoweredNilCheck ptr mem)
+
+(GetG mem) -> (LoweredGetG mem)
+(GetClosurePtr) -> (LoweredGetClosurePtr)
+
+// Small moves
+(Move [0] _ _ mem) -> mem
+(Move [1] dst src mem) -> (MOVBstore dst (MOVBload src mem) mem)
+(Move [2] dst src mem) -> (MOVWstore dst (MOVWload src mem) mem)
+(Move [4] dst src mem) -> (MOVLstore dst (MOVLload src mem) mem)
+(Move [8] dst src mem) -> (MOVQstore dst (MOVQload src mem) mem)
+(Move [16] dst src mem) -> (MOVOstore dst (MOVOload src mem) mem)
+(Move [3] dst src mem) ->
+ (MOVBstore [2] dst (MOVBload [2] src mem)
+ (MOVWstore dst (MOVWload src mem) mem))
+(Move [5] dst src mem) ->
+ (MOVBstore [4] dst (MOVBload [4] src mem)
+ (MOVLstore dst (MOVLload src mem) mem))
+(Move [6] dst src mem) ->
+ (MOVWstore [4] dst (MOVWload [4] src mem)
+ (MOVLstore dst (MOVLload src mem) mem))
+(Move [7] dst src mem) ->
+ (MOVLstore [3] dst (MOVLload [3] src mem)
+ (MOVLstore dst (MOVLload src mem) mem))
+(Move [size] dst src mem) && size > 8 && size < 16 ->
+ (MOVQstore [size-8] dst (MOVQload [size-8] src mem)
+ (MOVQstore dst (MOVQload src mem) mem))
+
+// Adjust moves to be a multiple of 16 bytes.
+(Move [size] dst src mem) && size > 16 && size%16 != 0 && size%16 <= 8 ->
+ (Move [size-size%16] (ADDQconst <dst.Type> dst [size%16]) (ADDQconst <src.Type> src [size%16])
+ (MOVQstore dst (MOVQload src mem) mem))
+(Move [size] dst src mem) && size > 16 && size%16 != 0 && size%16 > 8 ->
+ (Move [size-size%16] (ADDQconst <dst.Type> dst [size%16]) (ADDQconst <src.Type> src [size%16])
+ (MOVOstore dst (MOVOload src mem) mem))
+
+// Medium copying uses a Duff's device.
+(Move [size] dst src mem) && size >= 32 && size <= 16*64 && size%16 == 0 ->
+ (DUFFCOPY [14*(64-size/16)] dst src mem)
+// 14 and 64 are magic constants. 14 is the number of bytes to encode:
+// MOVUPS (SI), X0
+// ADDQ $16, SI
+// MOVUPS X0, (DI)
+// ADDQ $16, DI
+// and 64 is the number of such blocks. See src/runtime/duff_amd64.s:duffcopy.
+
+// Large copying uses REP MOVSQ.
+(Move [size] dst src mem) && size > 16*64 && size%8 == 0 ->
+ (REPMOVSQ dst src (MOVQconst [size/8]) mem)
+
+(Not x) -> (XORBconst [1] x)
+
+(OffPtr [off] ptr) -> (ADDQconst [off] ptr)
+
+(Const8 [val]) -> (MOVBconst [val])
+(Const16 [val]) -> (MOVWconst [val])
+(Const32 [val]) -> (MOVLconst [val])
+(Const64 [val]) -> (MOVQconst [val])
+(Const32F [val]) -> (MOVSSconst [val])
+(Const64F [val]) -> (MOVSDconst [val])
+(ConstNil) -> (MOVQconst [0])
+(ConstBool [b]) -> (MOVBconst [b])
+
+(Addr {sym} base) -> (LEAQ {sym} base)
+
+(ITab (Load ptr mem)) -> (MOVQload ptr mem)
+
+// block rewrites
+(If (SETL cmp) yes no) -> (LT cmp yes no)
+(If (SETLE cmp) yes no) -> (LE cmp yes no)
+(If (SETG cmp) yes no) -> (GT cmp yes no)
+(If (SETGE cmp) yes no) -> (GE cmp yes no)
+(If (SETEQ cmp) yes no) -> (EQ cmp yes no)
+(If (SETNE cmp) yes no) -> (NE cmp yes no)
+(If (SETB cmp) yes no) -> (ULT cmp yes no)
+(If (SETBE cmp) yes no) -> (ULE cmp yes no)
+(If (SETA cmp) yes no) -> (UGT cmp yes no)
+(If (SETAE cmp) yes no) -> (UGE cmp yes no)
+
+// Special case for floating point - LF/LEF not generated
+(If (SETGF cmp) yes no) -> (UGT cmp yes no)
+(If (SETGEF cmp) yes no) -> (UGE cmp yes no)
+(If (SETEQF cmp) yes no) -> (EQF cmp yes no)
+(If (SETNEF cmp) yes no) -> (NEF cmp yes no)
+
+(If cond yes no) -> (NE (TESTB cond cond) yes no)
+
+(NE (TESTB (SETL cmp)) yes no) -> (LT cmp yes no)
+(NE (TESTB (SETLE cmp)) yes no) -> (LE cmp yes no)
+(NE (TESTB (SETG cmp)) yes no) -> (GT cmp yes no)
+(NE (TESTB (SETGE cmp)) yes no) -> (GE cmp yes no)
+(NE (TESTB (SETEQ cmp)) yes no) -> (EQ cmp yes no)
+(NE (TESTB (SETNE cmp)) yes no) -> (NE cmp yes no)
+(NE (TESTB (SETB cmp)) yes no) -> (ULT cmp yes no)
+(NE (TESTB (SETBE cmp)) yes no) -> (ULE cmp yes no)
+(NE (TESTB (SETA cmp)) yes no) -> (UGT cmp yes no)
+(NE (TESTB (SETAE cmp)) yes no) -> (UGE cmp yes no)
+
+// Special case for floating point - LF/LEF not generated
+(NE (TESTB (SETGF cmp)) yes no) -> (UGT cmp yes no)
+(NE (TESTB (SETGEF cmp)) yes no) -> (UGE cmp yes no)
+(NE (TESTB (SETEQF cmp)) yes no) -> (EQF cmp yes no)
+(NE (TESTB (SETNEF cmp)) yes no) -> (NEF cmp yes no)
+
+// Disabled because it interferes with the pattern match above and makes worse code.
+// (SETNEF x) -> (ORQ (SETNE <config.Frontend().TypeInt8()> x) (SETNAN <config.Frontend().TypeInt8()> x))
+// (SETEQF x) -> (ANDQ (SETEQ <config.Frontend().TypeInt8()> x) (SETORD <config.Frontend().TypeInt8()> x))
+
+(StaticCall [argwid] {target} mem) -> (CALLstatic [argwid] {target} mem)
+(ClosureCall [argwid] entry closure mem) -> (CALLclosure [argwid] entry closure mem)
+(DeferCall [argwid] mem) -> (CALLdefer [argwid] mem)
+(GoCall [argwid] mem) -> (CALLgo [argwid] mem)
+(InterCall [argwid] entry mem) -> (CALLinter [argwid] entry mem)
+
+// Rules below here apply some simple optimizations after lowering.
+// TODO: Should this be a separate pass?
+
+// fold constants into instructions
+(ADDQ x (MOVQconst [c])) && is32Bit(c) -> (ADDQconst [c] x)
+(ADDQ (MOVQconst [c]) x) && is32Bit(c) -> (ADDQconst [c] x)
+(ADDL x (MOVLconst [c])) -> (ADDLconst [c] x)
+(ADDL (MOVLconst [c]) x) -> (ADDLconst [c] x)
+(ADDW x (MOVWconst [c])) -> (ADDWconst [c] x)
+(ADDW (MOVWconst [c]) x) -> (ADDWconst [c] x)
+(ADDB x (MOVBconst [c])) -> (ADDBconst [c] x)
+(ADDB (MOVBconst [c]) x) -> (ADDBconst [c] x)
+
+(SUBQ x (MOVQconst [c])) && is32Bit(c) -> (SUBQconst x [c])
+(SUBQ (MOVQconst [c]) x) && is32Bit(c) -> (NEGQ (SUBQconst <v.Type> x [c]))
+(SUBL x (MOVLconst [c])) -> (SUBLconst x [c])
+(SUBL (MOVLconst [c]) x) -> (NEGL (SUBLconst <v.Type> x [c]))
+(SUBW x (MOVWconst [c])) -> (SUBWconst x [c])
+(SUBW (MOVWconst [c]) x) -> (NEGW (SUBWconst <v.Type> x [c]))
+(SUBB x (MOVBconst [c])) -> (SUBBconst x [c])
+(SUBB (MOVBconst [c]) x) -> (NEGB (SUBBconst <v.Type> x [c]))
+
+(MULQ x (MOVQconst [c])) && is32Bit(c) -> (MULQconst [c] x)
+(MULQ (MOVQconst [c]) x) && is32Bit(c) -> (MULQconst [c] x)
+(MULL x (MOVLconst [c])) -> (MULLconst [c] x)
+(MULL (MOVLconst [c]) x) -> (MULLconst [c] x)
+(MULW x (MOVWconst [c])) -> (MULWconst [c] x)
+(MULW (MOVWconst [c]) x) -> (MULWconst [c] x)
+(MULB x (MOVBconst [c])) -> (MULBconst [c] x)
+(MULB (MOVBconst [c]) x) -> (MULBconst [c] x)
+
+(ANDQ x (MOVQconst [c])) && is32Bit(c) -> (ANDQconst [c] x)
+(ANDQ (MOVQconst [c]) x) && is32Bit(c) -> (ANDQconst [c] x)
+(ANDL x (MOVLconst [c])) -> (ANDLconst [c] x)
+(ANDL (MOVLconst [c]) x) -> (ANDLconst [c] x)
+(ANDW x (MOVLconst [c])) -> (ANDWconst [c] x)
+(ANDW (MOVLconst [c]) x) -> (ANDWconst [c] x)
+(ANDW x (MOVWconst [c])) -> (ANDWconst [c] x)
+(ANDW (MOVWconst [c]) x) -> (ANDWconst [c] x)
+(ANDB x (MOVLconst [c])) -> (ANDBconst [c] x)
+(ANDB (MOVLconst [c]) x) -> (ANDBconst [c] x)
+(ANDB x (MOVBconst [c])) -> (ANDBconst [c] x)
+(ANDB (MOVBconst [c]) x) -> (ANDBconst [c] x)
+
+(ORQ x (MOVQconst [c])) && is32Bit(c) -> (ORQconst [c] x)
+(ORQ (MOVQconst [c]) x) && is32Bit(c) -> (ORQconst [c] x)
+(ORL x (MOVLconst [c])) -> (ORLconst [c] x)
+(ORL (MOVLconst [c]) x) -> (ORLconst [c] x)
+(ORW x (MOVWconst [c])) -> (ORWconst [c] x)
+(ORW (MOVWconst [c]) x) -> (ORWconst [c] x)
+(ORB x (MOVBconst [c])) -> (ORBconst [c] x)
+(ORB (MOVBconst [c]) x) -> (ORBconst [c] x)
+
+(XORQ x (MOVQconst [c])) && is32Bit(c) -> (XORQconst [c] x)
+(XORQ (MOVQconst [c]) x) && is32Bit(c) -> (XORQconst [c] x)
+(XORL x (MOVLconst [c])) -> (XORLconst [c] x)
+(XORL (MOVLconst [c]) x) -> (XORLconst [c] x)
+(XORW x (MOVWconst [c])) -> (XORWconst [c] x)
+(XORW (MOVWconst [c]) x) -> (XORWconst [c] x)
+(XORB x (MOVBconst [c])) -> (XORBconst [c] x)
+(XORB (MOVBconst [c]) x) -> (XORBconst [c] x)
+
+(SHLQ x (MOVQconst [c])) -> (SHLQconst [c&63] x)
+(SHLQ x (MOVLconst [c])) -> (SHLQconst [c&63] x)
+(SHLQ x (MOVWconst [c])) -> (SHLQconst [c&63] x)
+(SHLQ x (MOVBconst [c])) -> (SHLQconst [c&63] x)
+
+(SHLL x (MOVQconst [c])) -> (SHLLconst [c&31] x)
+(SHLL x (MOVLconst [c])) -> (SHLLconst [c&31] x)
+(SHLL x (MOVWconst [c])) -> (SHLLconst [c&31] x)
+(SHLL x (MOVBconst [c])) -> (SHLLconst [c&31] x)
+
+(SHLW x (MOVQconst [c])) -> (SHLWconst [c&31] x)
+(SHLW x (MOVLconst [c])) -> (SHLWconst [c&31] x)
+(SHLW x (MOVWconst [c])) -> (SHLWconst [c&31] x)
+(SHLW x (MOVBconst [c])) -> (SHLWconst [c&31] x)
+
+(SHLB x (MOVQconst [c])) -> (SHLBconst [c&31] x)
+(SHLB x (MOVLconst [c])) -> (SHLBconst [c&31] x)
+(SHLB x (MOVWconst [c])) -> (SHLBconst [c&31] x)
+(SHLB x (MOVBconst [c])) -> (SHLBconst [c&31] x)
+
+(SHRQ x (MOVQconst [c])) -> (SHRQconst [c&63] x)
+(SHRQ x (MOVLconst [c])) -> (SHRQconst [c&63] x)
+(SHRQ x (MOVWconst [c])) -> (SHRQconst [c&63] x)
+(SHRQ x (MOVBconst [c])) -> (SHRQconst [c&63] x)
+
+(SHRL x (MOVQconst [c])) -> (SHRLconst [c&31] x)
+(SHRL x (MOVLconst [c])) -> (SHRLconst [c&31] x)
+(SHRL x (MOVWconst [c])) -> (SHRLconst [c&31] x)
+(SHRL x (MOVBconst [c])) -> (SHRLconst [c&31] x)
+
+(SHRW x (MOVQconst [c])) -> (SHRWconst [c&31] x)
+(SHRW x (MOVLconst [c])) -> (SHRWconst [c&31] x)
+(SHRW x (MOVWconst [c])) -> (SHRWconst [c&31] x)
+(SHRW x (MOVBconst [c])) -> (SHRWconst [c&31] x)
+
+(SHRB x (MOVQconst [c])) -> (SHRBconst [c&31] x)
+(SHRB x (MOVLconst [c])) -> (SHRBconst [c&31] x)
+(SHRB x (MOVWconst [c])) -> (SHRBconst [c&31] x)
+(SHRB x (MOVBconst [c])) -> (SHRBconst [c&31] x)
+
+(SARQ x (MOVQconst [c])) -> (SARQconst [c&63] x)
+(SARQ x (MOVLconst [c])) -> (SARQconst [c&63] x)
+(SARQ x (MOVWconst [c])) -> (SARQconst [c&63] x)
+(SARQ x (MOVBconst [c])) -> (SARQconst [c&63] x)
+
+(SARL x (MOVQconst [c])) -> (SARLconst [c&31] x)
+(SARL x (MOVLconst [c])) -> (SARLconst [c&31] x)
+(SARL x (MOVWconst [c])) -> (SARLconst [c&31] x)
+(SARL x (MOVBconst [c])) -> (SARLconst [c&31] x)
+
+(SARW x (MOVQconst [c])) -> (SARWconst [c&31] x)
+(SARW x (MOVLconst [c])) -> (SARWconst [c&31] x)
+(SARW x (MOVWconst [c])) -> (SARWconst [c&31] x)
+(SARW x (MOVBconst [c])) -> (SARWconst [c&31] x)
+
+(SARB x (MOVQconst [c])) -> (SARBconst [c&31] x)
+(SARB x (MOVLconst [c])) -> (SARBconst [c&31] x)
+(SARB x (MOVWconst [c])) -> (SARBconst [c&31] x)
+(SARB x (MOVBconst [c])) -> (SARBconst [c&31] x)
+
+// Note: the word and byte shifts keep the low 5 bits (not the low 4 or 3 bits)
+// because the x86 instructions are defined to use all 5 bits of the shift even
+// for the small shifts. I don't think we'll ever generate a weird shift (e.g.
+// (SHLW x (MOVWconst [24]))), but just in case.
+
+(CMPQ x (MOVQconst [c])) && is32Bit(c) -> (CMPQconst x [c])
+(CMPQ (MOVQconst [c]) x) && is32Bit(c) -> (InvertFlags (CMPQconst x [c]))
+(CMPL x (MOVLconst [c])) -> (CMPLconst x [c])
+(CMPL (MOVLconst [c]) x) -> (InvertFlags (CMPLconst x [c]))
+(CMPW x (MOVWconst [c])) -> (CMPWconst x [c])
+(CMPW (MOVWconst [c]) x) -> (InvertFlags (CMPWconst x [c]))
+(CMPB x (MOVBconst [c])) -> (CMPBconst x [c])
+(CMPB (MOVBconst [c]) x) -> (InvertFlags (CMPBconst x [c]))
+
+// strength reduction
+(MULQconst [-1] x) -> (NEGQ x)
+(MULQconst [0] _) -> (MOVQconst [0])
+(MULQconst [1] x) -> x
+(MULQconst [3] x) -> (LEAQ2 x x)
+(MULQconst [5] x) -> (LEAQ4 x x)
+(MULQconst [9] x) -> (LEAQ8 x x)
+(MULQconst [c] x) && isPowerOfTwo(c) -> (SHLQconst [log2(c)] x)
+
+// combine add/shift into LEAQ
+(ADDQ x (SHLQconst [3] y)) -> (LEAQ8 x y)
+(ADDQ x (SHLQconst [2] y)) -> (LEAQ4 x y)
+(ADDQ x (SHLQconst [1] y)) -> (LEAQ2 x y)
+(ADDQ x (ADDQ y y)) -> (LEAQ2 x y)
+(ADDQ x (ADDQ x y)) -> (LEAQ2 y x)
+(ADDQ x (ADDQ y x)) -> (LEAQ2 y x)
+
+// combine ADDQ/ADDQconst into LEAQ1
+(ADDQconst [c] (ADDQ x y)) -> (LEAQ1 [c] x y)
+(ADDQ (ADDQconst [c] x) y) -> (LEAQ1 [c] x y)
+(ADDQ x (ADDQconst [c] y)) -> (LEAQ1 [c] x y)
+
+// fold ADDQ into LEAQ
+(ADDQconst [c] (LEAQ [d] {s} x)) -> (LEAQ [c+d] {s} x)
+(LEAQ [c] {s} (ADDQconst [d] x)) -> (LEAQ [c+d] {s} x)
+(LEAQ [c] {s} (ADDQ x y)) && x.Op != OpSB && y.Op != OpSB -> (LEAQ1 [c] {s} x y)
+(ADDQ x (LEAQ [c] {s} y)) && x.Op != OpSB && y.Op != OpSB -> (LEAQ1 [c] {s} x y)
+(ADDQ (LEAQ [c] {s} x) y) && x.Op != OpSB && y.Op != OpSB -> (LEAQ1 [c] {s} x y)
+
+// fold ADDQconst into leaqX
+(ADDQconst [c] (LEAQ1 [d] {s} x y)) -> (LEAQ1 [c+d] {s} x y)
+(ADDQconst [c] (LEAQ2 [d] {s} x y)) -> (LEAQ2 [c+d] {s} x y)
+(ADDQconst [c] (LEAQ4 [d] {s} x y)) -> (LEAQ4 [c+d] {s} x y)
+(ADDQconst [c] (LEAQ8 [d] {s} x y)) -> (LEAQ8 [c+d] {s} x y)
+(LEAQ1 [c] {s} (ADDQconst [d] x) y) && x.Op != OpSB -> (LEAQ1 [c+d] {s} x y)
+(LEAQ1 [c] {s} x (ADDQconst [d] y)) && y.Op != OpSB -> (LEAQ1 [c+d] {s} x y)
+(LEAQ2 [c] {s} (ADDQconst [d] x) y) && x.Op != OpSB -> (LEAQ2 [c+d] {s} x y)
+(LEAQ2 [c] {s} x (ADDQconst [d] y)) && y.Op != OpSB -> (LEAQ2 [c+2*d] {s} x y)
+(LEAQ4 [c] {s} (ADDQconst [d] x) y) && x.Op != OpSB -> (LEAQ4 [c+d] {s} x y)
+(LEAQ4 [c] {s} x (ADDQconst [d] y)) && y.Op != OpSB -> (LEAQ4 [c+4*d] {s} x y)
+(LEAQ8 [c] {s} (ADDQconst [d] x) y) && x.Op != OpSB -> (LEAQ8 [c+d] {s} x y)
+(LEAQ8 [c] {s} x (ADDQconst [d] y)) && y.Op != OpSB -> (LEAQ8 [c+8*d] {s} x y)
+
+// reverse ordering of compare instruction
+(SETL (InvertFlags x)) -> (SETG x)
+(SETG (InvertFlags x)) -> (SETL x)
+(SETB (InvertFlags x)) -> (SETA x)
+(SETA (InvertFlags x)) -> (SETB x)
+(SETLE (InvertFlags x)) -> (SETGE x)
+(SETGE (InvertFlags x)) -> (SETLE x)
+(SETBE (InvertFlags x)) -> (SETAE x)
+(SETAE (InvertFlags x)) -> (SETBE x)
+(SETEQ (InvertFlags x)) -> (SETEQ x)
+(SETNE (InvertFlags x)) -> (SETNE x)
+
+// sign extended loads
+// Note: The combined instruction must end up in the same block
+// as the original load. If not, we end up making a value with
+// memory type live in two different blocks, which can lead to
+// multiple memory values alive simultaneously.
+(MOVBQSX (MOVBload [off] {sym} ptr mem)) -> @v.Args[0].Block (MOVBQSXload <v.Type> [off] {sym} ptr mem)
+(MOVBQZX (MOVBload [off] {sym} ptr mem)) -> @v.Args[0].Block (MOVBQZXload <v.Type> [off] {sym} ptr mem)
+(MOVWQSX (MOVWload [off] {sym} ptr mem)) -> @v.Args[0].Block (MOVWQSXload <v.Type> [off] {sym} ptr mem)
+(MOVWQZX (MOVWload [off] {sym} ptr mem)) -> @v.Args[0].Block (MOVWQZXload <v.Type> [off] {sym} ptr mem)
+(MOVLQSX (MOVLload [off] {sym} ptr mem)) -> @v.Args[0].Block (MOVLQSXload <v.Type> [off] {sym} ptr mem)
+(MOVLQZX (MOVLload [off] {sym} ptr mem)) -> @v.Args[0].Block (MOVLQZXload <v.Type> [off] {sym} ptr mem)
+
+// replace load from same location as preceding store with copy
+(MOVBload [off] {sym} ptr (MOVBstore [off2] {sym2} ptr2 x _)) && sym == sym2 && off == off2 && isSamePtr(ptr, ptr2) -> x
+(MOVWload [off] {sym} ptr (MOVWstore [off2] {sym2} ptr2 x _)) && sym == sym2 && off == off2 && isSamePtr(ptr, ptr2) -> x
+(MOVLload [off] {sym} ptr (MOVLstore [off2] {sym2} ptr2 x _)) && sym == sym2 && off == off2 && isSamePtr(ptr, ptr2) -> x
+(MOVQload [off] {sym} ptr (MOVQstore [off2] {sym2} ptr2 x _)) && sym == sym2 && off == off2 && isSamePtr(ptr, ptr2) -> x
+
+// Fold extensions and ANDs together.
+(MOVBQZX (ANDBconst [c] x)) -> (ANDQconst [c & 0xff] x)
+(MOVWQZX (ANDWconst [c] x)) -> (ANDQconst [c & 0xffff] x)
+(MOVLQZX (ANDLconst [c] x)) -> (ANDQconst [c & 0xffffffff] x)
+(MOVBQSX (ANDBconst [c] x)) && c & 0x80 == 0 -> (ANDQconst [c & 0x7f] x)
+(MOVWQSX (ANDWconst [c] x)) && c & 0x8000 == 0 -> (ANDQconst [c & 0x7fff] x)
+(MOVLQSX (ANDLconst [c] x)) && c & 0x80000000 == 0 -> (ANDQconst [c & 0x7fffffff] x)
+
+// Don't extend before storing
+(MOVLstore [off] {sym} ptr (MOVLQSX x) mem) -> (MOVLstore [off] {sym} ptr x mem)
+(MOVWstore [off] {sym} ptr (MOVWQSX x) mem) -> (MOVWstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr (MOVBQSX x) mem) -> (MOVBstore [off] {sym} ptr x mem)
+(MOVLstore [off] {sym} ptr (MOVLQZX x) mem) -> (MOVLstore [off] {sym} ptr x mem)
+(MOVWstore [off] {sym} ptr (MOVWQZX x) mem) -> (MOVWstore [off] {sym} ptr x mem)
+(MOVBstore [off] {sym} ptr (MOVBQZX x) mem) -> (MOVBstore [off] {sym} ptr x mem)
+
+// fold constants into memory operations
+// Note that this is not always a good idea because if not all the uses of
+// the ADDQconst get eliminated, we still have to compute the ADDQconst and we now
+// have potentially two live values (ptr and (ADDQconst [off] ptr)) instead of one.
+// Nevertheless, let's do it!
+(MOVQload [off1] {sym} (ADDQconst [off2] ptr) mem) -> (MOVQload [addOff(off1, off2)] {sym} ptr mem)
+(MOVLload [off1] {sym} (ADDQconst [off2] ptr) mem) -> (MOVLload [addOff(off1, off2)] {sym} ptr mem)
+(MOVWload [off1] {sym} (ADDQconst [off2] ptr) mem) -> (MOVWload [addOff(off1, off2)] {sym} ptr mem)
+(MOVBload [off1] {sym} (ADDQconst [off2] ptr) mem) -> (MOVBload [addOff(off1, off2)] {sym} ptr mem)
+(MOVSSload [off1] {sym} (ADDQconst [off2] ptr) mem) -> (MOVSSload [addOff(off1, off2)] {sym} ptr mem)
+(MOVSDload [off1] {sym} (ADDQconst [off2] ptr) mem) -> (MOVSDload [addOff(off1, off2)] {sym} ptr mem)
+(MOVOload [off1] {sym} (ADDQconst [off2] ptr) mem) -> (MOVOload [addOff(off1, off2)] {sym} ptr mem)
+
+(MOVQstore [off1] {sym} (ADDQconst [off2] ptr) val mem) -> (MOVQstore [addOff(off1, off2)] {sym} ptr val mem)
+(MOVLstore [off1] {sym} (ADDQconst [off2] ptr) val mem) -> (MOVLstore [addOff(off1, off2)] {sym} ptr val mem)
+(MOVWstore [off1] {sym} (ADDQconst [off2] ptr) val mem) -> (MOVWstore [addOff(off1, off2)] {sym} ptr val mem)
+(MOVBstore [off1] {sym} (ADDQconst [off2] ptr) val mem) -> (MOVBstore [addOff(off1, off2)] {sym} ptr val mem)
+(MOVSSstore [off1] {sym} (ADDQconst [off2] ptr) val mem) -> (MOVSSstore [addOff(off1, off2)] {sym} ptr val mem)
+(MOVSDstore [off1] {sym} (ADDQconst [off2] ptr) val mem) -> (MOVSDstore [addOff(off1, off2)] {sym} ptr val mem)
+(MOVOstore [off1] {sym} (ADDQconst [off2] ptr) val mem) -> (MOVOstore [addOff(off1, off2)] {sym} ptr val mem)
+
+// Fold constants into stores.
+(MOVQstore [off] {sym} ptr (MOVQconst [c]) mem) && validValAndOff(c,off) ->
+ (MOVQstoreconst [makeValAndOff(c,off)] {sym} ptr mem)
+(MOVLstore [off] {sym} ptr (MOVLconst [c]) mem) && validOff(off) ->
+ (MOVLstoreconst [makeValAndOff(int64(int32(c)),off)] {sym} ptr mem)
+(MOVWstore [off] {sym} ptr (MOVWconst [c]) mem) && validOff(off) ->
+ (MOVWstoreconst [makeValAndOff(int64(int16(c)),off)] {sym} ptr mem)
+(MOVBstore [off] {sym} ptr (MOVBconst [c]) mem) && validOff(off) ->
+ (MOVBstoreconst [makeValAndOff(int64(int8(c)),off)] {sym} ptr mem)
+
+// Fold address offsets into constant stores.
+(MOVQstoreconst [sc] {s} (ADDQconst [off] ptr) mem) && ValAndOff(sc).canAdd(off) ->
+ (MOVQstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
+(MOVLstoreconst [sc] {s} (ADDQconst [off] ptr) mem) && ValAndOff(sc).canAdd(off) ->
+ (MOVLstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
+(MOVWstoreconst [sc] {s} (ADDQconst [off] ptr) mem) && ValAndOff(sc).canAdd(off) ->
+ (MOVWstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
+(MOVBstoreconst [sc] {s} (ADDQconst [off] ptr) mem) && ValAndOff(sc).canAdd(off) ->
+ (MOVBstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
+
+// We need to fold LEAQ into the MOVx ops so that the live variable analysis knows
+// what variables are being read/written by the ops.
+(MOVQload [off1] {sym1} (LEAQ [off2] {sym2} base) mem) && canMergeSym(sym1, sym2) ->
+ (MOVQload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+(MOVLload [off1] {sym1} (LEAQ [off2] {sym2} base) mem) && canMergeSym(sym1, sym2) ->
+ (MOVLload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+(MOVWload [off1] {sym1} (LEAQ [off2] {sym2} base) mem) && canMergeSym(sym1, sym2) ->
+ (MOVWload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+(MOVBload [off1] {sym1} (LEAQ [off2] {sym2} base) mem) && canMergeSym(sym1, sym2) ->
+ (MOVBload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+(MOVSSload [off1] {sym1} (LEAQ [off2] {sym2} base) mem) && canMergeSym(sym1, sym2) ->
+ (MOVSSload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+(MOVSDload [off1] {sym1} (LEAQ [off2] {sym2} base) mem) && canMergeSym(sym1, sym2) ->
+ (MOVSDload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+(MOVOload [off1] {sym1} (LEAQ [off2] {sym2} base) mem) && canMergeSym(sym1, sym2) ->
+ (MOVOload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+
+(MOVQstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVQstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+(MOVLstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVLstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+(MOVWstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVWstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+(MOVBstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVBstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+(MOVSSstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVSSstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+(MOVSDstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVSDstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+(MOVOstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVOstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+
+(MOVQstoreconst [sc] {sym1} (LEAQ [off] {sym2} ptr) mem) && canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off) ->
+ (MOVQstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
+(MOVLstoreconst [sc] {sym1} (LEAQ [off] {sym2} ptr) mem) && canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off) ->
+ (MOVLstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
+(MOVWstoreconst [sc] {sym1} (LEAQ [off] {sym2} ptr) mem) && canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off) ->
+ (MOVWstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
+(MOVBstoreconst [sc] {sym1} (LEAQ [off] {sym2} ptr) mem) && canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off) ->
+ (MOVBstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
+
+// Generate indexed loads and stores.
+(MOVBload [off1] {sym1} (LEAQ1 [off2] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
+ (MOVBloadidx1 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+(MOVWload [off1] {sym1} (LEAQ2 [off2] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
+ (MOVWloadidx2 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+(MOVLload [off1] {sym1} (LEAQ4 [off2] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
+ (MOVLloadidx4 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+(MOVQload [off1] {sym1} (LEAQ8 [off2] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
+ (MOVQloadidx8 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+(MOVSSload [off1] {sym1} (LEAQ4 [off2] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
+ (MOVSSloadidx4 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+(MOVSDload [off1] {sym1} (LEAQ8 [off2] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
+ (MOVSDloadidx8 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+
+(MOVBstore [off1] {sym1} (LEAQ1 [off2] {sym2} ptr idx) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVBstoreidx1 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+(MOVWstore [off1] {sym1} (LEAQ2 [off2] {sym2} ptr idx) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVWstoreidx2 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+(MOVLstore [off1] {sym1} (LEAQ4 [off2] {sym2} ptr idx) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVLstoreidx4 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+(MOVQstore [off1] {sym1} (LEAQ8 [off2] {sym2} ptr idx) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVQstoreidx8 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+(MOVSSstore [off1] {sym1} (LEAQ4 [off2] {sym2} ptr idx) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVSSstoreidx4 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+(MOVSDstore [off1] {sym1} (LEAQ8 [off2] {sym2} ptr idx) val mem) && canMergeSym(sym1, sym2) ->
+ (MOVSDstoreidx8 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+
+(MOVBload [off] {sym} (ADDQ ptr idx) mem) && ptr.Op != OpSB -> (MOVBloadidx1 [off] {sym} ptr idx mem)
+(MOVBstore [off] {sym} (ADDQ ptr idx) val mem) && ptr.Op != OpSB -> (MOVBstoreidx1 [off] {sym} ptr idx val mem)
+
+(MOVBstoreconst [x] {sym1} (LEAQ1 [off] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
+ (MOVBstoreconstidx1 [ValAndOff(x).add(off)] {mergeSym(sym1,sym2)} ptr idx mem)
+(MOVWstoreconst [x] {sym1} (LEAQ2 [off] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
+ (MOVWstoreconstidx2 [ValAndOff(x).add(off)] {mergeSym(sym1,sym2)} ptr idx mem)
+(MOVLstoreconst [x] {sym1} (LEAQ4 [off] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
+ (MOVLstoreconstidx4 [ValAndOff(x).add(off)] {mergeSym(sym1,sym2)} ptr idx mem)
+(MOVQstoreconst [x] {sym1} (LEAQ8 [off] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
+ (MOVQstoreconstidx8 [ValAndOff(x).add(off)] {mergeSym(sym1,sym2)} ptr idx mem)
+(MOVBstoreconst [x] {sym} (ADDQ ptr idx) mem) -> (MOVBstoreconstidx1 [x] {sym} ptr idx mem)
+
+// Combine ADDQconst into indexed loads and stores.
+(MOVBloadidx1 [c] {sym} (ADDQconst [d] ptr) idx mem) -> (MOVBloadidx1 [c+d] {sym} ptr idx mem)
+(MOVWloadidx2 [c] {sym} (ADDQconst [d] ptr) idx mem) -> (MOVWloadidx2 [c+d] {sym} ptr idx mem)
+(MOVLloadidx4 [c] {sym} (ADDQconst [d] ptr) idx mem) -> (MOVLloadidx4 [c+d] {sym} ptr idx mem)
+(MOVQloadidx8 [c] {sym} (ADDQconst [d] ptr) idx mem) -> (MOVQloadidx8 [c+d] {sym} ptr idx mem)
+(MOVSSloadidx4 [c] {sym} (ADDQconst [d] ptr) idx mem) -> (MOVSSloadidx4 [c+d] {sym} ptr idx mem)
+(MOVSDloadidx8 [c] {sym} (ADDQconst [d] ptr) idx mem) -> (MOVSDloadidx8 [c+d] {sym} ptr idx mem)
+
+(MOVBstoreidx1 [c] {sym} (ADDQconst [d] ptr) idx val mem) -> (MOVBstoreidx1 [c+d] {sym} ptr idx val mem)
+(MOVWstoreidx2 [c] {sym} (ADDQconst [d] ptr) idx val mem) -> (MOVWstoreidx2 [c+d] {sym} ptr idx val mem)
+(MOVLstoreidx4 [c] {sym} (ADDQconst [d] ptr) idx val mem) -> (MOVLstoreidx4 [c+d] {sym} ptr idx val mem)
+(MOVQstoreidx8 [c] {sym} (ADDQconst [d] ptr) idx val mem) -> (MOVQstoreidx8 [c+d] {sym} ptr idx val mem)
+(MOVSSstoreidx4 [c] {sym} (ADDQconst [d] ptr) idx val mem) -> (MOVSSstoreidx4 [c+d] {sym} ptr idx val mem)
+(MOVSDstoreidx8 [c] {sym} (ADDQconst [d] ptr) idx val mem) -> (MOVSDstoreidx8 [c+d] {sym} ptr idx val mem)
+
+(MOVBloadidx1 [c] {sym} ptr (ADDQconst [d] idx) mem) -> (MOVBloadidx1 [c+d] {sym} ptr idx mem)
+(MOVWloadidx2 [c] {sym} ptr (ADDQconst [d] idx) mem) -> (MOVWloadidx2 [c+2*d] {sym} ptr idx mem)
+(MOVLloadidx4 [c] {sym} ptr (ADDQconst [d] idx) mem) -> (MOVLloadidx4 [c+4*d] {sym} ptr idx mem)
+(MOVQloadidx8 [c] {sym} ptr (ADDQconst [d] idx) mem) -> (MOVQloadidx8 [c+8*d] {sym} ptr idx mem)
+(MOVSSloadidx4 [c] {sym} ptr (ADDQconst [d] idx) mem) -> (MOVSSloadidx4 [c+4*d] {sym} ptr idx mem)
+(MOVSDloadidx8 [c] {sym} ptr (ADDQconst [d] idx) mem) -> (MOVSDloadidx8 [c+8*d] {sym} ptr idx mem)
+
+(MOVBstoreidx1 [c] {sym} ptr (ADDQconst [d] idx) val mem) -> (MOVBstoreidx1 [c+d] {sym} ptr idx val mem)
+(MOVWstoreidx2 [c] {sym} ptr (ADDQconst [d] idx) val mem) -> (MOVWstoreidx2 [c+2*d] {sym} ptr idx val mem)
+(MOVLstoreidx4 [c] {sym} ptr (ADDQconst [d] idx) val mem) -> (MOVLstoreidx4 [c+4*d] {sym} ptr idx val mem)
+(MOVQstoreidx8 [c] {sym} ptr (ADDQconst [d] idx) val mem) -> (MOVQstoreidx8 [c+8*d] {sym} ptr idx val mem)
+(MOVSSstoreidx4 [c] {sym} ptr (ADDQconst [d] idx) val mem) -> (MOVSSstoreidx4 [c+4*d] {sym} ptr idx val mem)
+(MOVSDstoreidx8 [c] {sym} ptr (ADDQconst [d] idx) val mem) -> (MOVSDstoreidx8 [c+8*d] {sym} ptr idx val mem)
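The rules above add `2*d`, `4*d`, or `8*d` (rather than `d`) when the constant is folded out of the index operand, because the index register is scaled by the element size in the addressing mode. A tiny Go check of the `MOVWloadidx2` case, illustrative only:

```go
package main

// eaIdx2 models the effective address of a MOVWloadidx2-style operand:
// base + off + 2*idx. Rewriting idx = idx0 + d must therefore add 2*d
// to the displacement for the address to stay the same, which is
// exactly what the rules above do.
func eaIdx2(base, off, idx int64) int64 { return base + off + 2*idx }
```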
+
+(MOVBstoreconstidx1 [x] {sym} (ADDQconst [c] ptr) idx mem) ->
+ (MOVBstoreconstidx1 [ValAndOff(x).add(c)] {sym} ptr idx mem)
+(MOVWstoreconstidx2 [x] {sym} (ADDQconst [c] ptr) idx mem) ->
+ (MOVWstoreconstidx2 [ValAndOff(x).add(c)] {sym} ptr idx mem)
+(MOVLstoreconstidx4 [x] {sym} (ADDQconst [c] ptr) idx mem) ->
+ (MOVLstoreconstidx4 [ValAndOff(x).add(c)] {sym} ptr idx mem)
+(MOVQstoreconstidx8 [x] {sym} (ADDQconst [c] ptr) idx mem) ->
+ (MOVQstoreconstidx8 [ValAndOff(x).add(c)] {sym} ptr idx mem)
+
+(MOVBstoreconstidx1 [x] {sym} ptr (ADDQconst [c] idx) mem) ->
+ (MOVBstoreconstidx1 [ValAndOff(x).add(c)] {sym} ptr idx mem)
+(MOVWstoreconstidx2 [x] {sym} ptr (ADDQconst [c] idx) mem) ->
+ (MOVWstoreconstidx2 [ValAndOff(x).add(2*c)] {sym} ptr idx mem)
+(MOVLstoreconstidx4 [x] {sym} ptr (ADDQconst [c] idx) mem) ->
+ (MOVLstoreconstidx4 [ValAndOff(x).add(4*c)] {sym} ptr idx mem)
+(MOVQstoreconstidx8 [x] {sym} ptr (ADDQconst [c] idx) mem) ->
+ (MOVQstoreconstidx8 [ValAndOff(x).add(8*c)] {sym} ptr idx mem)
+
+// fold LEAQs together
+(LEAQ [off1] {sym1} (LEAQ [off2] {sym2} x)) && canMergeSym(sym1, sym2) ->
+ (LEAQ [addOff(off1,off2)] {mergeSym(sym1,sym2)} x)
+
+// LEAQ into LEAQ1
+(LEAQ1 [off1] {sym1} (LEAQ [off2] {sym2} x) y) && canMergeSym(sym1, sym2) && x.Op != OpSB ->
+ (LEAQ1 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+(LEAQ1 [off1] {sym1} x (LEAQ [off2] {sym2} y)) && canMergeSym(sym1, sym2) && y.Op != OpSB ->
+ (LEAQ1 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+
+// LEAQ1 into LEAQ
+(LEAQ [off1] {sym1} (LEAQ1 [off2] {sym2} x y)) && canMergeSym(sym1, sym2) ->
+ (LEAQ1 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+
+// LEAQ into LEAQ[248]
+(LEAQ2 [off1] {sym1} (LEAQ [off2] {sym2} x) y) && canMergeSym(sym1, sym2) && x.Op != OpSB ->
+ (LEAQ2 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+(LEAQ4 [off1] {sym1} (LEAQ [off2] {sym2} x) y) && canMergeSym(sym1, sym2) && x.Op != OpSB ->
+ (LEAQ4 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+(LEAQ8 [off1] {sym1} (LEAQ [off2] {sym2} x) y) && canMergeSym(sym1, sym2) && x.Op != OpSB ->
+ (LEAQ8 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+
+// LEAQ[248] into LEAQ
+(LEAQ [off1] {sym1} (LEAQ2 [off2] {sym2} x y)) && canMergeSym(sym1, sym2) ->
+ (LEAQ2 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+(LEAQ [off1] {sym1} (LEAQ4 [off2] {sym2} x y)) && canMergeSym(sym1, sym2) ->
+ (LEAQ4 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+(LEAQ [off1] {sym1} (LEAQ8 [off2] {sym2} x y)) && canMergeSym(sym1, sym2) ->
+ (LEAQ8 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+
+// Lower Zero instructions of small fixed size.
+(Zero [0] _ mem) -> mem
+(Zero [1] destptr mem) -> (MOVBstoreconst [0] destptr mem)
+(Zero [2] destptr mem) -> (MOVWstoreconst [0] destptr mem)
+(Zero [4] destptr mem) -> (MOVLstoreconst [0] destptr mem)
+(Zero [8] destptr mem) -> (MOVQstoreconst [0] destptr mem)
+
+(Zero [3] destptr mem) ->
+ (MOVBstoreconst [makeValAndOff(0,2)] destptr
+ (MOVWstoreconst [0] destptr mem))
+(Zero [5] destptr mem) ->
+ (MOVBstoreconst [makeValAndOff(0,4)] destptr
+ (MOVLstoreconst [0] destptr mem))
+(Zero [6] destptr mem) ->
+ (MOVWstoreconst [makeValAndOff(0,4)] destptr
+ (MOVLstoreconst [0] destptr mem))
+(Zero [7] destptr mem) ->
+ (MOVLstoreconst [makeValAndOff(0,3)] destptr
+ (MOVLstoreconst [0] destptr mem))
+
+// Strip off any fractional word zeroing.
+(Zero [size] destptr mem) && size%8 != 0 && size > 8 ->
+ (Zero [size-size%8] (ADDQconst destptr [size%8])
+ (MOVQstoreconst [0] destptr mem))
+
+// Zero small numbers of words directly.
+(Zero [16] destptr mem) ->
+ (MOVQstoreconst [makeValAndOff(0,8)] destptr
+ (MOVQstoreconst [0] destptr mem))
+(Zero [24] destptr mem) ->
+ (MOVQstoreconst [makeValAndOff(0,16)] destptr
+ (MOVQstoreconst [makeValAndOff(0,8)] destptr
+ (MOVQstoreconst [0] destptr mem)))
+(Zero [32] destptr mem) ->
+ (MOVQstoreconst [makeValAndOff(0,24)] destptr
+ (MOVQstoreconst [makeValAndOff(0,16)] destptr
+ (MOVQstoreconst [makeValAndOff(0,8)] destptr
+ (MOVQstoreconst [0] destptr mem))))
+
+// Medium-sized zeroing uses Duff's device.
+(Zero [size] destptr mem) && size <= 1024 && size%8 == 0 && size%16 != 0 ->
+ (Zero [size-8] (ADDQconst [8] destptr) (MOVQstore destptr (MOVQconst [0]) mem))
+(Zero [size] destptr mem) && size <= 1024 && size%16 == 0 ->
+ (DUFFZERO [duffStart(size)] (ADDQconst [duffAdj(size)] destptr) (MOVOconst [0]) mem)
+
+// Large zeroing uses REP STOSQ.
+(Zero [size] destptr mem) && size > 1024 && size%8 == 0 ->
+ (REPSTOSQ destptr (MOVQconst [size/8]) (MOVQconst [0]) mem)
+
+// Absorb InvertFlags into branches.
+(LT (InvertFlags cmp) yes no) -> (GT cmp yes no)
+(GT (InvertFlags cmp) yes no) -> (LT cmp yes no)
+(LE (InvertFlags cmp) yes no) -> (GE cmp yes no)
+(GE (InvertFlags cmp) yes no) -> (LE cmp yes no)
+(ULT (InvertFlags cmp) yes no) -> (UGT cmp yes no)
+(UGT (InvertFlags cmp) yes no) -> (ULT cmp yes no)
+(ULE (InvertFlags cmp) yes no) -> (UGE cmp yes no)
+(UGE (InvertFlags cmp) yes no) -> (ULE cmp yes no)
+(EQ (InvertFlags cmp) yes no) -> (EQ cmp yes no)
+(NE (InvertFlags cmp) yes no) -> (NE cmp yes no)
+
+// Constant comparisons.
+(CMPQconst (MOVQconst [x]) [y]) && x==y -> (FlagEQ)
+(CMPQconst (MOVQconst [x]) [y]) && x<y && uint64(x)<uint64(y) -> (FlagLT_ULT)
+(CMPQconst (MOVQconst [x]) [y]) && x<y && uint64(x)>uint64(y) -> (FlagLT_UGT)
+(CMPQconst (MOVQconst [x]) [y]) && x>y && uint64(x)<uint64(y) -> (FlagGT_ULT)
+(CMPQconst (MOVQconst [x]) [y]) && x>y && uint64(x)>uint64(y) -> (FlagGT_UGT)
+(CMPLconst (MOVLconst [x]) [y]) && int32(x)==int32(y) -> (FlagEQ)
+(CMPLconst (MOVLconst [x]) [y]) && int32(x)<int32(y) && uint32(x)<uint32(y) -> (FlagLT_ULT)
+(CMPLconst (MOVLconst [x]) [y]) && int32(x)<int32(y) && uint32(x)>uint32(y) -> (FlagLT_UGT)
+(CMPLconst (MOVLconst [x]) [y]) && int32(x)>int32(y) && uint32(x)<uint32(y) -> (FlagGT_ULT)
+(CMPLconst (MOVLconst [x]) [y]) && int32(x)>int32(y) && uint32(x)>uint32(y) -> (FlagGT_UGT)
+(CMPWconst (MOVWconst [x]) [y]) && int16(x)==int16(y) -> (FlagEQ)
+(CMPWconst (MOVWconst [x]) [y]) && int16(x)<int16(y) && uint16(x)<uint16(y) -> (FlagLT_ULT)
+(CMPWconst (MOVWconst [x]) [y]) && int16(x)<int16(y) && uint16(x)>uint16(y) -> (FlagLT_UGT)
+(CMPWconst (MOVWconst [x]) [y]) && int16(x)>int16(y) && uint16(x)<uint16(y) -> (FlagGT_ULT)
+(CMPWconst (MOVWconst [x]) [y]) && int16(x)>int16(y) && uint16(x)>uint16(y) -> (FlagGT_UGT)
+(CMPBconst (MOVBconst [x]) [y]) && int8(x)==int8(y) -> (FlagEQ)
+(CMPBconst (MOVBconst [x]) [y]) && int8(x)<int8(y) && uint8(x)<uint8(y) -> (FlagLT_ULT)
+(CMPBconst (MOVBconst [x]) [y]) && int8(x)<int8(y) && uint8(x)>uint8(y) -> (FlagLT_UGT)
+(CMPBconst (MOVBconst [x]) [y]) && int8(x)>int8(y) && uint8(x)<uint8(y) -> (FlagGT_ULT)
+(CMPBconst (MOVBconst [x]) [y]) && int8(x)>int8(y) && uint8(x)>uint8(y) -> (FlagGT_UGT)
+
+// Other known comparisons.
+(CMPQconst (ANDQconst _ [m]) [n]) && m+1==n && isPowerOfTwo(n) -> (FlagLT_ULT)
+(CMPLconst (ANDLconst _ [m]) [n]) && int32(m)+1==int32(n) && isPowerOfTwo(int64(int32(n))) -> (FlagLT_ULT)
+(CMPWconst (ANDWconst _ [m]) [n]) && int16(m)+1==int16(n) && isPowerOfTwo(int64(int16(n))) -> (FlagLT_ULT)
+(CMPBconst (ANDBconst _ [m]) [n]) && int8(m)+1==int8(n) && isPowerOfTwo(int64(int8(n))) -> (FlagLT_ULT)
+// TODO: DIVxU also.
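The AND/CMP rules above are sound because `x & m` can never exceed `m`, so when `n == m+1` the unsigned comparison is statically known to be less-than. A quick Go illustration of that invariant (not part of this rules file):

```go
package main

// andMaskedLess checks the property the CMPxconst/ANDxconst rules
// exploit: if m+1 == n and n is a power of two, then x&m lies in
// [0, m], so the unsigned comparison (x & m) < n always holds.
func andMaskedLess(x, m uint64) bool {
	n := m + 1
	pow2 := n != 0 && n&(n-1) == 0
	return pow2 && x&m < n
}
```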
+
+// Absorb flag constants into SBB ops.
+(SBBQcarrymask (FlagEQ)) -> (MOVQconst [0])
+(SBBQcarrymask (FlagLT_ULT)) -> (MOVQconst [-1])
+(SBBQcarrymask (FlagLT_UGT)) -> (MOVQconst [0])
+(SBBQcarrymask (FlagGT_ULT)) -> (MOVQconst [-1])
+(SBBQcarrymask (FlagGT_UGT)) -> (MOVQconst [0])
+(SBBLcarrymask (FlagEQ)) -> (MOVLconst [0])
+(SBBLcarrymask (FlagLT_ULT)) -> (MOVLconst [-1])
+(SBBLcarrymask (FlagLT_UGT)) -> (MOVLconst [0])
+(SBBLcarrymask (FlagGT_ULT)) -> (MOVLconst [-1])
+(SBBLcarrymask (FlagGT_UGT)) -> (MOVLconst [0])
+
+// Absorb flag constants into branches.
+(EQ (FlagEQ) yes no) -> (First nil yes no)
+(EQ (FlagLT_ULT) yes no) -> (First nil no yes)
+(EQ (FlagLT_UGT) yes no) -> (First nil no yes)
+(EQ (FlagGT_ULT) yes no) -> (First nil no yes)
+(EQ (FlagGT_UGT) yes no) -> (First nil no yes)
+
+(NE (FlagEQ) yes no) -> (First nil no yes)
+(NE (FlagLT_ULT) yes no) -> (First nil yes no)
+(NE (FlagLT_UGT) yes no) -> (First nil yes no)
+(NE (FlagGT_ULT) yes no) -> (First nil yes no)
+(NE (FlagGT_UGT) yes no) -> (First nil yes no)
+
+(LT (FlagEQ) yes no) -> (First nil no yes)
+(LT (FlagLT_ULT) yes no) -> (First nil yes no)
+(LT (FlagLT_UGT) yes no) -> (First nil yes no)
+(LT (FlagGT_ULT) yes no) -> (First nil no yes)
+(LT (FlagGT_UGT) yes no) -> (First nil no yes)
+
+(LE (FlagEQ) yes no) -> (First nil yes no)
+(LE (FlagLT_ULT) yes no) -> (First nil yes no)
+(LE (FlagLT_UGT) yes no) -> (First nil yes no)
+(LE (FlagGT_ULT) yes no) -> (First nil no yes)
+(LE (FlagGT_UGT) yes no) -> (First nil no yes)
+
+(GT (FlagEQ) yes no) -> (First nil no yes)
+(GT (FlagLT_ULT) yes no) -> (First nil no yes)
+(GT (FlagLT_UGT) yes no) -> (First nil no yes)
+(GT (FlagGT_ULT) yes no) -> (First nil yes no)
+(GT (FlagGT_UGT) yes no) -> (First nil yes no)
+
+(GE (FlagEQ) yes no) -> (First nil yes no)
+(GE (FlagLT_ULT) yes no) -> (First nil no yes)
+(GE (FlagLT_UGT) yes no) -> (First nil no yes)
+(GE (FlagGT_ULT) yes no) -> (First nil yes no)
+(GE (FlagGT_UGT) yes no) -> (First nil yes no)
+
+(ULT (FlagEQ) yes no) -> (First nil no yes)
+(ULT (FlagLT_ULT) yes no) -> (First nil yes no)
+(ULT (FlagLT_UGT) yes no) -> (First nil no yes)
+(ULT (FlagGT_ULT) yes no) -> (First nil yes no)
+(ULT (FlagGT_UGT) yes no) -> (First nil no yes)
+
+(ULE (FlagEQ) yes no) -> (First nil yes no)
+(ULE (FlagLT_ULT) yes no) -> (First nil yes no)
+(ULE (FlagLT_UGT) yes no) -> (First nil no yes)
+(ULE (FlagGT_ULT) yes no) -> (First nil yes no)
+(ULE (FlagGT_UGT) yes no) -> (First nil no yes)
+
+(UGT (FlagEQ) yes no) -> (First nil no yes)
+(UGT (FlagLT_ULT) yes no) -> (First nil no yes)
+(UGT (FlagLT_UGT) yes no) -> (First nil yes no)
+(UGT (FlagGT_ULT) yes no) -> (First nil no yes)
+(UGT (FlagGT_UGT) yes no) -> (First nil yes no)
+
+(UGE (FlagEQ) yes no) -> (First nil yes no)
+(UGE (FlagLT_ULT) yes no) -> (First nil no yes)
+(UGE (FlagLT_UGT) yes no) -> (First nil yes no)
+(UGE (FlagGT_ULT) yes no) -> (First nil no yes)
+(UGE (FlagGT_UGT) yes no) -> (First nil yes no)
+
+// Absorb flag constants into SETxx ops.
+(SETEQ (FlagEQ)) -> (MOVBconst [1])
+(SETEQ (FlagLT_ULT)) -> (MOVBconst [0])
+(SETEQ (FlagLT_UGT)) -> (MOVBconst [0])
+(SETEQ (FlagGT_ULT)) -> (MOVBconst [0])
+(SETEQ (FlagGT_UGT)) -> (MOVBconst [0])
+
+(SETNE (FlagEQ)) -> (MOVBconst [0])
+(SETNE (FlagLT_ULT)) -> (MOVBconst [1])
+(SETNE (FlagLT_UGT)) -> (MOVBconst [1])
+(SETNE (FlagGT_ULT)) -> (MOVBconst [1])
+(SETNE (FlagGT_UGT)) -> (MOVBconst [1])
+
+(SETL (FlagEQ)) -> (MOVBconst [0])
+(SETL (FlagLT_ULT)) -> (MOVBconst [1])
+(SETL (FlagLT_UGT)) -> (MOVBconst [1])
+(SETL (FlagGT_ULT)) -> (MOVBconst [0])
+(SETL (FlagGT_UGT)) -> (MOVBconst [0])
+
+(SETLE (FlagEQ)) -> (MOVBconst [1])
+(SETLE (FlagLT_ULT)) -> (MOVBconst [1])
+(SETLE (FlagLT_UGT)) -> (MOVBconst [1])
+(SETLE (FlagGT_ULT)) -> (MOVBconst [0])
+(SETLE (FlagGT_UGT)) -> (MOVBconst [0])
+
+(SETG (FlagEQ)) -> (MOVBconst [0])
+(SETG (FlagLT_ULT)) -> (MOVBconst [0])
+(SETG (FlagLT_UGT)) -> (MOVBconst [0])
+(SETG (FlagGT_ULT)) -> (MOVBconst [1])
+(SETG (FlagGT_UGT)) -> (MOVBconst [1])
+
+(SETGE (FlagEQ)) -> (MOVBconst [1])
+(SETGE (FlagLT_ULT)) -> (MOVBconst [0])
+(SETGE (FlagLT_UGT)) -> (MOVBconst [0])
+(SETGE (FlagGT_ULT)) -> (MOVBconst [1])
+(SETGE (FlagGT_UGT)) -> (MOVBconst [1])
+
+(SETB (FlagEQ)) -> (MOVBconst [0])
+(SETB (FlagLT_ULT)) -> (MOVBconst [1])
+(SETB (FlagLT_UGT)) -> (MOVBconst [0])
+(SETB (FlagGT_ULT)) -> (MOVBconst [1])
+(SETB (FlagGT_UGT)) -> (MOVBconst [0])
+
+(SETBE (FlagEQ)) -> (MOVBconst [1])
+(SETBE (FlagLT_ULT)) -> (MOVBconst [1])
+(SETBE (FlagLT_UGT)) -> (MOVBconst [0])
+(SETBE (FlagGT_ULT)) -> (MOVBconst [1])
+(SETBE (FlagGT_UGT)) -> (MOVBconst [0])
+
+(SETA (FlagEQ)) -> (MOVBconst [0])
+(SETA (FlagLT_ULT)) -> (MOVBconst [0])
+(SETA (FlagLT_UGT)) -> (MOVBconst [1])
+(SETA (FlagGT_ULT)) -> (MOVBconst [0])
+(SETA (FlagGT_UGT)) -> (MOVBconst [1])
+
+(SETAE (FlagEQ)) -> (MOVBconst [1])
+(SETAE (FlagLT_ULT)) -> (MOVBconst [0])
+(SETAE (FlagLT_UGT)) -> (MOVBconst [1])
+(SETAE (FlagGT_ULT)) -> (MOVBconst [0])
+(SETAE (FlagGT_UGT)) -> (MOVBconst [1])
+
+// Remove redundant *const ops
+(ADDQconst [0] x) -> x
+(ADDLconst [c] x) && int32(c)==0 -> x
+(ADDWconst [c] x) && int16(c)==0 -> x
+(ADDBconst [c] x) && int8(c)==0 -> x
+(SUBQconst [0] x) -> x
+(SUBLconst [c] x) && int32(c) == 0 -> x
+(SUBWconst [c] x) && int16(c) == 0 -> x
+(SUBBconst [c] x) && int8(c) == 0 -> x
+(ANDQconst [0] _) -> (MOVQconst [0])
+(ANDLconst [c] _) && int32(c)==0 -> (MOVLconst [0])
+(ANDWconst [c] _) && int16(c)==0 -> (MOVWconst [0])
+(ANDBconst [c] _) && int8(c)==0 -> (MOVBconst [0])
+(ANDQconst [-1] x) -> x
+(ANDLconst [c] x) && int32(c)==-1 -> x
+(ANDWconst [c] x) && int16(c)==-1 -> x
+(ANDBconst [c] x) && int8(c)==-1 -> x
+(ORQconst [0] x) -> x
+(ORLconst [c] x) && int32(c)==0 -> x
+(ORWconst [c] x) && int16(c)==0 -> x
+(ORBconst [c] x) && int8(c)==0 -> x
+(ORQconst [-1] _) -> (MOVQconst [-1])
+(ORLconst [c] _) && int32(c)==-1 -> (MOVLconst [-1])
+(ORWconst [c] _) && int16(c)==-1 -> (MOVWconst [-1])
+(ORBconst [c] _) && int8(c)==-1 -> (MOVBconst [-1])
+(XORQconst [0] x) -> x
+(XORLconst [c] x) && int32(c)==0 -> x
+(XORWconst [c] x) && int16(c)==0 -> x
+(XORBconst [c] x) && int8(c)==0 -> x
+
+// generic constant folding
+// TODO: more of this
+(ADDQconst [c] (MOVQconst [d])) -> (MOVQconst [c+d])
+(ADDLconst [c] (MOVLconst [d])) -> (MOVLconst [c+d])
+(ADDWconst [c] (MOVWconst [d])) -> (MOVWconst [c+d])
+(ADDBconst [c] (MOVBconst [d])) -> (MOVBconst [c+d])
+(ADDQconst [c] (ADDQconst [d] x)) -> (ADDQconst [c+d] x)
+(ADDLconst [c] (ADDLconst [d] x)) -> (ADDLconst [c+d] x)
+(ADDWconst [c] (ADDWconst [d] x)) -> (ADDWconst [c+d] x)
+(ADDBconst [c] (ADDBconst [d] x)) -> (ADDBconst [c+d] x)
+(SUBQconst [c] (MOVQconst [d])) -> (MOVQconst [d-c])
+(SUBLconst [c] (MOVLconst [d])) -> (MOVLconst [d-c])
+(SUBWconst [c] (MOVWconst [d])) -> (MOVWconst [d-c])
+(SUBBconst [c] (MOVBconst [d])) -> (MOVBconst [d-c])
+(SUBQconst [c] (SUBQconst [d] x)) -> (ADDQconst [-c-d] x)
+(SUBLconst [c] (SUBLconst [d] x)) -> (ADDLconst [-c-d] x)
+(SUBWconst [c] (SUBWconst [d] x)) -> (ADDWconst [-c-d] x)
+(SUBBconst [c] (SUBBconst [d] x)) -> (ADDBconst [-c-d] x)
+(SARQconst [c] (MOVQconst [d])) -> (MOVQconst [d>>uint64(c)])
+(SARLconst [c] (MOVQconst [d])) -> (MOVQconst [d>>uint64(c)])
+(SARWconst [c] (MOVQconst [d])) -> (MOVQconst [d>>uint64(c)])
+(SARBconst [c] (MOVQconst [d])) -> (MOVQconst [d>>uint64(c)])
+(NEGQ (MOVQconst [c])) -> (MOVQconst [-c])
+(NEGL (MOVLconst [c])) -> (MOVLconst [-c])
+(NEGW (MOVWconst [c])) -> (MOVWconst [-c])
+(NEGB (MOVBconst [c])) -> (MOVBconst [-c])
+(MULQconst [c] (MOVQconst [d])) -> (MOVQconst [c*d])
+(MULLconst [c] (MOVLconst [d])) -> (MOVLconst [c*d])
+(MULWconst [c] (MOVWconst [d])) -> (MOVWconst [c*d])
+(MULBconst [c] (MOVBconst [d])) -> (MOVBconst [c*d])
+(ANDQconst [c] (MOVQconst [d])) -> (MOVQconst [c&d])
+(ANDLconst [c] (MOVLconst [d])) -> (MOVLconst [c&d])
+(ANDWconst [c] (MOVWconst [d])) -> (MOVWconst [c&d])
+(ANDBconst [c] (MOVBconst [d])) -> (MOVBconst [c&d])
+(ORQconst [c] (MOVQconst [d])) -> (MOVQconst [c|d])
+(ORLconst [c] (MOVLconst [d])) -> (MOVLconst [c|d])
+(ORWconst [c] (MOVWconst [d])) -> (MOVWconst [c|d])
+(ORBconst [c] (MOVBconst [d])) -> (MOVBconst [c|d])
+(XORQconst [c] (MOVQconst [d])) -> (MOVQconst [c^d])
+(XORLconst [c] (MOVLconst [d])) -> (MOVLconst [c^d])
+(XORWconst [c] (MOVWconst [d])) -> (MOVWconst [c^d])
+(XORBconst [c] (MOVBconst [d])) -> (MOVBconst [c^d])
+(NOTQ (MOVQconst [c])) -> (MOVQconst [^c])
+(NOTL (MOVLconst [c])) -> (MOVLconst [^c])
+(NOTW (MOVWconst [c])) -> (MOVWconst [^c])
+(NOTB (MOVBconst [c])) -> (MOVBconst [^c])
+
+// generic simplifications
+// TODO: more of this
+(ADDQ x (NEGQ y)) -> (SUBQ x y)
+(ADDL x (NEGL y)) -> (SUBL x y)
+(ADDW x (NEGW y)) -> (SUBW x y)
+(ADDB x (NEGB y)) -> (SUBB x y)
+(SUBQ x x) -> (MOVQconst [0])
+(SUBL x x) -> (MOVLconst [0])
+(SUBW x x) -> (MOVWconst [0])
+(SUBB x x) -> (MOVBconst [0])
+(ANDQ x x) -> x
+(ANDL x x) -> x
+(ANDW x x) -> x
+(ANDB x x) -> x
+(ORQ x x) -> x
+(ORL x x) -> x
+(ORW x x) -> x
+(ORB x x) -> x
+(XORQ x x) -> (MOVQconst [0])
+(XORL x x) -> (MOVLconst [0])
+(XORW x x) -> (MOVWconst [0])
+(XORB x x) -> (MOVBconst [0])
+
+// Check AND against 0.
+(CMPQconst (ANDQ x y) [0]) -> (TESTQ x y)
+(CMPLconst (ANDL x y) [0]) -> (TESTL x y)
+(CMPWconst (ANDW x y) [0]) -> (TESTW x y)
+(CMPBconst (ANDB x y) [0]) -> (TESTB x y)
+(CMPQconst (ANDQconst [c] x) [0]) -> (TESTQconst [c] x)
+(CMPLconst (ANDLconst [c] x) [0]) -> (TESTLconst [c] x)
+(CMPWconst (ANDWconst [c] x) [0]) -> (TESTWconst [c] x)
+(CMPBconst (ANDBconst [c] x) [0]) -> (TESTBconst [c] x)
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+import "strings"
+
+// copied from ../../amd64/reg.go
+var regNamesAMD64 = []string{
+ ".AX",
+ ".CX",
+ ".DX",
+ ".BX",
+ ".SP",
+ ".BP",
+ ".SI",
+ ".DI",
+ ".R8",
+ ".R9",
+ ".R10",
+ ".R11",
+ ".R12",
+ ".R13",
+ ".R14",
+ ".R15",
+ ".X0",
+ ".X1",
+ ".X2",
+ ".X3",
+ ".X4",
+ ".X5",
+ ".X6",
+ ".X7",
+ ".X8",
+ ".X9",
+ ".X10",
+ ".X11",
+ ".X12",
+ ".X13",
+ ".X14",
+ ".X15",
+
+ // pseudo-registers
+ ".SB",
+ ".FLAGS",
+}
+
+func init() {
+ // Make map from reg names to reg integers.
+ if len(regNamesAMD64) > 64 {
+ panic("too many registers")
+ }
+ num := map[string]int{}
+ for i, name := range regNamesAMD64 {
+ if name[0] != '.' {
+ panic("register name " + name + " does not start with '.'")
+ }
+ num[name[1:]] = i
+ }
+ buildReg := func(s string) regMask {
+ m := regMask(0)
+ for _, r := range strings.Split(s, " ") {
+ if n, ok := num[r]; ok {
+ m |= regMask(1) << uint(n)
+ continue
+ }
+ panic("register " + r + " not found")
+ }
+ return m
+ }
+
+ // Common individual register masks
+ var (
+ ax = buildReg("AX")
+ cx = buildReg("CX")
+ dx = buildReg("DX")
+ x15 = buildReg("X15")
+ gp = buildReg("AX CX DX BX BP SI DI R8 R9 R10 R11 R12 R13 R14 R15")
+ fp = buildReg("X0 X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15")
+ gpsp = gp | buildReg("SP")
+ gpspsb = gpsp | buildReg("SB")
+ flags = buildReg("FLAGS")
+ callerSave = gp | fp | flags
+ )
+ // Common slices of register masks
+ var (
+ gponly = []regMask{gp}
+ fponly = []regMask{fp}
+ flagsonly = []regMask{flags}
+ )
+
+ // Common regInfo
+ var (
+ gp01 = regInfo{inputs: []regMask{}, outputs: gponly}
+ gp11 = regInfo{inputs: []regMask{gpsp}, outputs: gponly, clobbers: flags}
+ gp11nf = regInfo{inputs: []regMask{gpsp}, outputs: gponly} // nf: no flags clobbered
+ gp11sb = regInfo{inputs: []regMask{gpspsb}, outputs: gponly}
+ gp21 = regInfo{inputs: []regMask{gpsp, gpsp}, outputs: gponly, clobbers: flags}
+ gp21sb = regInfo{inputs: []regMask{gpspsb, gpsp}, outputs: gponly}
+ gp21shift = regInfo{inputs: []regMask{gpsp, cx}, outputs: []regMask{gp &^ cx}, clobbers: flags}
+ gp11div = regInfo{inputs: []regMask{ax, gpsp &^ dx}, outputs: []regMask{ax},
+ clobbers: dx | flags}
+ gp11hmul = regInfo{inputs: []regMask{ax, gpsp}, outputs: []regMask{dx},
+ clobbers: ax | flags}
+ gp11mod = regInfo{inputs: []regMask{ax, gpsp &^ dx}, outputs: []regMask{dx},
+ clobbers: ax | flags}
+
+ gp2flags = regInfo{inputs: []regMask{gpsp, gpsp}, outputs: flagsonly}
+ gp1flags = regInfo{inputs: []regMask{gpsp}, outputs: flagsonly}
+ flagsgp = regInfo{inputs: flagsonly, outputs: gponly}
+ readflags = regInfo{inputs: flagsonly, outputs: gponly}
+ flagsgpax = regInfo{inputs: flagsonly, clobbers: ax | flags, outputs: []regMask{gp &^ ax}}
+
+ gpload = regInfo{inputs: []regMask{gpspsb, 0}, outputs: gponly}
+ gploadidx = regInfo{inputs: []regMask{gpspsb, gpsp, 0}, outputs: gponly}
+
+ gpstore = regInfo{inputs: []regMask{gpspsb, gpsp, 0}}
+ gpstoreconst = regInfo{inputs: []regMask{gpspsb, 0}}
+ gpstoreidx = regInfo{inputs: []regMask{gpspsb, gpsp, gpsp, 0}}
+ gpstoreconstidx = regInfo{inputs: []regMask{gpspsb, gpsp, 0}}
+
+ fp01 = regInfo{inputs: []regMask{}, outputs: fponly}
+ fp21 = regInfo{inputs: []regMask{fp, fp}, outputs: fponly}
+ fp21x15 = regInfo{inputs: []regMask{fp &^ x15, fp &^ x15},
+ clobbers: x15, outputs: []regMask{fp &^ x15}}
+ fpgp = regInfo{inputs: fponly, outputs: gponly}
+ gpfp = regInfo{inputs: gponly, outputs: fponly}
+ fp11 = regInfo{inputs: fponly, outputs: fponly}
+ fp2flags = regInfo{inputs: []regMask{fp, fp}, outputs: flagsonly}
+ // fp1flags = regInfo{inputs: fponly, outputs: flagsonly}
+
+ fpload = regInfo{inputs: []regMask{gpspsb, 0}, outputs: fponly}
+ fploadidx = regInfo{inputs: []regMask{gpspsb, gpsp, 0}, outputs: fponly}
+
+ fpstore = regInfo{inputs: []regMask{gpspsb, fp, 0}}
+ fpstoreidx = regInfo{inputs: []regMask{gpspsb, gpsp, fp, 0}}
+ )
+ // TODO: most ops clobber flags
+
+	// Suffixes encode the bit width of various instructions.
+	// Q = 64-bit, L = 32-bit, W = 16-bit, B = 8-bit
+
+ // TODO: 2-address instructions. Mark ops as needing matching input/output regs.
+ var AMD64ops = []opData{
+ // fp ops
+ {name: "ADDSS", argLength: 2, reg: fp21, asm: "ADDSS"}, // fp32 add
+ {name: "ADDSD", argLength: 2, reg: fp21, asm: "ADDSD"}, // fp64 add
+ {name: "SUBSS", argLength: 2, reg: fp21x15, asm: "SUBSS"}, // fp32 sub
+ {name: "SUBSD", argLength: 2, reg: fp21x15, asm: "SUBSD"}, // fp64 sub
+ {name: "MULSS", argLength: 2, reg: fp21, asm: "MULSS"}, // fp32 mul
+ {name: "MULSD", argLength: 2, reg: fp21, asm: "MULSD"}, // fp64 mul
+ {name: "DIVSS", argLength: 2, reg: fp21x15, asm: "DIVSS"}, // fp32 div
+ {name: "DIVSD", argLength: 2, reg: fp21x15, asm: "DIVSD"}, // fp64 div
+
+ {name: "MOVSSload", argLength: 2, reg: fpload, asm: "MOVSS", aux: "SymOff"}, // fp32 load
+ {name: "MOVSDload", argLength: 2, reg: fpload, asm: "MOVSD", aux: "SymOff"}, // fp64 load
+ {name: "MOVSSconst", reg: fp01, asm: "MOVSS", aux: "Float", rematerializeable: true}, // fp32 constant
+ {name: "MOVSDconst", reg: fp01, asm: "MOVSD", aux: "Float", rematerializeable: true}, // fp64 constant
+		{name: "MOVSSloadidx4", argLength: 3, reg: fploadidx, asm: "MOVSS", aux: "SymOff"},            // fp32 indexed by 4*i load
+		{name: "MOVSDloadidx8", argLength: 3, reg: fploadidx, asm: "MOVSD", aux: "SymOff"},            // fp64 indexed by 8*i load
+
+ {name: "MOVSSstore", argLength: 3, reg: fpstore, asm: "MOVSS", aux: "SymOff"}, // fp32 store
+ {name: "MOVSDstore", argLength: 3, reg: fpstore, asm: "MOVSD", aux: "SymOff"}, // fp64 store
+		{name: "MOVSSstoreidx4", argLength: 4, reg: fpstoreidx, asm: "MOVSS", aux: "SymOff"}, // fp32 indexed by 4*i store
+		{name: "MOVSDstoreidx8", argLength: 4, reg: fpstoreidx, asm: "MOVSD", aux: "SymOff"}, // fp64 indexed by 8*i store
+
+ // binary ops
+ {name: "ADDQ", argLength: 2, reg: gp21, asm: "ADDQ"}, // arg0 + arg1
+ {name: "ADDL", argLength: 2, reg: gp21, asm: "ADDL"}, // arg0 + arg1
+ {name: "ADDW", argLength: 2, reg: gp21, asm: "ADDL"}, // arg0 + arg1
+ {name: "ADDB", argLength: 2, reg: gp21, asm: "ADDL"}, // arg0 + arg1
+ {name: "ADDQconst", argLength: 1, reg: gp11, asm: "ADDQ", aux: "Int64", typ: "UInt64"}, // arg0 + auxint
+ {name: "ADDLconst", argLength: 1, reg: gp11, asm: "ADDL", aux: "Int32"}, // arg0 + auxint
+ {name: "ADDWconst", argLength: 1, reg: gp11, asm: "ADDL", aux: "Int16"}, // arg0 + auxint
+ {name: "ADDBconst", argLength: 1, reg: gp11, asm: "ADDL", aux: "Int8"}, // arg0 + auxint
+
+ {name: "SUBQ", argLength: 2, reg: gp21, asm: "SUBQ"}, // arg0 - arg1
+ {name: "SUBL", argLength: 2, reg: gp21, asm: "SUBL"}, // arg0 - arg1
+ {name: "SUBW", argLength: 2, reg: gp21, asm: "SUBL"}, // arg0 - arg1
+ {name: "SUBB", argLength: 2, reg: gp21, asm: "SUBL"}, // arg0 - arg1
+ {name: "SUBQconst", argLength: 1, reg: gp11, asm: "SUBQ", aux: "Int64"}, // arg0 - auxint
+ {name: "SUBLconst", argLength: 1, reg: gp11, asm: "SUBL", aux: "Int32"}, // arg0 - auxint
+ {name: "SUBWconst", argLength: 1, reg: gp11, asm: "SUBL", aux: "Int16"}, // arg0 - auxint
+ {name: "SUBBconst", argLength: 1, reg: gp11, asm: "SUBL", aux: "Int8"}, // arg0 - auxint
+
+ {name: "MULQ", argLength: 2, reg: gp21, asm: "IMULQ"}, // arg0 * arg1
+ {name: "MULL", argLength: 2, reg: gp21, asm: "IMULL"}, // arg0 * arg1
+ {name: "MULW", argLength: 2, reg: gp21, asm: "IMULW"}, // arg0 * arg1
+ {name: "MULB", argLength: 2, reg: gp21, asm: "IMULW"}, // arg0 * arg1
+ {name: "MULQconst", argLength: 1, reg: gp11, asm: "IMULQ", aux: "Int64"}, // arg0 * auxint
+ {name: "MULLconst", argLength: 1, reg: gp11, asm: "IMULL", aux: "Int32"}, // arg0 * auxint
+ {name: "MULWconst", argLength: 1, reg: gp11, asm: "IMULW", aux: "Int16"}, // arg0 * auxint
+ {name: "MULBconst", argLength: 1, reg: gp11, asm: "IMULW", aux: "Int8"}, // arg0 * auxint
+
+ {name: "HMULQ", argLength: 2, reg: gp11hmul, asm: "IMULQ"}, // (arg0 * arg1) >> width
+ {name: "HMULL", argLength: 2, reg: gp11hmul, asm: "IMULL"}, // (arg0 * arg1) >> width
+ {name: "HMULW", argLength: 2, reg: gp11hmul, asm: "IMULW"}, // (arg0 * arg1) >> width
+ {name: "HMULB", argLength: 2, reg: gp11hmul, asm: "IMULB"}, // (arg0 * arg1) >> width
+ {name: "HMULQU", argLength: 2, reg: gp11hmul, asm: "MULQ"}, // (arg0 * arg1) >> width
+ {name: "HMULLU", argLength: 2, reg: gp11hmul, asm: "MULL"}, // (arg0 * arg1) >> width
+ {name: "HMULWU", argLength: 2, reg: gp11hmul, asm: "MULW"}, // (arg0 * arg1) >> width
+ {name: "HMULBU", argLength: 2, reg: gp11hmul, asm: "MULB"}, // (arg0 * arg1) >> width
+
+ {name: "AVGQU", argLength: 2, reg: gp21}, // (arg0 + arg1) / 2 as unsigned, all 64 result bits
+
+ {name: "DIVQ", argLength: 2, reg: gp11div, asm: "IDIVQ"}, // arg0 / arg1
+ {name: "DIVL", argLength: 2, reg: gp11div, asm: "IDIVL"}, // arg0 / arg1
+ {name: "DIVW", argLength: 2, reg: gp11div, asm: "IDIVW"}, // arg0 / arg1
+ {name: "DIVQU", argLength: 2, reg: gp11div, asm: "DIVQ"}, // arg0 / arg1
+ {name: "DIVLU", argLength: 2, reg: gp11div, asm: "DIVL"}, // arg0 / arg1
+ {name: "DIVWU", argLength: 2, reg: gp11div, asm: "DIVW"}, // arg0 / arg1
+
+ {name: "MODQ", argLength: 2, reg: gp11mod, asm: "IDIVQ"}, // arg0 % arg1
+ {name: "MODL", argLength: 2, reg: gp11mod, asm: "IDIVL"}, // arg0 % arg1
+ {name: "MODW", argLength: 2, reg: gp11mod, asm: "IDIVW"}, // arg0 % arg1
+ {name: "MODQU", argLength: 2, reg: gp11mod, asm: "DIVQ"}, // arg0 % arg1
+ {name: "MODLU", argLength: 2, reg: gp11mod, asm: "DIVL"}, // arg0 % arg1
+ {name: "MODWU", argLength: 2, reg: gp11mod, asm: "DIVW"}, // arg0 % arg1
+
+ {name: "ANDQ", argLength: 2, reg: gp21, asm: "ANDQ"}, // arg0 & arg1
+ {name: "ANDL", argLength: 2, reg: gp21, asm: "ANDL"}, // arg0 & arg1
+ {name: "ANDW", argLength: 2, reg: gp21, asm: "ANDL"}, // arg0 & arg1
+ {name: "ANDB", argLength: 2, reg: gp21, asm: "ANDL"}, // arg0 & arg1
+ {name: "ANDQconst", argLength: 1, reg: gp11, asm: "ANDQ", aux: "Int64"}, // arg0 & auxint
+ {name: "ANDLconst", argLength: 1, reg: gp11, asm: "ANDL", aux: "Int32"}, // arg0 & auxint
+ {name: "ANDWconst", argLength: 1, reg: gp11, asm: "ANDL", aux: "Int16"}, // arg0 & auxint
+ {name: "ANDBconst", argLength: 1, reg: gp11, asm: "ANDL", aux: "Int8"}, // arg0 & auxint
+
+ {name: "ORQ", argLength: 2, reg: gp21, asm: "ORQ"}, // arg0 | arg1
+ {name: "ORL", argLength: 2, reg: gp21, asm: "ORL"}, // arg0 | arg1
+ {name: "ORW", argLength: 2, reg: gp21, asm: "ORL"}, // arg0 | arg1
+ {name: "ORB", argLength: 2, reg: gp21, asm: "ORL"}, // arg0 | arg1
+ {name: "ORQconst", argLength: 1, reg: gp11, asm: "ORQ", aux: "Int64"}, // arg0 | auxint
+ {name: "ORLconst", argLength: 1, reg: gp11, asm: "ORL", aux: "Int32"}, // arg0 | auxint
+ {name: "ORWconst", argLength: 1, reg: gp11, asm: "ORL", aux: "Int16"}, // arg0 | auxint
+ {name: "ORBconst", argLength: 1, reg: gp11, asm: "ORL", aux: "Int8"}, // arg0 | auxint
+
+ {name: "XORQ", argLength: 2, reg: gp21, asm: "XORQ"}, // arg0 ^ arg1
+ {name: "XORL", argLength: 2, reg: gp21, asm: "XORL"}, // arg0 ^ arg1
+ {name: "XORW", argLength: 2, reg: gp21, asm: "XORL"}, // arg0 ^ arg1
+ {name: "XORB", argLength: 2, reg: gp21, asm: "XORL"}, // arg0 ^ arg1
+ {name: "XORQconst", argLength: 1, reg: gp11, asm: "XORQ", aux: "Int64"}, // arg0 ^ auxint
+ {name: "XORLconst", argLength: 1, reg: gp11, asm: "XORL", aux: "Int32"}, // arg0 ^ auxint
+ {name: "XORWconst", argLength: 1, reg: gp11, asm: "XORL", aux: "Int16"}, // arg0 ^ auxint
+ {name: "XORBconst", argLength: 1, reg: gp11, asm: "XORL", aux: "Int8"}, // arg0 ^ auxint
+
+ {name: "CMPQ", argLength: 2, reg: gp2flags, asm: "CMPQ", typ: "Flags"}, // arg0 compare to arg1
+ {name: "CMPL", argLength: 2, reg: gp2flags, asm: "CMPL", typ: "Flags"}, // arg0 compare to arg1
+ {name: "CMPW", argLength: 2, reg: gp2flags, asm: "CMPW", typ: "Flags"}, // arg0 compare to arg1
+ {name: "CMPB", argLength: 2, reg: gp2flags, asm: "CMPB", typ: "Flags"}, // arg0 compare to arg1
+ {name: "CMPQconst", argLength: 1, reg: gp1flags, asm: "CMPQ", typ: "Flags", aux: "Int64"}, // arg0 compare to auxint
+ {name: "CMPLconst", argLength: 1, reg: gp1flags, asm: "CMPL", typ: "Flags", aux: "Int32"}, // arg0 compare to auxint
+ {name: "CMPWconst", argLength: 1, reg: gp1flags, asm: "CMPW", typ: "Flags", aux: "Int16"}, // arg0 compare to auxint
+ {name: "CMPBconst", argLength: 1, reg: gp1flags, asm: "CMPB", typ: "Flags", aux: "Int8"}, // arg0 compare to auxint
+
+ {name: "UCOMISS", argLength: 2, reg: fp2flags, asm: "UCOMISS", typ: "Flags"}, // arg0 compare to arg1, f32
+ {name: "UCOMISD", argLength: 2, reg: fp2flags, asm: "UCOMISD", typ: "Flags"}, // arg0 compare to arg1, f64
+
+ {name: "TESTQ", argLength: 2, reg: gp2flags, asm: "TESTQ", typ: "Flags"}, // (arg0 & arg1) compare to 0
+ {name: "TESTL", argLength: 2, reg: gp2flags, asm: "TESTL", typ: "Flags"}, // (arg0 & arg1) compare to 0
+ {name: "TESTW", argLength: 2, reg: gp2flags, asm: "TESTW", typ: "Flags"}, // (arg0 & arg1) compare to 0
+ {name: "TESTB", argLength: 2, reg: gp2flags, asm: "TESTB", typ: "Flags"}, // (arg0 & arg1) compare to 0
+ {name: "TESTQconst", argLength: 1, reg: gp1flags, asm: "TESTQ", typ: "Flags", aux: "Int64"}, // (arg0 & auxint) compare to 0
+ {name: "TESTLconst", argLength: 1, reg: gp1flags, asm: "TESTL", typ: "Flags", aux: "Int32"}, // (arg0 & auxint) compare to 0
+ {name: "TESTWconst", argLength: 1, reg: gp1flags, asm: "TESTW", typ: "Flags", aux: "Int16"}, // (arg0 & auxint) compare to 0
+ {name: "TESTBconst", argLength: 1, reg: gp1flags, asm: "TESTB", typ: "Flags", aux: "Int8"}, // (arg0 & auxint) compare to 0
+
+ {name: "SHLQ", argLength: 2, reg: gp21shift, asm: "SHLQ"}, // arg0 << arg1, shift amount is mod 64
+ {name: "SHLL", argLength: 2, reg: gp21shift, asm: "SHLL"}, // arg0 << arg1, shift amount is mod 32
+ {name: "SHLW", argLength: 2, reg: gp21shift, asm: "SHLL"}, // arg0 << arg1, shift amount is mod 32
+ {name: "SHLB", argLength: 2, reg: gp21shift, asm: "SHLL"}, // arg0 << arg1, shift amount is mod 32
+ {name: "SHLQconst", argLength: 1, reg: gp11, asm: "SHLQ", aux: "Int64"}, // arg0 << auxint, shift amount 0-63
+ {name: "SHLLconst", argLength: 1, reg: gp11, asm: "SHLL", aux: "Int32"}, // arg0 << auxint, shift amount 0-31
+ {name: "SHLWconst", argLength: 1, reg: gp11, asm: "SHLL", aux: "Int16"}, // arg0 << auxint, shift amount 0-31
+ {name: "SHLBconst", argLength: 1, reg: gp11, asm: "SHLL", aux: "Int8"}, // arg0 << auxint, shift amount 0-31
+ // Note: x86 is weird, the 16- and 8-bit shifts still use all 5 bits of shift amount!
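The note above about shift counts can be modeled directly. This is an illustrative sketch, not part of the compiler: Go's own shift operators do not mask the count, so the masking is written out by hand to mirror the x86 rule that 64-bit shifts take the count mod 64 while the 32-, 16-, and 8-bit forms all take it mod 32.

```go
package main

import "fmt"

// shlq models SHLQ: the hardware uses only the low 6 bits of the count.
func shlq(x, s uint64) uint64 { return x << (s & 63) }

// shlb models SHLB: even for an 8-bit operand, the hardware uses the
// low 5 bits of the count, so a byte can be shifted entirely away.
func shlb(x uint8, s uint64) uint8 { return x << (s & 31) }

func main() {
	fmt.Println(shlq(1, 64)) // count 64&63 == 0, value unchanged: 1
	fmt.Println(shlb(1, 8))  // count 8&31 == 8, byte shifted all the way out: 0
}
```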
+
+ {name: "SHRQ", argLength: 2, reg: gp21shift, asm: "SHRQ"}, // unsigned arg0 >> arg1, shift amount is mod 64
+ {name: "SHRL", argLength: 2, reg: gp21shift, asm: "SHRL"}, // unsigned arg0 >> arg1, shift amount is mod 32
+ {name: "SHRW", argLength: 2, reg: gp21shift, asm: "SHRW"}, // unsigned arg0 >> arg1, shift amount is mod 32
+ {name: "SHRB", argLength: 2, reg: gp21shift, asm: "SHRB"}, // unsigned arg0 >> arg1, shift amount is mod 32
+ {name: "SHRQconst", argLength: 1, reg: gp11, asm: "SHRQ", aux: "Int64"}, // unsigned arg0 >> auxint, shift amount 0-63
+ {name: "SHRLconst", argLength: 1, reg: gp11, asm: "SHRL", aux: "Int32"}, // unsigned arg0 >> auxint, shift amount 0-31
+ {name: "SHRWconst", argLength: 1, reg: gp11, asm: "SHRW", aux: "Int16"}, // unsigned arg0 >> auxint, shift amount 0-31
+ {name: "SHRBconst", argLength: 1, reg: gp11, asm: "SHRB", aux: "Int8"}, // unsigned arg0 >> auxint, shift amount 0-31
+
+ {name: "SARQ", argLength: 2, reg: gp21shift, asm: "SARQ"}, // signed arg0 >> arg1, shift amount is mod 64
+ {name: "SARL", argLength: 2, reg: gp21shift, asm: "SARL"}, // signed arg0 >> arg1, shift amount is mod 32
+ {name: "SARW", argLength: 2, reg: gp21shift, asm: "SARW"}, // signed arg0 >> arg1, shift amount is mod 32
+ {name: "SARB", argLength: 2, reg: gp21shift, asm: "SARB"}, // signed arg0 >> arg1, shift amount is mod 32
+ {name: "SARQconst", argLength: 1, reg: gp11, asm: "SARQ", aux: "Int64"}, // signed arg0 >> auxint, shift amount 0-63
+ {name: "SARLconst", argLength: 1, reg: gp11, asm: "SARL", aux: "Int32"}, // signed arg0 >> auxint, shift amount 0-31
+ {name: "SARWconst", argLength: 1, reg: gp11, asm: "SARW", aux: "Int16"}, // signed arg0 >> auxint, shift amount 0-31
+ {name: "SARBconst", argLength: 1, reg: gp11, asm: "SARB", aux: "Int8"}, // signed arg0 >> auxint, shift amount 0-31
+
+ {name: "ROLQconst", argLength: 1, reg: gp11, asm: "ROLQ", aux: "Int64"}, // arg0 rotate left auxint, rotate amount 0-63
+ {name: "ROLLconst", argLength: 1, reg: gp11, asm: "ROLL", aux: "Int32"}, // arg0 rotate left auxint, rotate amount 0-31
+ {name: "ROLWconst", argLength: 1, reg: gp11, asm: "ROLW", aux: "Int16"}, // arg0 rotate left auxint, rotate amount 0-15
+ {name: "ROLBconst", argLength: 1, reg: gp11, asm: "ROLB", aux: "Int8"}, // arg0 rotate left auxint, rotate amount 0-7
+
+ // unary ops
+ {name: "NEGQ", argLength: 1, reg: gp11, asm: "NEGQ"}, // -arg0
+ {name: "NEGL", argLength: 1, reg: gp11, asm: "NEGL"}, // -arg0
+ {name: "NEGW", argLength: 1, reg: gp11, asm: "NEGL"}, // -arg0
+ {name: "NEGB", argLength: 1, reg: gp11, asm: "NEGL"}, // -arg0
+
+ {name: "NOTQ", argLength: 1, reg: gp11, asm: "NOTQ"}, // ^arg0
+ {name: "NOTL", argLength: 1, reg: gp11, asm: "NOTL"}, // ^arg0
+ {name: "NOTW", argLength: 1, reg: gp11, asm: "NOTL"}, // ^arg0
+ {name: "NOTB", argLength: 1, reg: gp11, asm: "NOTL"}, // ^arg0
+
+ {name: "SQRTSD", argLength: 1, reg: fp11, asm: "SQRTSD"}, // sqrt(arg0)
+
+ {name: "SBBQcarrymask", argLength: 1, reg: flagsgp, asm: "SBBQ"}, // (int64)(-1) if carry is set, 0 if carry is clear.
+ {name: "SBBLcarrymask", argLength: 1, reg: flagsgp, asm: "SBBL"}, // (int32)(-1) if carry is set, 0 if carry is clear.
+ // Note: SBBW and SBBB are subsumed by SBBL
+
+ {name: "SETEQ", argLength: 1, reg: readflags, asm: "SETEQ"}, // extract == condition from arg0
+ {name: "SETNE", argLength: 1, reg: readflags, asm: "SETNE"}, // extract != condition from arg0
+ {name: "SETL", argLength: 1, reg: readflags, asm: "SETLT"}, // extract signed < condition from arg0
+ {name: "SETLE", argLength: 1, reg: readflags, asm: "SETLE"}, // extract signed <= condition from arg0
+ {name: "SETG", argLength: 1, reg: readflags, asm: "SETGT"}, // extract signed > condition from arg0
+ {name: "SETGE", argLength: 1, reg: readflags, asm: "SETGE"}, // extract signed >= condition from arg0
+ {name: "SETB", argLength: 1, reg: readflags, asm: "SETCS"}, // extract unsigned < condition from arg0
+ {name: "SETBE", argLength: 1, reg: readflags, asm: "SETLS"}, // extract unsigned <= condition from arg0
+ {name: "SETA", argLength: 1, reg: readflags, asm: "SETHI"}, // extract unsigned > condition from arg0
+ {name: "SETAE", argLength: 1, reg: readflags, asm: "SETCC"}, // extract unsigned >= condition from arg0
+ // Need different opcodes for floating point conditions because
+ // any comparison involving a NaN is always FALSE and thus
+ // the patterns for inverting conditions cannot be used.
+ {name: "SETEQF", argLength: 1, reg: flagsgpax, asm: "SETEQ"}, // extract == condition from arg0
+ {name: "SETNEF", argLength: 1, reg: flagsgpax, asm: "SETNE"}, // extract != condition from arg0
+ {name: "SETORD", argLength: 1, reg: flagsgp, asm: "SETPC"}, // extract "ordered" (No Nan present) condition from arg0
+ {name: "SETNAN", argLength: 1, reg: flagsgp, asm: "SETPS"}, // extract "unordered" (Nan present) condition from arg0
+
+ {name: "SETGF", argLength: 1, reg: flagsgp, asm: "SETHI"}, // extract floating > condition from arg0
+ {name: "SETGEF", argLength: 1, reg: flagsgp, asm: "SETCC"}, // extract floating >= condition from arg0
+
+ {name: "MOVBQSX", argLength: 1, reg: gp11nf, asm: "MOVBQSX"}, // sign extend arg0 from int8 to int64
+ {name: "MOVBQZX", argLength: 1, reg: gp11nf, asm: "MOVBQZX"}, // zero extend arg0 from int8 to int64
+ {name: "MOVWQSX", argLength: 1, reg: gp11nf, asm: "MOVWQSX"}, // sign extend arg0 from int16 to int64
+ {name: "MOVWQZX", argLength: 1, reg: gp11nf, asm: "MOVWQZX"}, // zero extend arg0 from int16 to int64
+ {name: "MOVLQSX", argLength: 1, reg: gp11nf, asm: "MOVLQSX"}, // sign extend arg0 from int32 to int64
+ {name: "MOVLQZX", argLength: 1, reg: gp11nf, asm: "MOVLQZX"}, // zero extend arg0 from int32 to int64
+
+ {name: "MOVBconst", reg: gp01, asm: "MOVB", typ: "UInt8", aux: "Int8", rematerializeable: true}, // 8 low bits of auxint
+ {name: "MOVWconst", reg: gp01, asm: "MOVW", typ: "UInt16", aux: "Int16", rematerializeable: true}, // 16 low bits of auxint
+ {name: "MOVLconst", reg: gp01, asm: "MOVL", typ: "UInt32", aux: "Int32", rematerializeable: true}, // 32 low bits of auxint
+ {name: "MOVQconst", reg: gp01, asm: "MOVQ", typ: "UInt64", aux: "Int64", rematerializeable: true}, // auxint
+
+ {name: "CVTTSD2SL", argLength: 1, reg: fpgp, asm: "CVTTSD2SL"}, // convert float64 to int32
+ {name: "CVTTSD2SQ", argLength: 1, reg: fpgp, asm: "CVTTSD2SQ"}, // convert float64 to int64
+ {name: "CVTTSS2SL", argLength: 1, reg: fpgp, asm: "CVTTSS2SL"}, // convert float32 to int32
+ {name: "CVTTSS2SQ", argLength: 1, reg: fpgp, asm: "CVTTSS2SQ"}, // convert float32 to int64
+ {name: "CVTSL2SS", argLength: 1, reg: gpfp, asm: "CVTSL2SS"}, // convert int32 to float32
+ {name: "CVTSL2SD", argLength: 1, reg: gpfp, asm: "CVTSL2SD"}, // convert int32 to float64
+ {name: "CVTSQ2SS", argLength: 1, reg: gpfp, asm: "CVTSQ2SS"}, // convert int64 to float32
+ {name: "CVTSQ2SD", argLength: 1, reg: gpfp, asm: "CVTSQ2SD"}, // convert int64 to float64
+ {name: "CVTSD2SS", argLength: 1, reg: fp11, asm: "CVTSD2SS"}, // convert float64 to float32
+ {name: "CVTSS2SD", argLength: 1, reg: fp11, asm: "CVTSS2SD"}, // convert float32 to float64
+
+ {name: "PXOR", argLength: 2, reg: fp21, asm: "PXOR"}, // exclusive or, applied to X regs for float negation.
+
+ {name: "LEAQ", argLength: 1, reg: gp11sb, aux: "SymOff", rematerializeable: true}, // arg0 + auxint + offset encoded in aux
+ {name: "LEAQ1", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + arg1 + auxint + aux
+ {name: "LEAQ2", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 2*arg1 + auxint + aux
+ {name: "LEAQ4", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 4*arg1 + auxint + aux
+ {name: "LEAQ8", argLength: 2, reg: gp21sb, aux: "SymOff"}, // arg0 + 8*arg1 + auxint + aux
+ // Note: LEAQ{1,2,4,8} must not have OpSB as either argument.
+
+ // auxint+aux == add auxint and the offset of the symbol in aux (if any) to the effective address
+ {name: "MOVBload", argLength: 2, reg: gpload, asm: "MOVBLZX", aux: "SymOff", typ: "UInt8"}, // load byte from arg0+auxint+aux. arg1=mem
+ {name: "MOVBQSXload", argLength: 2, reg: gpload, asm: "MOVBQSX", aux: "SymOff"}, // ditto, extend to int64
+ {name: "MOVBQZXload", argLength: 2, reg: gpload, asm: "MOVBQZX", aux: "SymOff"}, // ditto, extend to uint64
+ {name: "MOVWload", argLength: 2, reg: gpload, asm: "MOVWLZX", aux: "SymOff", typ: "UInt16"}, // load 2 bytes from arg0+auxint+aux. arg1=mem
+ {name: "MOVWQSXload", argLength: 2, reg: gpload, asm: "MOVWQSX", aux: "SymOff"}, // ditto, extend to int64
+ {name: "MOVWQZXload", argLength: 2, reg: gpload, asm: "MOVWQZX", aux: "SymOff"}, // ditto, extend to uint64
+ {name: "MOVLload", argLength: 2, reg: gpload, asm: "MOVL", aux: "SymOff", typ: "UInt32"}, // load 4 bytes from arg0+auxint+aux. arg1=mem
+ {name: "MOVLQSXload", argLength: 2, reg: gpload, asm: "MOVLQSX", aux: "SymOff"}, // ditto, extend to int64
+ {name: "MOVLQZXload", argLength: 2, reg: gpload, asm: "MOVLQZX", aux: "SymOff"}, // ditto, extend to uint64
+ {name: "MOVQload", argLength: 2, reg: gpload, asm: "MOVQ", aux: "SymOff", typ: "UInt64"}, // load 8 bytes from arg0+auxint+aux. arg1=mem
+ {name: "MOVBstore", argLength: 3, reg: gpstore, asm: "MOVB", aux: "SymOff", typ: "Mem"}, // store byte in arg1 to arg0+auxint+aux. arg2=mem
+ {name: "MOVWstore", argLength: 3, reg: gpstore, asm: "MOVW", aux: "SymOff", typ: "Mem"}, // store 2 bytes in arg1 to arg0+auxint+aux. arg2=mem
+ {name: "MOVLstore", argLength: 3, reg: gpstore, asm: "MOVL", aux: "SymOff", typ: "Mem"}, // store 4 bytes in arg1 to arg0+auxint+aux. arg2=mem
+ {name: "MOVQstore", argLength: 3, reg: gpstore, asm: "MOVQ", aux: "SymOff", typ: "Mem"}, // store 8 bytes in arg1 to arg0+auxint+aux. arg2=mem
+ {name: "MOVOload", argLength: 2, reg: fpload, asm: "MOVUPS", aux: "SymOff", typ: "Int128"}, // load 16 bytes from arg0+auxint+aux. arg1=mem
+ {name: "MOVOstore", argLength: 3, reg: fpstore, asm: "MOVUPS", aux: "SymOff", typ: "Mem"}, // store 16 bytes in arg1 to arg0+auxint+aux. arg2=mem
+
+ // indexed loads/stores
+ {name: "MOVBloadidx1", argLength: 3, reg: gploadidx, asm: "MOVBLZX", aux: "SymOff"}, // load a byte from arg0+arg1+auxint+aux. arg2=mem
+ {name: "MOVWloadidx2", argLength: 3, reg: gploadidx, asm: "MOVWLZX", aux: "SymOff"}, // load 2 bytes from arg0+2*arg1+auxint+aux. arg2=mem
+ {name: "MOVLloadidx4", argLength: 3, reg: gploadidx, asm: "MOVL", aux: "SymOff"}, // load 4 bytes from arg0+4*arg1+auxint+aux. arg2=mem
+ {name: "MOVQloadidx8", argLength: 3, reg: gploadidx, asm: "MOVQ", aux: "SymOff"}, // load 8 bytes from arg0+8*arg1+auxint+aux. arg2=mem
+ // TODO: sign-extending indexed loads
+ {name: "MOVBstoreidx1", argLength: 4, reg: gpstoreidx, asm: "MOVB", aux: "SymOff"}, // store byte in arg2 to arg0+arg1+auxint+aux. arg3=mem
+ {name: "MOVWstoreidx2", argLength: 4, reg: gpstoreidx, asm: "MOVW", aux: "SymOff"}, // store 2 bytes in arg2 to arg0+2*arg1+auxint+aux. arg3=mem
+ {name: "MOVLstoreidx4", argLength: 4, reg: gpstoreidx, asm: "MOVL", aux: "SymOff"}, // store 4 bytes in arg2 to arg0+4*arg1+auxint+aux. arg3=mem
+ {name: "MOVQstoreidx8", argLength: 4, reg: gpstoreidx, asm: "MOVQ", aux: "SymOff"}, // store 8 bytes in arg2 to arg0+8*arg1+auxint+aux. arg3=mem
+ // TODO: add size-mismatched indexed loads, like MOVBstoreidx4.
+
+ // For storeconst ops, the AuxInt field encodes both
+ // the value to store and an address offset of the store.
+ // Cast AuxInt to a ValAndOff to extract Val and Off fields.
+ {name: "MOVBstoreconst", argLength: 2, reg: gpstoreconst, asm: "MOVB", aux: "SymValAndOff", typ: "Mem"}, // store low byte of ValAndOff(AuxInt).Val() to arg0+ValAndOff(AuxInt).Off()+aux. arg1=mem
+ {name: "MOVWstoreconst", argLength: 2, reg: gpstoreconst, asm: "MOVW", aux: "SymValAndOff", typ: "Mem"}, // store low 2 bytes of ...
+ {name: "MOVLstoreconst", argLength: 2, reg: gpstoreconst, asm: "MOVL", aux: "SymValAndOff", typ: "Mem"}, // store low 4 bytes of ...
+ {name: "MOVQstoreconst", argLength: 2, reg: gpstoreconst, asm: "MOVQ", aux: "SymValAndOff", typ: "Mem"}, // store 8 bytes of ...
+
+ {name: "MOVBstoreconstidx1", argLength: 3, reg: gpstoreconstidx, asm: "MOVB", aux: "SymValAndOff", typ: "Mem"}, // store low byte of ValAndOff(AuxInt).Val() to arg0+1*arg1+ValAndOff(AuxInt).Off()+aux. arg2=mem
+ {name: "MOVWstoreconstidx2", argLength: 3, reg: gpstoreconstidx, asm: "MOVW", aux: "SymValAndOff", typ: "Mem"}, // store low 2 bytes of ... 2*arg1 ...
+ {name: "MOVLstoreconstidx4", argLength: 3, reg: gpstoreconstidx, asm: "MOVL", aux: "SymValAndOff", typ: "Mem"}, // store low 4 bytes of ... 4*arg1 ...
+ {name: "MOVQstoreconstidx8", argLength: 3, reg: gpstoreconstidx, asm: "MOVQ", aux: "SymValAndOff", typ: "Mem"}, // store 8 bytes of ... 8*arg1 ...
+
+ // arg0 = (duff-adjusted) pointer to start of memory to zero
+ // arg1 = value to store (will always be zero)
+ // arg2 = mem
+ // auxint = offset into duffzero code to start executing
+ // returns mem
+ {
+ name: "DUFFZERO",
+ aux: "Int64",
+ argLength: 3,
+ reg: regInfo{
+ inputs: []regMask{buildReg("DI"), buildReg("X0")},
+ clobbers: buildReg("DI FLAGS"),
+ },
+ },
+ {name: "MOVOconst", reg: regInfo{nil, 0, []regMask{fp}}, typ: "Int128", rematerializeable: true},
+
+ // arg0 = address of memory to zero
+ // arg1 = # of 8-byte words to zero
+ // arg2 = value to store (will always be zero)
+ // arg3 = mem
+ // returns mem
+ {
+ name: "REPSTOSQ",
+ argLength: 4,
+ reg: regInfo{
+ inputs: []regMask{buildReg("DI"), buildReg("CX"), buildReg("AX")},
+ clobbers: buildReg("DI CX"),
+ },
+ },
+
+ {name: "CALLstatic", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "SymOff"}, // call static function aux.(*gc.Sym). arg0=mem, auxint=argsize, returns mem
+ {name: "CALLclosure", argLength: 3, reg: regInfo{[]regMask{gpsp, buildReg("DX"), 0}, callerSave, nil}, aux: "Int64"}, // call function via closure. arg0=codeptr, arg1=closure, arg2=mem, auxint=argsize, returns mem
+ {name: "CALLdefer", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64"}, // call deferproc. arg0=mem, auxint=argsize, returns mem
+ {name: "CALLgo", argLength: 1, reg: regInfo{clobbers: callerSave}, aux: "Int64"}, // call newproc. arg0=mem, auxint=argsize, returns mem
+ {name: "CALLinter", argLength: 2, reg: regInfo{inputs: []regMask{gp}, clobbers: callerSave}, aux: "Int64"}, // call fn by pointer. arg0=codeptr, arg1=mem, auxint=argsize, returns mem
+
+ // arg0 = destination pointer
+ // arg1 = source pointer
+ // arg2 = mem
+ // auxint = offset from duffcopy symbol to call
+ // returns memory
+ {
+ name: "DUFFCOPY",
+ aux: "Int64",
+ argLength: 3,
+ reg: regInfo{
+ inputs: []regMask{buildReg("DI"), buildReg("SI")},
+ clobbers: buildReg("DI SI X0 FLAGS"), // uses X0 as a temporary
+ },
+ },
+
+ // arg0 = destination pointer
+ // arg1 = source pointer
+ // arg2 = # of 8-byte words to copy
+ // arg3 = mem
+ // returns memory
+ {
+ name: "REPMOVSQ",
+ argLength: 4,
+ reg: regInfo{
+ inputs: []regMask{buildReg("DI"), buildReg("SI"), buildReg("CX")},
+ clobbers: buildReg("DI SI CX"),
+ },
+ },
+
+ // (InvertFlags (CMPQ a b)) == (CMPQ b a)
+ // So if we want (SETL (CMPQ a b)) but we can't do that because a is a constant,
+ // then we do (SETL (InvertFlags (CMPQ b a))) instead.
+ // Rewrites will convert this to (SETG (CMPQ b a)).
+ // InvertFlags is a pseudo-op which can't appear in assembly output.
+ {name: "InvertFlags", argLength: 1}, // reverse direction of arg0
+
+ // Pseudo-ops
+ {name: "LoweredGetG", argLength: 1, reg: gp01}, // arg0=mem
+ // Scheduler ensures LoweredGetClosurePtr occurs only in entry block,
+ // and sorts it to the very beginning of the block to prevent other
+ // use of DX (the closure pointer)
+ {name: "LoweredGetClosurePtr", reg: regInfo{outputs: []regMask{buildReg("DX")}}},
+ // arg0=ptr, arg1=mem, returns void. Faults if ptr is nil.
+ {name: "LoweredNilCheck", argLength: 2, reg: regInfo{inputs: []regMask{gpsp}, clobbers: flags}},
+
+ // MOVQconvert converts between pointers and integers.
+ // We have a special op for this so as to not confuse GC
+ // (particularly stack maps). It takes a memory arg so it
+ // gets correctly ordered with respect to GC safepoints.
+ // arg0=ptr/int arg1=mem, output=int/ptr
+ {name: "MOVQconvert", argLength: 2, reg: gp11nf, asm: "MOVQ"},
+
+ // Constant flag values. For any comparison, there are 5 possible
+ // outcomes: the three from the signed total order (<,==,>) and the
+ // three from the unsigned total order. The == cases overlap.
+ // Note: there's a sixth "unordered" outcome for floating-point
+ // comparisons, but we don't use such a beast yet.
+ // These ops are for temporary use by rewrite rules. They
+ // cannot appear in the generated assembly.
+ {name: "FlagEQ"}, // equal
+ {name: "FlagLT_ULT"}, // signed < and unsigned <
+ {name: "FlagLT_UGT"}, // signed < and unsigned >
+ {name: "FlagGT_UGT"}, // signed > and unsigned <
+ {name: "FlagGT_ULT"}, // signed > and unsigned >
+ }
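The five constant flag outcomes above arise because the signed and unsigned orders can disagree for the same pair of operands. A small illustrative classifier (names match the ops above; the function itself is not compiler code):

```go
package main

import "fmt"

// classify names the flag constant that a (CMPQ a b) of two known
// constants would fold to. Mixed-sign pairs show why four non-equal
// outcomes are needed: e.g. -1 is signed-less than 1 but, reinterpreted
// as uint64, unsigned-greater.
func classify(a, b int64) string {
	switch {
	case a == b:
		return "FlagEQ"
	case a < b && uint64(a) < uint64(b):
		return "FlagLT_ULT"
	case a < b:
		return "FlagLT_UGT"
	case uint64(a) > uint64(b):
		return "FlagGT_UGT"
	default:
		return "FlagGT_ULT"
	}
}

func main() {
	fmt.Println(classify(1, 2))  // FlagLT_ULT
	fmt.Println(classify(-1, 1)) // FlagLT_UGT
	fmt.Println(classify(2, 1))  // FlagGT_UGT
	fmt.Println(classify(1, -1)) // FlagGT_ULT
}
```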
+
+ var AMD64blocks = []blockData{
+ {name: "EQ"},
+ {name: "NE"},
+ {name: "LT"},
+ {name: "LE"},
+ {name: "GT"},
+ {name: "GE"},
+ {name: "ULT"},
+ {name: "ULE"},
+ {name: "UGT"},
+ {name: "UGE"},
+ {name: "EQF"},
+ {name: "NEF"},
+ {name: "ORD"}, // FP, ordered comparison (parity zero)
+ {name: "NAN"}, // FP, unordered comparison (parity one)
+ }
+
+ archs = append(archs, arch{"AMD64", AMD64ops, AMD64blocks, regNamesAMD64})
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+This package generates opcode tables, rewrite rules, etc. for the ssa compiler.
+Run it with:
+ go run *.go
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Values are specified using the following format:
+// (op <type> [auxint] {aux} arg0 arg1 ...)
+// The type and aux fields are optional.
+// On the matching side:
+//  - the type, aux, and auxint fields must match if they are specified.
+// On the generated side:
+//  - the type of the top-level expression is the same as the one on the left-hand side.
+//  - the type of any subexpressions must be specified explicitly.
+//  - auxint will be 0 if not specified.
+//  - aux will be nil if not specified.
+
+// Blocks are specified using the following format:
+// (kind controlvalue succ0 succ1 ...)
+// controlvalue must be "nil" or a value expression.
+// succ* fields must be variables.
+// For now, the generated successors must be a permutation of the matched successors.
+
+// constant folding
+(Trunc16to8 (Const16 [c])) -> (Const8 [int64(int8(c))])
+(Trunc32to8 (Const32 [c])) -> (Const8 [int64(int8(c))])
+(Trunc32to16 (Const32 [c])) -> (Const16 [int64(int16(c))])
+(Trunc64to8 (Const64 [c])) -> (Const8 [int64(int8(c))])
+(Trunc64to16 (Const64 [c])) -> (Const16 [int64(int16(c))])
+(Trunc64to32 (Const64 [c])) -> (Const32 [int64(int32(c))])
+
+(Neg8 (Const8 [c])) -> (Const8 [-c])
+(Neg16 (Const16 [c])) -> (Const16 [-c])
+(Neg32 (Const32 [c])) -> (Const32 [-c])
+(Neg64 (Const64 [c])) -> (Const64 [-c])
+
+(Add8 (Const8 [c]) (Const8 [d])) -> (Const8 [c+d])
+(Add16 (Const16 [c]) (Const16 [d])) -> (Const16 [c+d])
+(Add32 (Const32 [c]) (Const32 [d])) -> (Const32 [c+d])
+(Add64 (Const64 [c]) (Const64 [d])) -> (Const64 [c+d])
+
+(Sub8 (Const8 [c]) (Const8 [d])) -> (Const8 [c-d])
+(Sub16 (Const16 [c]) (Const16 [d])) -> (Const16 [c-d])
+(Sub32 (Const32 [c]) (Const32 [d])) -> (Const32 [c-d])
+(Sub64 (Const64 [c]) (Const64 [d])) -> (Const64 [c-d])
+
+(Mul8 (Const8 [c]) (Const8 [d])) -> (Const8 [c*d])
+(Mul16 (Const16 [c]) (Const16 [d])) -> (Const16 [c*d])
+(Mul32 (Const32 [c]) (Const32 [d])) -> (Const32 [c*d])
+(Mul64 (Const64 [c]) (Const64 [d])) -> (Const64 [c*d])
+
+(Lsh64x64 (Const64 [c]) (Const64 [d])) -> (Const64 [c << uint64(d)])
+(Rsh64x64 (Const64 [c]) (Const64 [d])) -> (Const64 [c >> uint64(d)])
+(Rsh64Ux64 (Const64 [c]) (Const64 [d])) -> (Const64 [int64(uint64(c) >> uint64(d))])
+(Lsh32x64 (Const32 [c]) (Const64 [d])) -> (Const32 [int64(int32(c) << uint64(d))])
+(Rsh32x64 (Const32 [c]) (Const64 [d])) -> (Const32 [int64(int32(c) >> uint64(d))])
+(Rsh32Ux64 (Const32 [c]) (Const64 [d])) -> (Const32 [int64(uint32(c) >> uint64(d))])
+(Lsh16x64 (Const16 [c]) (Const64 [d])) -> (Const16 [int64(int16(c) << uint64(d))])
+(Rsh16x64 (Const16 [c]) (Const64 [d])) -> (Const16 [int64(int16(c) >> uint64(d))])
+(Rsh16Ux64 (Const16 [c]) (Const64 [d])) -> (Const16 [int64(uint16(c) >> uint64(d))])
+(Lsh8x64 (Const8 [c]) (Const64 [d])) -> (Const8 [int64(int8(c) << uint64(d))])
+(Rsh8x64 (Const8 [c]) (Const64 [d])) -> (Const8 [int64(int8(c) >> uint64(d))])
+(Rsh8Ux64 (Const8 [c]) (Const64 [d])) -> (Const8 [int64(uint8(c) >> uint64(d))])
+
+(Lsh64x64 (Const64 [0]) _) -> (Const64 [0])
+(Rsh64x64 (Const64 [0]) _) -> (Const64 [0])
+(Rsh64Ux64 (Const64 [0]) _) -> (Const64 [0])
+(Lsh32x64 (Const32 [0]) _) -> (Const32 [0])
+(Rsh32x64 (Const32 [0]) _) -> (Const32 [0])
+(Rsh32Ux64 (Const32 [0]) _) -> (Const32 [0])
+(Lsh16x64 (Const16 [0]) _) -> (Const16 [0])
+(Rsh16x64 (Const16 [0]) _) -> (Const16 [0])
+(Rsh16Ux64 (Const16 [0]) _) -> (Const16 [0])
+(Lsh8x64 (Const8 [0]) _) -> (Const8 [0])
+(Rsh8x64 (Const8 [0]) _) -> (Const8 [0])
+(Rsh8Ux64 (Const8 [0]) _) -> (Const8 [0])
+
+(IsInBounds (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(inBounds32(c,d))])
+(IsInBounds (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(inBounds64(c,d))])
+(IsSliceInBounds (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(sliceInBounds32(c,d))])
+(IsSliceInBounds (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(sliceInBounds64(c,d))])
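The bounds-folding rules above rely on small helpers from the rewrite support code. These are hedged reimplementations for illustration — the real b2i, inBounds64, and sliceInBounds64 bodies are not copied from the compiler:

```go
package main

import "fmt"

// b2i converts a bool to the 0/1 auxint the ConstBool result carries.
func b2i(b bool) int64 {
	if b {
		return 1
	}
	return 0
}

// inBounds64: index bounds, 0 <= i < length.
func inBounds64(i, length int64) bool { return 0 <= i && i < length }

// sliceInBounds64: slicing to the end is legal, so 0 <= i <= length.
func sliceInBounds64(i, length int64) bool { return 0 <= i && i <= length }

func main() {
	fmt.Println(b2i(inBounds64(3, 3)))      // 0: index 3 is out of range for length 3
	fmt.Println(b2i(sliceInBounds64(3, 3))) // 1: s[3:] of a length-3 slice is fine
}
```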
+
+(Eq64 x x) -> (ConstBool [1])
+(Eq32 x x) -> (ConstBool [1])
+(Eq16 x x) -> (ConstBool [1])
+(Eq8 x x) -> (ConstBool [1])
+(Eq8 (ConstBool [c]) (ConstBool [d])) -> (ConstBool [b2i((int8(c) != 0) == (int8(d) != 0))])
+(Eq8 (ConstBool [0]) x) -> (Not x)
+(Eq8 (ConstBool [1]) x) -> x
+
+(Neq64 x x) -> (ConstBool [0])
+(Neq32 x x) -> (ConstBool [0])
+(Neq16 x x) -> (ConstBool [0])
+(Neq8 x x) -> (ConstBool [0])
+(Neq8 (ConstBool [c]) (ConstBool [d])) -> (ConstBool [b2i((int8(c) != 0) != (int8(d) != 0))])
+(Neq8 (ConstBool [0]) x) -> x
+(Neq8 (ConstBool [1]) x) -> (Not x)
+
+(Eq64 (Const64 <t> [c]) (Add64 (Const64 <t> [d]) x)) -> (Eq64 (Const64 <t> [c-d]) x)
+(Eq32 (Const32 <t> [c]) (Add32 (Const32 <t> [d]) x)) -> (Eq32 (Const32 <t> [c-d]) x)
+(Eq16 (Const16 <t> [c]) (Add16 (Const16 <t> [d]) x)) -> (Eq16 (Const16 <t> [c-d]) x)
+(Eq8 (Const8 <t> [c]) (Add8 (Const8 <t> [d]) x)) -> (Eq8 (Const8 <t> [c-d]) x)
+
+(Neq64 (Const64 <t> [c]) (Add64 (Const64 <t> [d]) x)) -> (Neq64 (Const64 <t> [c-d]) x)
+(Neq32 (Const32 <t> [c]) (Add32 (Const32 <t> [d]) x)) -> (Neq32 (Const32 <t> [c-d]) x)
+(Neq16 (Const16 <t> [c]) (Add16 (Const16 <t> [d]) x)) -> (Neq16 (Const16 <t> [c-d]) x)
+(Neq8 (Const8 <t> [c]) (Add8 (Const8 <t> [d]) x)) -> (Neq8 (Const8 <t> [c-d]) x)
+
+// canonicalize: swap arguments for commutative operations when one argument is a constant.
+(Eq64 x (Const64 <t> [c])) && x.Op != OpConst64 -> (Eq64 (Const64 <t> [c]) x)
+(Eq32 x (Const32 <t> [c])) && x.Op != OpConst32 -> (Eq32 (Const32 <t> [c]) x)
+(Eq16 x (Const16 <t> [c])) && x.Op != OpConst16 -> (Eq16 (Const16 <t> [c]) x)
+(Eq8 x (Const8 <t> [c])) && x.Op != OpConst8 -> (Eq8 (Const8 <t> [c]) x)
+(Eq8 x (ConstBool <t> [c])) && x.Op != OpConstBool -> (Eq8 (ConstBool <t> [c]) x)
+
+(Neq64 x (Const64 <t> [c])) && x.Op != OpConst64 -> (Neq64 (Const64 <t> [c]) x)
+(Neq32 x (Const32 <t> [c])) && x.Op != OpConst32 -> (Neq32 (Const32 <t> [c]) x)
+(Neq16 x (Const16 <t> [c])) && x.Op != OpConst16 -> (Neq16 (Const16 <t> [c]) x)
+(Neq8 x (Const8 <t> [c])) && x.Op != OpConst8 -> (Neq8 (Const8 <t> [c]) x)
+(Neq8 x (ConstBool <t> [c])) && x.Op != OpConstBool -> (Neq8 (ConstBool <t> [c]) x)
+
+(Add64 x (Const64 <t> [c])) && x.Op != OpConst64 -> (Add64 (Const64 <t> [c]) x)
+(Add32 x (Const32 <t> [c])) && x.Op != OpConst32 -> (Add32 (Const32 <t> [c]) x)
+(Add16 x (Const16 <t> [c])) && x.Op != OpConst16 -> (Add16 (Const16 <t> [c]) x)
+(Add8 x (Const8 <t> [c])) && x.Op != OpConst8 -> (Add8 (Const8 <t> [c]) x)
+
+(Mul64 x (Const64 <t> [c])) && x.Op != OpConst64 -> (Mul64 (Const64 <t> [c]) x)
+(Mul32 x (Const32 <t> [c])) && x.Op != OpConst32 -> (Mul32 (Const32 <t> [c]) x)
+(Mul16 x (Const16 <t> [c])) && x.Op != OpConst16 -> (Mul16 (Const16 <t> [c]) x)
+(Mul8 x (Const8 <t> [c])) && x.Op != OpConst8 -> (Mul8 (Const8 <t> [c]) x)
+
+(Sub64 x (Const64 <t> [c])) && x.Op != OpConst64 -> (Add64 (Const64 <t> [-c]) x)
+(Sub32 x (Const32 <t> [c])) && x.Op != OpConst32 -> (Add32 (Const32 <t> [-c]) x)
+(Sub16 x (Const16 <t> [c])) && x.Op != OpConst16 -> (Add16 (Const16 <t> [-c]) x)
+(Sub8 x (Const8 <t> [c])) && x.Op != OpConst8 -> (Add8 (Const8 <t> [-c]) x)
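These Sub-to-Add rewrites rely on two's-complement wraparound: x - c equals (-c) + x for every input, including c = math.MinInt64, where -c itself wraps back to MinInt64. A minimal Go check of that identity:

```go
package main

import (
	"fmt"
	"math"
)

// subViaAdd mirrors (Sub64 x (Const64 [c])) -> (Add64 (Const64 [-c]) x):
// subtraction of a constant becomes addition of its (possibly wrapped) negation.
func subViaAdd(x, c int64) int64 { return -c + x }

func main() {
	for _, tc := range []struct{ x, c int64 }{
		{10, 3},
		{0, math.MinInt64}, // -c wraps to MinInt64 itself, yet the identity holds
		{-5, 7},
	} {
		fmt.Println(tc.x-tc.c == subViaAdd(tc.x, tc.c))
	}
}
```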
+
+(And64 x (Const64 <t> [c])) && x.Op != OpConst64 -> (And64 (Const64 <t> [c]) x)
+(And32 x (Const32 <t> [c])) && x.Op != OpConst32 -> (And32 (Const32 <t> [c]) x)
+(And16 x (Const16 <t> [c])) && x.Op != OpConst16 -> (And16 (Const16 <t> [c]) x)
+(And8 x (Const8 <t> [c])) && x.Op != OpConst8 -> (And8 (Const8 <t> [c]) x)
+
+(Or64 x (Const64 <t> [c])) && x.Op != OpConst64 -> (Or64 (Const64 <t> [c]) x)
+(Or32 x (Const32 <t> [c])) && x.Op != OpConst32 -> (Or32 (Const32 <t> [c]) x)
+(Or16 x (Const16 <t> [c])) && x.Op != OpConst16 -> (Or16 (Const16 <t> [c]) x)
+(Or8 x (Const8 <t> [c])) && x.Op != OpConst8 -> (Or8 (Const8 <t> [c]) x)
+
+(Xor64 x (Const64 <t> [c])) && x.Op != OpConst64 -> (Xor64 (Const64 <t> [c]) x)
+(Xor32 x (Const32 <t> [c])) && x.Op != OpConst32 -> (Xor32 (Const32 <t> [c]) x)
+(Xor16 x (Const16 <t> [c])) && x.Op != OpConst16 -> (Xor16 (Const16 <t> [c]) x)
+(Xor8 x (Const8 <t> [c])) && x.Op != OpConst8 -> (Xor8 (Const8 <t> [c]) x)
+
+// Distribute multiplication c * (d+x) -> c*d + c*x. Useful for:
+// a[i].b = ...; a[i+1].b = ...
+(Mul64 (Const64 <t> [c]) (Add64 <t> (Const64 <t> [d]) x)) -> (Add64 (Const64 <t> [c*d]) (Mul64 <t> (Const64 <t> [c]) x))
+(Mul32 (Const32 <t> [c]) (Add32 <t> (Const32 <t> [d]) x)) -> (Add32 (Const32 <t> [c*d]) (Mul32 <t> (Const32 <t> [c]) x))
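Distribution stays exact in two's-complement arithmetic even when an intermediate product wraps, which is what makes this rewrite safe for address math like the a[i].b / a[i+1].b case above. A small Go check using plain wrapping int64 arithmetic:

```go
package main

import "fmt"

// distribute mirrors (Mul64 c (Add64 d x)) -> (Add64 c*d (Mul64 c x));
// wraparound on both sides is modulo 2^64, so the identity is exact.
func distribute(c, d, x int64) int64 { return c*d + c*x }

func main() {
	fmt.Println(3*(4+5) == distribute(3, 4, 5))
	var c, d, x int64 = 24, 1, 1 << 62 // the product overflows int64
	fmt.Println(c*(d+x) == distribute(c, d, x))
}
```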
+
+// rewrite shifts of 8/16/32-bit constants into 64-bit constants to reduce
+// the number of other rewrite rules needed for constant shifts
+(Lsh64x32 <t> x (Const32 [c])) -> (Lsh64x64 x (Const64 <t> [int64(uint32(c))]))
+(Lsh64x16 <t> x (Const16 [c])) -> (Lsh64x64 x (Const64 <t> [int64(uint16(c))]))
+(Lsh64x8 <t> x (Const8 [c])) -> (Lsh64x64 x (Const64 <t> [int64(uint8(c))]))
+(Rsh64x32 <t> x (Const32 [c])) -> (Rsh64x64 x (Const64 <t> [int64(uint32(c))]))
+(Rsh64x16 <t> x (Const16 [c])) -> (Rsh64x64 x (Const64 <t> [int64(uint16(c))]))
+(Rsh64x8 <t> x (Const8 [c])) -> (Rsh64x64 x (Const64 <t> [int64(uint8(c))]))
+(Rsh64Ux32 <t> x (Const32 [c])) -> (Rsh64Ux64 x (Const64 <t> [int64(uint32(c))]))
+(Rsh64Ux16 <t> x (Const16 [c])) -> (Rsh64Ux64 x (Const64 <t> [int64(uint16(c))]))
+(Rsh64Ux8 <t> x (Const8 [c])) -> (Rsh64Ux64 x (Const64 <t> [int64(uint8(c))]))
+
+(Lsh32x32 <t> x (Const32 [c])) -> (Lsh32x64 x (Const64 <t> [int64(uint32(c))]))
+(Lsh32x16 <t> x (Const16 [c])) -> (Lsh32x64 x (Const64 <t> [int64(uint16(c))]))
+(Lsh32x8 <t> x (Const8 [c])) -> (Lsh32x64 x (Const64 <t> [int64(uint8(c))]))
+(Rsh32x32 <t> x (Const32 [c])) -> (Rsh32x64 x (Const64 <t> [int64(uint32(c))]))
+(Rsh32x16 <t> x (Const16 [c])) -> (Rsh32x64 x (Const64 <t> [int64(uint16(c))]))
+(Rsh32x8 <t> x (Const8 [c])) -> (Rsh32x64 x (Const64 <t> [int64(uint8(c))]))
+(Rsh32Ux32 <t> x (Const32 [c])) -> (Rsh32Ux64 x (Const64 <t> [int64(uint32(c))]))
+(Rsh32Ux16 <t> x (Const16 [c])) -> (Rsh32Ux64 x (Const64 <t> [int64(uint16(c))]))
+(Rsh32Ux8 <t> x (Const8 [c])) -> (Rsh32Ux64 x (Const64 <t> [int64(uint8(c))]))
+
+(Lsh16x32 <t> x (Const32 [c])) -> (Lsh16x64 x (Const64 <t> [int64(uint32(c))]))
+(Lsh16x16 <t> x (Const16 [c])) -> (Lsh16x64 x (Const64 <t> [int64(uint16(c))]))
+(Lsh16x8 <t> x (Const8 [c])) -> (Lsh16x64 x (Const64 <t> [int64(uint8(c))]))
+(Rsh16x32 <t> x (Const32 [c])) -> (Rsh16x64 x (Const64 <t> [int64(uint32(c))]))
+(Rsh16x16 <t> x (Const16 [c])) -> (Rsh16x64 x (Const64 <t> [int64(uint16(c))]))
+(Rsh16x8 <t> x (Const8 [c])) -> (Rsh16x64 x (Const64 <t> [int64(uint8(c))]))
+(Rsh16Ux32 <t> x (Const32 [c])) -> (Rsh16Ux64 x (Const64 <t> [int64(uint32(c))]))
+(Rsh16Ux16 <t> x (Const16 [c])) -> (Rsh16Ux64 x (Const64 <t> [int64(uint16(c))]))
+(Rsh16Ux8 <t> x (Const8 [c])) -> (Rsh16Ux64 x (Const64 <t> [int64(uint8(c))]))
+
+(Lsh8x32 <t> x (Const32 [c])) -> (Lsh8x64 x (Const64 <t> [int64(uint32(c))]))
+(Lsh8x16 <t> x (Const16 [c])) -> (Lsh8x64 x (Const64 <t> [int64(uint16(c))]))
+(Lsh8x8 <t> x (Const8 [c])) -> (Lsh8x64 x (Const64 <t> [int64(uint8(c))]))
+(Rsh8x32 <t> x (Const32 [c])) -> (Rsh8x64 x (Const64 <t> [int64(uint32(c))]))
+(Rsh8x16 <t> x (Const16 [c])) -> (Rsh8x64 x (Const64 <t> [int64(uint16(c))]))
+(Rsh8x8 <t> x (Const8 [c])) -> (Rsh8x64 x (Const64 <t> [int64(uint8(c))]))
+(Rsh8Ux32 <t> x (Const32 [c])) -> (Rsh8Ux64 x (Const64 <t> [int64(uint32(c))]))
+(Rsh8Ux16 <t> x (Const16 [c])) -> (Rsh8Ux64 x (Const64 <t> [int64(uint16(c))]))
+(Rsh8Ux8 <t> x (Const8 [c])) -> (Rsh8Ux64 x (Const64 <t> [int64(uint8(c))]))
+
+// shifts by zero
+(Lsh64x64 x (Const64 [0])) -> x
+(Rsh64x64 x (Const64 [0])) -> x
+(Rsh64Ux64 x (Const64 [0])) -> x
+(Lsh32x64 x (Const64 [0])) -> x
+(Rsh32x64 x (Const64 [0])) -> x
+(Rsh32Ux64 x (Const64 [0])) -> x
+(Lsh16x64 x (Const64 [0])) -> x
+(Rsh16x64 x (Const64 [0])) -> x
+(Rsh16Ux64 x (Const64 [0])) -> x
+(Lsh8x64 x (Const64 [0])) -> x
+(Rsh8x64 x (Const64 [0])) -> x
+(Rsh8Ux64 x (Const64 [0])) -> x
+
+// a zero value shifted by any amount is still zero.
+// TODO: other bit sizes.
+(Lsh64x64 (Const64 [0]) _) -> (Const64 [0])
+(Rsh64x64 (Const64 [0]) _) -> (Const64 [0])
+(Rsh64Ux64 (Const64 [0]) _) -> (Const64 [0])
+(Lsh64x32 (Const64 [0]) _) -> (Const64 [0])
+(Rsh64x32 (Const64 [0]) _) -> (Const64 [0])
+(Rsh64Ux32 (Const64 [0]) _) -> (Const64 [0])
+(Lsh64x16 (Const64 [0]) _) -> (Const64 [0])
+(Rsh64x16 (Const64 [0]) _) -> (Const64 [0])
+(Rsh64Ux16 (Const64 [0]) _) -> (Const64 [0])
+(Lsh64x8 (Const64 [0]) _) -> (Const64 [0])
+(Rsh64x8 (Const64 [0]) _) -> (Const64 [0])
+(Rsh64Ux8 (Const64 [0]) _) -> (Const64 [0])
+
+// large left shifts of all values, and large right shifts of unsigned values, yield zero
+(Lsh64x64 _ (Const64 [c])) && uint64(c) >= 64 -> (Const64 [0])
+(Rsh64Ux64 _ (Const64 [c])) && uint64(c) >= 64 -> (Const64 [0])
+(Lsh32x64 _ (Const64 [c])) && uint64(c) >= 32 -> (Const32 [0])
+(Rsh32Ux64 _ (Const64 [c])) && uint64(c) >= 32 -> (Const32 [0])
+(Lsh16x64 _ (Const64 [c])) && uint64(c) >= 16 -> (Const16 [0])
+(Rsh16Ux64 _ (Const64 [c])) && uint64(c) >= 16 -> (Const16 [0])
+(Lsh8x64 _ (Const64 [c])) && uint64(c) >= 8 -> (Const8 [0])
+(Rsh8Ux64 _ (Const64 [c])) && uint64(c) >= 8 -> (Const8 [0])
+
+
+// combine const shifts
+(Lsh64x64 <t> (Lsh64x64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Lsh64x64 x (Const64 <t> [c+d]))
+(Lsh32x64 <t> (Lsh32x64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Lsh32x64 x (Const64 <t> [c+d]))
+(Lsh16x64 <t> (Lsh16x64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Lsh16x64 x (Const64 <t> [c+d]))
+(Lsh8x64 <t> (Lsh8x64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Lsh8x64 x (Const64 <t> [c+d]))
+
+(Rsh64x64 <t> (Rsh64x64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Rsh64x64 x (Const64 <t> [c+d]))
+(Rsh32x64 <t> (Rsh32x64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Rsh32x64 x (Const64 <t> [c+d]))
+(Rsh16x64 <t> (Rsh16x64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Rsh16x64 x (Const64 <t> [c+d]))
+(Rsh8x64 <t> (Rsh8x64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Rsh8x64 x (Const64 <t> [c+d]))
+
+(Rsh64Ux64 <t> (Rsh64Ux64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Rsh64Ux64 x (Const64 <t> [c+d]))
+(Rsh32Ux64 <t> (Rsh32Ux64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Rsh32Ux64 x (Const64 <t> [c+d]))
+(Rsh16Ux64 <t> (Rsh16Ux64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Rsh16Ux64 x (Const64 <t> [c+d]))
+(Rsh8Ux64 <t> (Rsh8Ux64 x (Const64 [c])) (Const64 [d])) && !uaddOvf(c,d) -> (Rsh8Ux64 x (Const64 <t> [c+d]))
+
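The !uaddOvf(c,d) guard matters when combining shift counts: two shifts compose into one only if the counts can be added without unsigned overflow. A Go sketch, where uaddOvf is a stand-in for the helper the rules assume and shifts of 64 or more are treated as zero, matching the large-shift rules above:

```go
package main

import "fmt"

// uaddOvf reports whether c+d overflows when treated as unsigned,
// a stand-in for the helper referenced by the rules.
func uaddOvf(c, d int64) bool { return uint64(c)+uint64(d) < uint64(c) }

// combineLsh mirrors (Lsh64x64 (Lsh64x64 x c) d) -> (Lsh64x64 x (c+d)).
func combineLsh(x, c, d uint64) uint64 {
	// in this IR a shift count >= 64 produces zero
	shift := func(v, s uint64) uint64 {
		if s >= 64 {
			return 0
		}
		return v << s
	}
	if !uaddOvf(int64(c), int64(d)) {
		return shift(x, c+d) // combined form
	}
	return shift(shift(x, c), d) // fall back to two shifts
}

func main() {
	fmt.Println(combineLsh(1, 3, 4) == 1<<7)
	fmt.Println(combineLsh(1, 40, 40) == 0) // combined count 80 >= 64: zero either way
}
```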
+// constant comparisons
+(Eq64 (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(int64(c) == int64(d))])
+(Eq32 (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(int32(c) == int32(d))])
+(Eq16 (Const16 [c]) (Const16 [d])) -> (ConstBool [b2i(int16(c) == int16(d))])
+(Eq8 (Const8 [c]) (Const8 [d])) -> (ConstBool [b2i(int8(c) == int8(d))])
+
+(Neq64 (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(int64(c) != int64(d))])
+(Neq32 (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(int32(c) != int32(d))])
+(Neq16 (Const16 [c]) (Const16 [d])) -> (ConstBool [b2i(int16(c) != int16(d))])
+(Neq8 (Const8 [c]) (Const8 [d])) -> (ConstBool [b2i(int8(c) != int8(d))])
+
+(Greater64 (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(int64(c) > int64(d))])
+(Greater32 (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(int32(c) > int32(d))])
+(Greater16 (Const16 [c]) (Const16 [d])) -> (ConstBool [b2i(int16(c) > int16(d))])
+(Greater8 (Const8 [c]) (Const8 [d])) -> (ConstBool [b2i(int8(c) > int8(d))])
+
+(Greater64U (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(uint64(c) > uint64(d))])
+(Greater32U (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(uint32(c) > uint32(d))])
+(Greater16U (Const16 [c]) (Const16 [d])) -> (ConstBool [b2i(uint16(c) > uint16(d))])
+(Greater8U (Const8 [c]) (Const8 [d])) -> (ConstBool [b2i(uint8(c) > uint8(d))])
+
+(Geq64 (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(int64(c) >= int64(d))])
+(Geq32 (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(int32(c) >= int32(d))])
+(Geq16 (Const16 [c]) (Const16 [d])) -> (ConstBool [b2i(int16(c) >= int16(d))])
+(Geq8 (Const8 [c]) (Const8 [d])) -> (ConstBool [b2i(int8(c) >= int8(d))])
+
+(Geq64U (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(uint64(c) >= uint64(d))])
+(Geq32U (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(uint32(c) >= uint32(d))])
+(Geq16U (Const16 [c]) (Const16 [d])) -> (ConstBool [b2i(uint16(c) >= uint16(d))])
+(Geq8U (Const8 [c]) (Const8 [d])) -> (ConstBool [b2i(uint8(c) >= uint8(d))])
+
+(Less64 (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(int64(c) < int64(d))])
+(Less32 (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(int32(c) < int32(d))])
+(Less16 (Const16 [c]) (Const16 [d])) -> (ConstBool [b2i(int16(c) < int16(d))])
+(Less8 (Const8 [c]) (Const8 [d])) -> (ConstBool [b2i(int8(c) < int8(d))])
+
+(Less64U (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(uint64(c) < uint64(d))])
+(Less32U (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(uint32(c) < uint32(d))])
+(Less16U (Const16 [c]) (Const16 [d])) -> (ConstBool [b2i(uint16(c) < uint16(d))])
+(Less8U (Const8 [c]) (Const8 [d])) -> (ConstBool [b2i(uint8(c) < uint8(d))])
+
+(Leq64 (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(int64(c) <= int64(d))])
+(Leq32 (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(int32(c) <= int32(d))])
+(Leq16 (Const16 [c]) (Const16 [d])) -> (ConstBool [b2i(int16(c) <= int16(d))])
+(Leq8 (Const8 [c]) (Const8 [d])) -> (ConstBool [b2i(int8(c) <= int8(d))])
+
+(Leq64U (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(uint64(c) <= uint64(d))])
+(Leq32U (Const32 [c]) (Const32 [d])) -> (ConstBool [b2i(uint32(c) <= uint32(d))])
+(Leq16U (Const16 [c]) (Const16 [d])) -> (ConstBool [b2i(uint16(c) <= uint16(d))])
+(Leq8U (Const8 [c]) (Const8 [d])) -> (ConstBool [b2i(uint8(c) <= uint8(d))])
+
+// simplifications
+(Or64 x x) -> x
+(Or32 x x) -> x
+(Or16 x x) -> x
+(Or8 x x) -> x
+(Or64 (Const64 [0]) x) -> x
+(Or32 (Const32 [0]) x) -> x
+(Or16 (Const16 [0]) x) -> x
+(Or8 (Const8 [0]) x) -> x
+(Or64 (Const64 [-1]) _) -> (Const64 [-1])
+(Or32 (Const32 [-1]) _) -> (Const32 [-1])
+(Or16 (Const16 [-1]) _) -> (Const16 [-1])
+(Or8 (Const8 [-1]) _) -> (Const8 [-1])
+(And64 x x) -> x
+(And32 x x) -> x
+(And16 x x) -> x
+(And8 x x) -> x
+(And64 (Const64 [-1]) x) -> x
+(And32 (Const32 [-1]) x) -> x
+(And16 (Const16 [-1]) x) -> x
+(And8 (Const8 [-1]) x) -> x
+(And64 (Const64 [0]) _) -> (Const64 [0])
+(And32 (Const32 [0]) _) -> (Const32 [0])
+(And16 (Const16 [0]) _) -> (Const16 [0])
+(And8 (Const8 [0]) _) -> (Const8 [0])
+(Xor64 x x) -> (Const64 [0])
+(Xor32 x x) -> (Const32 [0])
+(Xor16 x x) -> (Const16 [0])
+(Xor8 x x) -> (Const8 [0])
+(Xor64 (Const64 [0]) x) -> x
+(Xor32 (Const32 [0]) x) -> x
+(Xor16 (Const16 [0]) x) -> x
+(Xor8 (Const8 [0]) x) -> x
+(Add64 (Const64 [0]) x) -> x
+(Add32 (Const32 [0]) x) -> x
+(Add16 (Const16 [0]) x) -> x
+(Add8 (Const8 [0]) x) -> x
+(Sub64 x x) -> (Const64 [0])
+(Sub32 x x) -> (Const32 [0])
+(Sub16 x x) -> (Const16 [0])
+(Sub8 x x) -> (Const8 [0])
+(Mul64 (Const64 [0]) _) -> (Const64 [0])
+(Mul32 (Const32 [0]) _) -> (Const32 [0])
+(Mul16 (Const16 [0]) _) -> (Const16 [0])
+(Mul8 (Const8 [0]) _) -> (Const8 [0])
+(Com8 (Com8 x)) -> x
+(Com16 (Com16 x)) -> x
+(Com32 (Com32 x)) -> x
+(Com64 (Com64 x)) -> x
+(Neg8 (Sub8 x y)) -> (Sub8 y x)
+(Neg16 (Sub16 x y)) -> (Sub16 y x)
+(Neg32 (Sub32 x y)) -> (Sub32 y x)
+(Neg64 (Sub64 x y)) -> (Sub64 y x)
+
+// Rewrite AND of consts as shifts if possible, slightly faster for 32/64-bit operands
+// leading zeros can be shifted left, then right
+(And64 <t> (Const64 [y]) x) && nlz(y) + nto(y) == 64 -> (Rsh64Ux64 (Lsh64x64 <t> x (Const64 <t> [nlz(y)])) (Const64 <t> [nlz(y)]))
+(And32 <t> (Const32 [y]) x) && nlz(int64(int32(y))) + nto(int64(int32(y))) == 64 -> (Rsh32Ux32 (Lsh32x32 <t> x (Const32 <t> [nlz(int64(int32(y)))-32])) (Const32 <t> [nlz(int64(int32(y)))-32]))
+// trailing zeros can be shifted right, then left
+(And64 <t> (Const64 [y]) x) && nlo(y) + ntz(y) == 64 -> (Lsh64x64 (Rsh64Ux64 <t> x (Const64 <t> [ntz(y)])) (Const64 <t> [ntz(y)]))
+(And32 <t> (Const32 [y]) x) && nlo(int64(int32(y))) + ntz(int64(int32(y))) == 64 -> (Lsh32x32 (Rsh32Ux32 <t> x (Const32 <t> [ntz(int64(int32(y)))])) (Const32 <t> [ntz(int64(int32(y)))]))
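nlz and nto count leading zeros and trailing ones; when they sum to 64 the mask is a contiguous low block of ones, so the AND can become an unsigned shift left then right. A Go sketch of the leading-zeros case, using math/bits in place of the rules' helpers:

```go
package main

import (
	"fmt"
	"math/bits"
)

// maskViaShifts mirrors (And64 (Const64 [y]) x) -> (Rsh64Ux64 (Lsh64x64 x nlz(y)) nlz(y)),
// valid only when y's leading zeros plus trailing ones cover all 64 bits.
func maskViaShifts(x, y uint64) (uint64, bool) {
	nlz := uint(bits.LeadingZeros64(y))
	nto := uint(bits.TrailingZeros64(^y)) // number of trailing one bits in y
	if nlz+nto != 64 {
		return 0, false // mask is not a contiguous low block of ones
	}
	return (x << nlz) >> nlz, true
}

func main() {
	const x, y = 0x123456789ABCDEF0, 0x00FFFFFFFFFFFFFF
	got, ok := maskViaShifts(x, y)
	fmt.Println(ok, got == x&y)
}
```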
+
+// simplifications often used for lengths, e.g. len(s[i:i+5]) == 5
+(Sub64 (Add64 x y) x) -> y
+(Sub64 (Add64 x y) y) -> x
+(Sub32 (Add32 x y) x) -> y
+(Sub32 (Add32 x y) y) -> x
+(Sub16 (Add16 x y) x) -> y
+(Sub16 (Add16 x y) y) -> x
+(Sub8 (Add8 x y) x) -> y
+(Sub8 (Add8 x y) y) -> x
+
+// basic phi simplifications
+(Phi (Const8 [c]) (Const8 [d])) && int8(c) == int8(d) -> (Const8 [c])
+(Phi (Const16 [c]) (Const16 [d])) && int16(c) == int16(d) -> (Const16 [c])
+(Phi (Const32 [c]) (Const32 [d])) && int32(c) == int32(d) -> (Const32 [c])
+(Phi (Const64 [c]) (Const64 [c])) -> (Const64 [c])
+
+// user nil checks
+(NeqPtr p (ConstNil)) -> (IsNonNil p)
+(NeqPtr (ConstNil) p) -> (IsNonNil p)
+(EqPtr p (ConstNil)) -> (Not (IsNonNil p))
+(EqPtr (ConstNil) p) -> (Not (IsNonNil p))
+
+// slice and interface comparisons
+// The frontend ensures that we can only compare against nil,
+// so we need only compare the first word (interface type or slice ptr).
+(EqInter x y) -> (EqPtr (ITab x) (ITab y))
+(NeqInter x y) -> (NeqPtr (ITab x) (ITab y))
+(EqSlice x y) -> (EqPtr (SlicePtr x) (SlicePtr y))
+(NeqSlice x y) -> (NeqPtr (SlicePtr x) (SlicePtr y))
+
+
+// Load of store of same address, with compatibly typed value and same size
+(Load <t1> p1 (Store [w] p2 x _)) && isSamePtr(p1,p2) && t1.Compare(x.Type)==CMPeq && w == t1.Size() -> x
+
+
+// indexing operations
+// Note: bounds check has already been done
+(ArrayIndex <t> [0] (Load ptr mem)) -> @v.Args[0].Block (Load <t> ptr mem)
+(PtrIndex <t> ptr idx) && config.PtrSize == 4 -> (AddPtr ptr (Mul32 <config.fe.TypeInt()> idx (Const32 <config.fe.TypeInt()> [t.Elem().Size()])))
+(PtrIndex <t> ptr idx) && config.PtrSize == 8 -> (AddPtr ptr (Mul64 <config.fe.TypeInt()> idx (Const64 <config.fe.TypeInt()> [t.Elem().Size()])))
+
+// struct operations
+(StructSelect (StructMake1 x)) -> x
+(StructSelect [0] (StructMake2 x _)) -> x
+(StructSelect [1] (StructMake2 _ x)) -> x
+(StructSelect [0] (StructMake3 x _ _)) -> x
+(StructSelect [1] (StructMake3 _ x _)) -> x
+(StructSelect [2] (StructMake3 _ _ x)) -> x
+(StructSelect [0] (StructMake4 x _ _ _)) -> x
+(StructSelect [1] (StructMake4 _ x _ _)) -> x
+(StructSelect [2] (StructMake4 _ _ x _)) -> x
+(StructSelect [3] (StructMake4 _ _ _ x)) -> x
+
+(Load <t> _ _) && t.IsStruct() && t.NumFields() == 0 && config.fe.CanSSA(t) ->
+ (StructMake0)
+(Load <t> ptr mem) && t.IsStruct() && t.NumFields() == 1 && config.fe.CanSSA(t) ->
+ (StructMake1
+ (Load <t.FieldType(0)> ptr mem))
+(Load <t> ptr mem) && t.IsStruct() && t.NumFields() == 2 && config.fe.CanSSA(t) ->
+ (StructMake2
+ (Load <t.FieldType(0)> ptr mem)
+ (Load <t.FieldType(1)> (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] ptr) mem))
+(Load <t> ptr mem) && t.IsStruct() && t.NumFields() == 3 && config.fe.CanSSA(t) ->
+ (StructMake3
+ (Load <t.FieldType(0)> ptr mem)
+ (Load <t.FieldType(1)> (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] ptr) mem)
+ (Load <t.FieldType(2)> (OffPtr <t.FieldType(2).PtrTo()> [t.FieldOff(2)] ptr) mem))
+(Load <t> ptr mem) && t.IsStruct() && t.NumFields() == 4 && config.fe.CanSSA(t) ->
+ (StructMake4
+ (Load <t.FieldType(0)> ptr mem)
+ (Load <t.FieldType(1)> (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] ptr) mem)
+ (Load <t.FieldType(2)> (OffPtr <t.FieldType(2).PtrTo()> [t.FieldOff(2)] ptr) mem)
+ (Load <t.FieldType(3)> (OffPtr <t.FieldType(3).PtrTo()> [t.FieldOff(3)] ptr) mem))
+
+(StructSelect [i] (Load <t> ptr mem)) && !config.fe.CanSSA(t) ->
+ @v.Args[0].Block (Load <v.Type> (OffPtr <v.Type.PtrTo()> [t.FieldOff(i)] ptr) mem)
+
+(Store _ (StructMake0) mem) -> mem
+(Store dst (StructMake1 <t> f0) mem) ->
+ (Store [t.FieldType(0).Size()] dst f0 mem)
+(Store dst (StructMake2 <t> f0 f1) mem) ->
+ (Store [t.FieldType(1).Size()]
+ (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] dst)
+ f1
+ (Store [t.FieldType(0).Size()] dst f0 mem))
+(Store dst (StructMake3 <t> f0 f1 f2) mem) ->
+ (Store [t.FieldType(2).Size()]
+ (OffPtr <t.FieldType(2).PtrTo()> [t.FieldOff(2)] dst)
+ f2
+ (Store [t.FieldType(1).Size()]
+ (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] dst)
+ f1
+ (Store [t.FieldType(0).Size()] dst f0 mem)))
+(Store dst (StructMake4 <t> f0 f1 f2 f3) mem) ->
+ (Store [t.FieldType(3).Size()]
+ (OffPtr <t.FieldType(3).PtrTo()> [t.FieldOff(3)] dst)
+ f3
+ (Store [t.FieldType(2).Size()]
+ (OffPtr <t.FieldType(2).PtrTo()> [t.FieldOff(2)] dst)
+ f2
+ (Store [t.FieldType(1).Size()]
+ (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] dst)
+ f1
+ (Store [t.FieldType(0).Size()] dst f0 mem))))
+
+// complex ops
+(ComplexReal (ComplexMake real _ )) -> real
+(ComplexImag (ComplexMake _ imag )) -> imag
+
+(Load <t> ptr mem) && t.IsComplex() && t.Size() == 8 ->
+ (ComplexMake
+ (Load <config.fe.TypeFloat32()> ptr mem)
+ (Load <config.fe.TypeFloat32()>
+ (OffPtr <config.fe.TypeFloat32().PtrTo()> [4] ptr)
+ mem)
+ )
+(Store [8] dst (ComplexMake real imag) mem) ->
+ (Store [4]
+ (OffPtr <config.fe.TypeFloat32().PtrTo()> [4] dst)
+ imag
+ (Store [4] dst real mem))
+
+(Load <t> ptr mem) && t.IsComplex() && t.Size() == 16 ->
+ (ComplexMake
+ (Load <config.fe.TypeFloat64()> ptr mem)
+ (Load <config.fe.TypeFloat64()>
+ (OffPtr <config.fe.TypeFloat64().PtrTo()> [8] ptr)
+ mem)
+ )
+(Store [16] dst (ComplexMake real imag) mem) ->
+ (Store [8]
+ (OffPtr <config.fe.TypeFloat64().PtrTo()> [8] dst)
+ imag
+ (Store [8] dst real mem))
+
+// string ops
+(StringPtr (StringMake ptr _)) -> ptr
+(StringLen (StringMake _ len)) -> len
+(ConstString {s}) && config.PtrSize == 4 && s.(string) == "" ->
+ (StringMake (ConstNil) (Const32 <config.fe.TypeInt()> [0]))
+(ConstString {s}) && config.PtrSize == 8 && s.(string) == "" ->
+ (StringMake (ConstNil) (Const64 <config.fe.TypeInt()> [0]))
+(ConstString {s}) && config.PtrSize == 4 && s.(string) != "" ->
+ (StringMake
+ (Addr <config.fe.TypeBytePtr()> {config.fe.StringData(s.(string))}
+ (SB))
+ (Const32 <config.fe.TypeInt()> [int64(len(s.(string)))]))
+(ConstString {s}) && config.PtrSize == 8 && s.(string) != "" ->
+ (StringMake
+ (Addr <config.fe.TypeBytePtr()> {config.fe.StringData(s.(string))}
+ (SB))
+ (Const64 <config.fe.TypeInt()> [int64(len(s.(string)))]))
+(Load <t> ptr mem) && t.IsString() ->
+ (StringMake
+ (Load <config.fe.TypeBytePtr()> ptr mem)
+ (Load <config.fe.TypeInt()>
+ (OffPtr <config.fe.TypeInt().PtrTo()> [config.PtrSize] ptr)
+ mem))
+(Store [2*config.PtrSize] dst (StringMake ptr len) mem) ->
+ (Store [config.PtrSize]
+ (OffPtr <config.fe.TypeInt().PtrTo()> [config.PtrSize] dst)
+ len
+ (Store [config.PtrSize] dst ptr mem))
+
+// slice ops
+(SlicePtr (SliceMake ptr _ _ )) -> ptr
+(SliceLen (SliceMake _ len _)) -> len
+(SliceCap (SliceMake _ _ cap)) -> cap
+(ConstSlice) && config.PtrSize == 4 ->
+ (SliceMake
+ (ConstNil <config.fe.TypeBytePtr()>)
+ (Const32 <config.fe.TypeInt()> [0])
+ (Const32 <config.fe.TypeInt()> [0]))
+(ConstSlice) && config.PtrSize == 8 ->
+ (SliceMake
+ (ConstNil <config.fe.TypeBytePtr()>)
+ (Const64 <config.fe.TypeInt()> [0])
+ (Const64 <config.fe.TypeInt()> [0]))
+
+(Load <t> ptr mem) && t.IsSlice() ->
+ (SliceMake
+ (Load <config.fe.TypeBytePtr()> ptr mem)
+ (Load <config.fe.TypeInt()>
+ (OffPtr <config.fe.TypeInt().PtrTo()> [config.PtrSize] ptr)
+ mem)
+ (Load <config.fe.TypeInt()>
+ (OffPtr <config.fe.TypeInt().PtrTo()> [2*config.PtrSize] ptr)
+ mem))
+(Store [3*config.PtrSize] dst (SliceMake ptr len cap) mem) ->
+ (Store [config.PtrSize]
+ (OffPtr <config.fe.TypeInt().PtrTo()> [2*config.PtrSize] dst)
+ cap
+ (Store [config.PtrSize]
+ (OffPtr <config.fe.TypeInt().PtrTo()> [config.PtrSize] dst)
+ len
+ (Store [config.PtrSize] dst ptr mem)))
+
+// interface ops
+(ITab (IMake itab _)) -> itab
+(IData (IMake _ data)) -> data
+(ConstInterface) ->
+ (IMake
+ (ConstNil <config.fe.TypeBytePtr()>)
+ (ConstNil <config.fe.TypeBytePtr()>))
+(Load <t> ptr mem) && t.IsInterface() ->
+ (IMake
+ (Load <config.fe.TypeBytePtr()> ptr mem)
+ (Load <config.fe.TypeBytePtr()>
+ (OffPtr <config.fe.TypeBytePtr().PtrTo()> [config.PtrSize] ptr)
+ mem))
+(Store [2*config.PtrSize] dst (IMake itab data) mem) ->
+ (Store [config.PtrSize]
+ (OffPtr <config.fe.TypeBytePtr().PtrTo()> [config.PtrSize] dst)
+ data
+ (Store [config.PtrSize] dst itab mem))
+
+// un-SSAable values use mem->mem copies
+(Store [size] dst (Load <t> src mem) mem) && !config.fe.CanSSA(t) -> (Move [size] dst src mem)
+(Store [size] dst (Load <t> src mem) (VarDef {x} mem)) && !config.fe.CanSSA(t) -> (Move [size] dst src (VarDef {x} mem))
+
+(Check (NilCheck (GetG _) _) next) -> (Plain nil next)
+
+(If (Not cond) yes no) -> (If cond no yes)
+(If (ConstBool [c]) yes no) && c == 1 -> (First nil yes no)
+(If (ConstBool [c]) yes no) && c == 0 -> (First nil no yes)
+
+// Get rid of Convert ops for pointer arithmetic on unsafe.Pointer.
+(Convert (Add64 (Convert ptr mem) off) mem) -> (Add64 ptr off)
+(Convert (Add64 off (Convert ptr mem)) mem) -> (Add64 ptr off)
+(Convert (Convert ptr mem) mem) -> ptr
+
+// Decompose compound argument values
+(Arg {n} [off]) && v.Type.IsString() ->
+ (StringMake
+ (Arg <config.fe.TypeBytePtr()> {n} [off])
+ (Arg <config.fe.TypeInt()> {n} [off+config.PtrSize]))
+
+(Arg {n} [off]) && v.Type.IsSlice() ->
+ (SliceMake
+ (Arg <config.fe.TypeBytePtr()> {n} [off])
+ (Arg <config.fe.TypeInt()> {n} [off+config.PtrSize])
+ (Arg <config.fe.TypeInt()> {n} [off+2*config.PtrSize]))
+
+(Arg {n} [off]) && v.Type.IsInterface() ->
+ (IMake
+ (Arg <config.fe.TypeBytePtr()> {n} [off])
+ (Arg <config.fe.TypeBytePtr()> {n} [off+config.PtrSize]))
+
+(Arg {n} [off]) && v.Type.IsComplex() && v.Type.Size() == 16 ->
+ (ComplexMake
+ (Arg <config.fe.TypeFloat64()> {n} [off])
+ (Arg <config.fe.TypeFloat64()> {n} [off+8]))
+
+(Arg {n} [off]) && v.Type.IsComplex() && v.Type.Size() == 8 ->
+ (ComplexMake
+ (Arg <config.fe.TypeFloat32()> {n} [off])
+ (Arg <config.fe.TypeFloat32()> {n} [off+4]))
+
+(Arg <t>) && t.IsStruct() && t.NumFields() == 0 && config.fe.CanSSA(t) ->
+ (StructMake0)
+(Arg <t> {n} [off]) && t.IsStruct() && t.NumFields() == 1 && config.fe.CanSSA(t) ->
+ (StructMake1
+ (Arg <t.FieldType(0)> {n} [off+t.FieldOff(0)]))
+(Arg <t> {n} [off]) && t.IsStruct() && t.NumFields() == 2 && config.fe.CanSSA(t) ->
+ (StructMake2
+ (Arg <t.FieldType(0)> {n} [off+t.FieldOff(0)])
+ (Arg <t.FieldType(1)> {n} [off+t.FieldOff(1)]))
+(Arg <t> {n} [off]) && t.IsStruct() && t.NumFields() == 3 && config.fe.CanSSA(t) ->
+ (StructMake3
+ (Arg <t.FieldType(0)> {n} [off+t.FieldOff(0)])
+ (Arg <t.FieldType(1)> {n} [off+t.FieldOff(1)])
+ (Arg <t.FieldType(2)> {n} [off+t.FieldOff(2)]))
+(Arg <t> {n} [off]) && t.IsStruct() && t.NumFields() == 4 && config.fe.CanSSA(t) ->
+ (StructMake4
+ (Arg <t.FieldType(0)> {n} [off+t.FieldOff(0)])
+ (Arg <t.FieldType(1)> {n} [off+t.FieldOff(1)])
+ (Arg <t.FieldType(2)> {n} [off+t.FieldOff(2)])
+ (Arg <t.FieldType(3)> {n} [off+t.FieldOff(3)]))
+
+// strength reduction of divide by a constant.
+// Note: frontend does <=32 bits. We only need to do 64 bits here.
+// TODO: Do them all here?
+
+// Div/mod by 1. Currently handled by frontend.
+//(Div64 n (Const64 [1])) -> n
+//(Div64u n (Const64 [1])) -> n
+//(Mod64 n (Const64 [1])) -> (Const64 [0])
+//(Mod64u n (Const64 [1])) -> (Const64 [0])
+
+// Unsigned divide by power of 2. Currently handled by frontend.
+//(Div64u <t> n (Const64 [c])) && isPowerOfTwo(c) -> (Rsh64Ux64 n (Const64 <t> [log2(c)]))
+//(Mod64u <t> n (Const64 [c])) && isPowerOfTwo(c) -> (And64 n (Const64 <t> [c-1]))
+
+// Signed divide by power of 2. Currently handled by frontend.
+// n / c = n >> log(c) if n >= 0
+// = (n+c-1) >> log(c) if n < 0
+// We conditionally add c-1 by adding n>>63>>(64-log(c)) (first shift signed, second shift unsigned).
+//(Div64 <t> n (Const64 [c])) && isPowerOfTwo(c) ->
+// (Rsh64x64
+// (Add64 <t>
+// n
+// (Rsh64Ux64 <t>
+// (Rsh64x64 <t> n (Const64 <t> [63]))
+// (Const64 <t> [64-log2(c)])))
+// (Const64 <t> [log2(c)]))
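The bias term in the commented-out signed rule comes from n>>63, which is all ones exactly when n is negative; shifting that unsigned by 64-log2(c) leaves just the low log2(c) bits, i.e. c-1. A Go check of the idiom for c = 8, verifying it matches Go's truncating division:

```go
package main

import "fmt"

// divPow2 mirrors the commented-out signed-divide rule for c = 1<<log2c:
// add c-1 only when n is negative, with the bias extracted from n's sign bits.
func divPow2(n int64, log2c uint) int64 {
	bias := int64(uint64(n>>63) >> (64 - log2c)) // 0 if n >= 0, c-1 if n < 0
	return (n + bias) >> log2c
}

func main() {
	for _, n := range []int64{17, -17, -1, -8} {
		fmt.Println(divPow2(n, 3) == n/8) // Go's / truncates toward zero
	}
}
```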
+
+// Unsigned divide, not a power of 2. Strength reduce to a multiply.
+(Div64u <t> x (Const64 [c])) && umagic64ok(c) && !umagic64a(c) ->
+ (Rsh64Ux64
+ (Hmul64u <t>
+ (Const64 <t> [umagic64m(c)])
+ x)
+ (Const64 <t> [umagic64s(c)]))
+(Div64u <t> x (Const64 [c])) && umagic64ok(c) && umagic64a(c) ->
+ (Rsh64Ux64
+ (Avg64u <t>
+ (Hmul64u <t>
+ x
+ (Const64 <t> [umagic64m(c)]))
+ x)
+ (Const64 <t> [umagic64s(c)-1]))
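The umagic64m/umagic64s helpers pick a multiplier m and shift s such that x/c equals the high 64 bits of x*m shifted right by s. As an illustration, using the classic (assumed) magic pair for c = 3, m = 0xAAAAAAAAAAAAAAAB = (2^65+1)/3 and s = 1:

```go
package main

import (
	"fmt"
	"math/bits"
)

// divByThree mirrors (Div64u x (Const64 [3])) -> (Rsh64Ux64 (Hmul64u m x) s)
// with the classic magic pair for c = 3.
func divByThree(x uint64) uint64 {
	const m = 0xAAAAAAAAAAAAAAAB // (1<<65 + 1) / 3
	hi, _ := bits.Mul64(x, m)    // Hmul64u: high 64 bits of the full 128-bit product
	return hi >> 1
}

func main() {
	for _, x := range []uint64{0, 1, 100, 1<<64 - 1} {
		fmt.Println(divByThree(x) == x/3)
	}
}
```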
+
+// Signed divide, not a power of 2. Strength reduce to a multiply.
+(Div64 <t> x (Const64 [c])) && c > 0 && smagic64ok(c) && smagic64m(c) > 0 ->
+ (Sub64 <t>
+ (Rsh64x64 <t>
+ (Hmul64 <t>
+ (Const64 <t> [smagic64m(c)])
+ x)
+ (Const64 <t> [smagic64s(c)]))
+ (Rsh64x64 <t>
+ x
+ (Const64 <t> [63])))
+(Div64 <t> x (Const64 [c])) && c > 0 && smagic64ok(c) && smagic64m(c) < 0 ->
+ (Sub64 <t>
+ (Rsh64x64 <t>
+ (Add64 <t>
+ (Hmul64 <t>
+ (Const64 <t> [smagic64m(c)])
+ x)
+ x)
+ (Const64 <t> [smagic64s(c)]))
+ (Rsh64x64 <t>
+ x
+ (Const64 <t> [63])))
+(Div64 <t> x (Const64 [c])) && c < 0 && smagic64ok(c) && smagic64m(c) > 0 ->
+ (Neg64 <t>
+ (Sub64 <t>
+ (Rsh64x64 <t>
+ (Hmul64 <t>
+ (Const64 <t> [smagic64m(c)])
+ x)
+ (Const64 <t> [smagic64s(c)]))
+ (Rsh64x64 <t>
+ x
+ (Const64 <t> [63]))))
+(Div64 <t> x (Const64 [c])) && c < 0 && smagic64ok(c) && smagic64m(c) < 0 ->
+ (Neg64 <t>
+ (Sub64 <t>
+ (Rsh64x64 <t>
+ (Add64 <t>
+ (Hmul64 <t>
+ (Const64 <t> [smagic64m(c)])
+ x)
+ x)
+ (Const64 <t> [smagic64s(c)]))
+ (Rsh64x64 <t>
+ x
+ (Const64 <t> [63]))))
+
+// A%B = A-(A/B*B).
+// This implements % with two * and a bunch of ancillary ops.
+// One of the * is free if the user's code also computes A/B.
+(Mod64 <t> x (Const64 [c])) && smagic64ok(c) -> (Sub64 x (Mul64 <t> (Div64 <t> x (Const64 <t> [c])) (Const64 <t> [c])))
+(Mod64u <t> x (Const64 [c])) && umagic64ok(c) -> (Sub64 x (Mul64 <t> (Div64u <t> x (Const64 <t> [c])) (Const64 <t> [c])))
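The Mod rules restate A%B = A - (A/B)*B so that the strength-reduced divides above get reused; a quick Go confirmation of the identity under Go's truncating division:

```go
package main

import "fmt"

// modViaDiv mirrors (Mod64 x c) -> (Sub64 x (Mul64 (Div64 x c) c)).
func modViaDiv(x, c int64) int64 { return x - (x/c)*c }

func main() {
	for _, tc := range []struct{ x, c int64 }{{17, 5}, {-17, 5}, {17, -5}} {
		fmt.Println(modViaDiv(tc.x, tc.c) == tc.x%tc.c)
	}
}
```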
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+var genericOps = []opData{
+ // 2-input arithmetic
+	// Types must be consistent with Go typing. Add, for example, must take two values
+	// of the same type and produce that same type.
+ {name: "Add8", argLength: 2, commutative: true}, // arg0 + arg1
+ {name: "Add16", argLength: 2, commutative: true},
+ {name: "Add32", argLength: 2, commutative: true},
+ {name: "Add64", argLength: 2, commutative: true},
+ {name: "AddPtr", argLength: 2}, // For address calculations. arg0 is a pointer and arg1 is an int.
+ {name: "Add32F", argLength: 2},
+ {name: "Add64F", argLength: 2},
+ // TODO: Add64C, Add128C
+
+ {name: "Sub8", argLength: 2}, // arg0 - arg1
+ {name: "Sub16", argLength: 2},
+ {name: "Sub32", argLength: 2},
+ {name: "Sub64", argLength: 2},
+ {name: "SubPtr", argLength: 2},
+ {name: "Sub32F", argLength: 2},
+ {name: "Sub64F", argLength: 2},
+
+ {name: "Mul8", argLength: 2, commutative: true}, // arg0 * arg1
+ {name: "Mul16", argLength: 2, commutative: true},
+ {name: "Mul32", argLength: 2, commutative: true},
+ {name: "Mul64", argLength: 2, commutative: true},
+ {name: "Mul32F", argLength: 2},
+ {name: "Mul64F", argLength: 2},
+
+ {name: "Div32F", argLength: 2}, // arg0 / arg1
+ {name: "Div64F", argLength: 2},
+
+ {name: "Hmul8", argLength: 2}, // (arg0 * arg1) >> width
+ {name: "Hmul8u", argLength: 2},
+ {name: "Hmul16", argLength: 2},
+ {name: "Hmul16u", argLength: 2},
+ {name: "Hmul32", argLength: 2},
+ {name: "Hmul32u", argLength: 2},
+ {name: "Hmul64", argLength: 2},
+ {name: "Hmul64u", argLength: 2},
+
+ // Weird special instruction for strength reduction of divides.
+ {name: "Avg64u", argLength: 2}, // (uint64(arg0) + uint64(arg1)) / 2, correct to all 64 bits.
+
+ {name: "Div8", argLength: 2}, // arg0 / arg1
+ {name: "Div8u", argLength: 2},
+ {name: "Div16", argLength: 2},
+ {name: "Div16u", argLength: 2},
+ {name: "Div32", argLength: 2},
+ {name: "Div32u", argLength: 2},
+ {name: "Div64", argLength: 2},
+ {name: "Div64u", argLength: 2},
+
+ {name: "Mod8", argLength: 2}, // arg0 % arg1
+ {name: "Mod8u", argLength: 2},
+ {name: "Mod16", argLength: 2},
+ {name: "Mod16u", argLength: 2},
+ {name: "Mod32", argLength: 2},
+ {name: "Mod32u", argLength: 2},
+ {name: "Mod64", argLength: 2},
+ {name: "Mod64u", argLength: 2},
+
+ {name: "And8", argLength: 2, commutative: true}, // arg0 & arg1
+ {name: "And16", argLength: 2, commutative: true},
+ {name: "And32", argLength: 2, commutative: true},
+ {name: "And64", argLength: 2, commutative: true},
+
+ {name: "Or8", argLength: 2, commutative: true}, // arg0 | arg1
+ {name: "Or16", argLength: 2, commutative: true},
+ {name: "Or32", argLength: 2, commutative: true},
+ {name: "Or64", argLength: 2, commutative: true},
+
+ {name: "Xor8", argLength: 2, commutative: true}, // arg0 ^ arg1
+ {name: "Xor16", argLength: 2, commutative: true},
+ {name: "Xor32", argLength: 2, commutative: true},
+ {name: "Xor64", argLength: 2, commutative: true},
+
+ // For shifts, AxB means the shifted value has A bits and the shift amount has B bits.
+ {name: "Lsh8x8", argLength: 2}, // arg0 << arg1
+ {name: "Lsh8x16", argLength: 2},
+ {name: "Lsh8x32", argLength: 2},
+ {name: "Lsh8x64", argLength: 2},
+ {name: "Lsh16x8", argLength: 2},
+ {name: "Lsh16x16", argLength: 2},
+ {name: "Lsh16x32", argLength: 2},
+ {name: "Lsh16x64", argLength: 2},
+ {name: "Lsh32x8", argLength: 2},
+ {name: "Lsh32x16", argLength: 2},
+ {name: "Lsh32x32", argLength: 2},
+ {name: "Lsh32x64", argLength: 2},
+ {name: "Lsh64x8", argLength: 2},
+ {name: "Lsh64x16", argLength: 2},
+ {name: "Lsh64x32", argLength: 2},
+ {name: "Lsh64x64", argLength: 2},
+
+ {name: "Rsh8x8", argLength: 2}, // arg0 >> arg1, signed
+ {name: "Rsh8x16", argLength: 2},
+ {name: "Rsh8x32", argLength: 2},
+ {name: "Rsh8x64", argLength: 2},
+ {name: "Rsh16x8", argLength: 2},
+ {name: "Rsh16x16", argLength: 2},
+ {name: "Rsh16x32", argLength: 2},
+ {name: "Rsh16x64", argLength: 2},
+ {name: "Rsh32x8", argLength: 2},
+ {name: "Rsh32x16", argLength: 2},
+ {name: "Rsh32x32", argLength: 2},
+ {name: "Rsh32x64", argLength: 2},
+ {name: "Rsh64x8", argLength: 2},
+ {name: "Rsh64x16", argLength: 2},
+ {name: "Rsh64x32", argLength: 2},
+ {name: "Rsh64x64", argLength: 2},
+
+ {name: "Rsh8Ux8", argLength: 2}, // arg0 >> arg1, unsigned
+ {name: "Rsh8Ux16", argLength: 2},
+ {name: "Rsh8Ux32", argLength: 2},
+ {name: "Rsh8Ux64", argLength: 2},
+ {name: "Rsh16Ux8", argLength: 2},
+ {name: "Rsh16Ux16", argLength: 2},
+ {name: "Rsh16Ux32", argLength: 2},
+ {name: "Rsh16Ux64", argLength: 2},
+ {name: "Rsh32Ux8", argLength: 2},
+ {name: "Rsh32Ux16", argLength: 2},
+ {name: "Rsh32Ux32", argLength: 2},
+ {name: "Rsh32Ux64", argLength: 2},
+ {name: "Rsh64Ux8", argLength: 2},
+ {name: "Rsh64Ux16", argLength: 2},
+ {name: "Rsh64Ux32", argLength: 2},
+ {name: "Rsh64Ux64", argLength: 2},
+
+ // (Left) rotates replace patterns of the form
+ // (arg0 << arg1) ^ (arg0 >> (A-arg1))
+ // matched in the front end, where A is the bit width of arg0 and result.
+ // Note that because rotates are pattern-matched from shifts,
+ // a rotate of arg1=A+k (k > 0) bits originates from
+ // (arg0 << A+k) ^ (arg0 >> -k) =
+ // 0 ^ arg0>>huge_unsigned =
+ // 0 ^ 0 = 0
+ // which is not the same as a rotation by A+k.
+ //
+ // However, in the specific case of k = 0, the result of
+ // the shift idiom is the same as the result for the
+ // rotate idiom, i.e., result=arg0.
+ // This is different from shifts, where
+ // arg0 << A is defined to be zero.
+ //
+ // Because of this, and also because the primary use case
+ // for rotates is hashing and crypto code with constant
+ // distance, rotate instructions are only substituted
+ // when arg1 is a constant between 1 and A-1, inclusive.
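+ //
+ // For example (illustrative only), Go source of the form
+ // x<<3 ^ x>>29 // x of type uint32
+ // can be rewritten to Lrot32 with the distance 3 stored in its auxint.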
+ {name: "Lrot8", argLength: 1, aux: "Int64"},
+ {name: "Lrot16", argLength: 1, aux: "Int64"},
+ {name: "Lrot32", argLength: 1, aux: "Int64"},
+ {name: "Lrot64", argLength: 1, aux: "Int64"},
+
+ // 2-input comparisons
+ {name: "Eq8", argLength: 2, commutative: true}, // arg0 == arg1
+ {name: "Eq16", argLength: 2, commutative: true},
+ {name: "Eq32", argLength: 2, commutative: true},
+ {name: "Eq64", argLength: 2, commutative: true},
+ {name: "EqPtr", argLength: 2, commutative: true},
+ {name: "EqInter", argLength: 2}, // arg0 or arg1 is nil; other cases handled by frontend
+ {name: "EqSlice", argLength: 2}, // arg0 or arg1 is nil; other cases handled by frontend
+ {name: "Eq32F", argLength: 2},
+ {name: "Eq64F", argLength: 2},
+
+ {name: "Neq8", argLength: 2, commutative: true}, // arg0 != arg1
+ {name: "Neq16", argLength: 2, commutative: true},
+ {name: "Neq32", argLength: 2, commutative: true},
+ {name: "Neq64", argLength: 2, commutative: true},
+ {name: "NeqPtr", argLength: 2, commutative: true},
+ {name: "NeqInter", argLength: 2}, // arg0 or arg1 is nil; other cases handled by frontend
+ {name: "NeqSlice", argLength: 2}, // arg0 or arg1 is nil; other cases handled by frontend
+ {name: "Neq32F", argLength: 2},
+ {name: "Neq64F", argLength: 2},
+
+ {name: "Less8", argLength: 2}, // arg0 < arg1
+ {name: "Less8U", argLength: 2},
+ {name: "Less16", argLength: 2},
+ {name: "Less16U", argLength: 2},
+ {name: "Less32", argLength: 2},
+ {name: "Less32U", argLength: 2},
+ {name: "Less64", argLength: 2},
+ {name: "Less64U", argLength: 2},
+ {name: "Less32F", argLength: 2},
+ {name: "Less64F", argLength: 2},
+
+ {name: "Leq8", argLength: 2}, // arg0 <= arg1
+ {name: "Leq8U", argLength: 2},
+ {name: "Leq16", argLength: 2},
+ {name: "Leq16U", argLength: 2},
+ {name: "Leq32", argLength: 2},
+ {name: "Leq32U", argLength: 2},
+ {name: "Leq64", argLength: 2},
+ {name: "Leq64U", argLength: 2},
+ {name: "Leq32F", argLength: 2},
+ {name: "Leq64F", argLength: 2},
+
+ {name: "Greater8", argLength: 2}, // arg0 > arg1
+ {name: "Greater8U", argLength: 2},
+ {name: "Greater16", argLength: 2},
+ {name: "Greater16U", argLength: 2},
+ {name: "Greater32", argLength: 2},
+ {name: "Greater32U", argLength: 2},
+ {name: "Greater64", argLength: 2},
+ {name: "Greater64U", argLength: 2},
+ {name: "Greater32F", argLength: 2},
+ {name: "Greater64F", argLength: 2},
+
+ {name: "Geq8", argLength: 2}, // arg0 >= arg1
+ {name: "Geq8U", argLength: 2},
+ {name: "Geq16", argLength: 2},
+ {name: "Geq16U", argLength: 2},
+ {name: "Geq32", argLength: 2},
+ {name: "Geq32U", argLength: 2},
+ {name: "Geq64", argLength: 2},
+ {name: "Geq64U", argLength: 2},
+ {name: "Geq32F", argLength: 2},
+ {name: "Geq64F", argLength: 2},
+
+ // 1-input ops
+ {name: "Not", argLength: 1}, // !arg0
+
+ {name: "Neg8", argLength: 1}, // -arg0
+ {name: "Neg16", argLength: 1},
+ {name: "Neg32", argLength: 1},
+ {name: "Neg64", argLength: 1},
+ {name: "Neg32F", argLength: 1},
+ {name: "Neg64F", argLength: 1},
+
+ {name: "Com8", argLength: 1}, // ^arg0
+ {name: "Com16", argLength: 1},
+ {name: "Com32", argLength: 1},
+ {name: "Com64", argLength: 1},
+
+ {name: "Sqrt", argLength: 1}, // sqrt(arg0), float64 only
+
+ // Data movement. Phi takes an indefinite number of arguments, so its
+ // argLength is -1 (variable).
+ {name: "Phi", argLength: -1}, // select an argument based on which predecessor block we came from
+ {name: "Copy", argLength: 1}, // output = arg0
+ // Convert converts between pointers and integers.
+ // We have a special op for this so as to not confuse GC
+ // (particularly stack maps). It takes a memory arg so it
+ // gets correctly ordered with respect to GC safepoints.
+ // arg0=ptr/int arg1=mem, output=int/ptr
+ {name: "Convert", argLength: 2},
+
+ // constants. Constant values are stored in the aux or
+ // auxint fields.
+ {name: "ConstBool", aux: "Bool"}, // auxint is 0 for false and 1 for true
+ {name: "ConstString", aux: "String"}, // value is aux.(string)
+ {name: "ConstNil", typ: "BytePtr"}, // nil pointer
+ {name: "Const8", aux: "Int8"}, // value is low 8 bits of auxint
+ {name: "Const16", aux: "Int16"}, // value is low 16 bits of auxint
+ {name: "Const32", aux: "Int32"}, // value is low 32 bits of auxint
+ {name: "Const64", aux: "Int64"}, // value is auxint
+ {name: "Const32F", aux: "Float"}, // value is math.Float64frombits(uint64(auxint))
+ {name: "Const64F", aux: "Float"}, // value is math.Float64frombits(uint64(auxint))
+ {name: "ConstInterface"}, // nil interface
+ {name: "ConstSlice"}, // nil slice
+
+ // Constant-like things
+ {name: "InitMem"}, // memory input to the function.
+ {name: "Arg", aux: "SymOff"}, // argument to the function. aux=GCNode of arg, off = offset in that arg.
+
+ // The address of a variable. arg0 is the base pointer (SB or SP, depending
+ // on whether it is a global or stack variable). The Aux field identifies the
+ // variable. It will be either an *ExternSymbol (with arg0=SB), *ArgSymbol (arg0=SP),
+ // or *AutoSymbol (arg0=SP).
+ {name: "Addr", argLength: 1, aux: "Sym"},
+
+ {name: "SP"}, // stack pointer
+ {name: "SB", typ: "Uintptr"}, // static base pointer (a.k.a. globals pointer)
+ {name: "Func", aux: "Sym"}, // entry address of a function
+
+ // Memory operations
+ {name: "Load", argLength: 2}, // Load from arg0. arg1=memory
+ {name: "Store", argLength: 3, typ: "Mem", aux: "Int64"}, // Store arg1 to arg0. arg2=memory, auxint=size. Returns memory.
+ {name: "Move", argLength: 3, aux: "Int64"}, // arg0=destptr, arg1=srcptr, arg2=mem, auxint=size. Returns memory.
+ {name: "Zero", argLength: 2, aux: "Int64"}, // arg0=destptr, arg1=mem, auxint=size. Returns memory.
+
+ // Function calls. Arguments to the call have already been written to the stack.
+ // Return values appear on the stack. The method receiver, if any, is treated
+ // as a phantom first argument.
+ {name: "ClosureCall", argLength: 3, aux: "Int64"}, // arg0=code pointer, arg1=context ptr, arg2=memory. auxint=arg size. Returns memory.
+ {name: "StaticCall", argLength: 1, aux: "SymOff"}, // call function aux.(*gc.Sym), arg0=memory. auxint=arg size. Returns memory.
+ {name: "DeferCall", argLength: 1, aux: "Int64"}, // defer call. arg0=memory, auxint=arg size. Returns memory.
+ {name: "GoCall", argLength: 1, aux: "Int64"}, // go call. arg0=memory, auxint=arg size. Returns memory.
+ {name: "InterCall", argLength: 2, aux: "Int64"}, // interface call. arg0=code pointer, arg1=memory, auxint=arg size. Returns memory.
+
+ // Conversions: sign extensions, zero (unsigned) extensions, truncations
+ {name: "SignExt8to16", argLength: 1, typ: "Int16"},
+ {name: "SignExt8to32", argLength: 1},
+ {name: "SignExt8to64", argLength: 1},
+ {name: "SignExt16to32", argLength: 1},
+ {name: "SignExt16to64", argLength: 1},
+ {name: "SignExt32to64", argLength: 1},
+ {name: "ZeroExt8to16", argLength: 1, typ: "UInt16"},
+ {name: "ZeroExt8to32", argLength: 1},
+ {name: "ZeroExt8to64", argLength: 1},
+ {name: "ZeroExt16to32", argLength: 1},
+ {name: "ZeroExt16to64", argLength: 1},
+ {name: "ZeroExt32to64", argLength: 1},
+ {name: "Trunc16to8", argLength: 1},
+ {name: "Trunc32to8", argLength: 1},
+ {name: "Trunc32to16", argLength: 1},
+ {name: "Trunc64to8", argLength: 1},
+ {name: "Trunc64to16", argLength: 1},
+ {name: "Trunc64to32", argLength: 1},
+
+ {name: "Cvt32to32F", argLength: 1},
+ {name: "Cvt32to64F", argLength: 1},
+ {name: "Cvt64to32F", argLength: 1},
+ {name: "Cvt64to64F", argLength: 1},
+ {name: "Cvt32Fto32", argLength: 1},
+ {name: "Cvt32Fto64", argLength: 1},
+ {name: "Cvt64Fto32", argLength: 1},
+ {name: "Cvt64Fto64", argLength: 1},
+ {name: "Cvt32Fto64F", argLength: 1},
+ {name: "Cvt64Fto32F", argLength: 1},
+
+ // Automatically inserted safety checks
+ {name: "IsNonNil", argLength: 1, typ: "Bool"}, // arg0 != nil
+ {name: "IsInBounds", argLength: 2, typ: "Bool"}, // 0 <= arg0 < arg1
+ {name: "IsSliceInBounds", argLength: 2, typ: "Bool"}, // 0 <= arg0 <= arg1
+ {name: "NilCheck", argLength: 2, typ: "Void"}, // arg0=ptr, arg1=mem. Panics if arg0 is nil, returns void.
+
+ // Pseudo-ops
+ {name: "GetG", argLength: 1}, // runtime.getg() (read g pointer). arg0=mem
+ {name: "GetClosurePtr"}, // get closure pointer from dedicated register
+
+ // Indexing operations
+ {name: "ArrayIndex", aux: "Int64", argLength: 1}, // arg0=array, auxint=index. Returns a[i]
+ {name: "PtrIndex", argLength: 2}, // arg0=ptr, arg1=index. Computes ptr+sizeof(*v.type)*index, where index is extended to ptrwidth type
+ {name: "OffPtr", argLength: 1, aux: "Int64"}, // arg0 + auxint (arg0 and result are pointers)
+
+ // Slices
+ {name: "SliceMake", argLength: 3}, // arg0=ptr, arg1=len, arg2=cap
+ {name: "SlicePtr", argLength: 1, typ: "BytePtr"}, // ptr(arg0)
+ {name: "SliceLen", argLength: 1}, // len(arg0)
+ {name: "SliceCap", argLength: 1}, // cap(arg0)
+
+ // Complex (part/whole)
+ {name: "ComplexMake", argLength: 2}, // arg0=real, arg1=imag
+ {name: "ComplexReal", argLength: 1}, // real(arg0)
+ {name: "ComplexImag", argLength: 1}, // imag(arg0)
+
+ // Strings
+ {name: "StringMake", argLength: 2}, // arg0=ptr, arg1=len
+ {name: "StringPtr", argLength: 1}, // ptr(arg0)
+ {name: "StringLen", argLength: 1}, // len(arg0)
+
+ // Interfaces
+ {name: "IMake", argLength: 2}, // arg0=itab, arg1=data
+ {name: "ITab", argLength: 1, typ: "BytePtr"}, // arg0=interface, returns itable field
+ {name: "IData", argLength: 1}, // arg0=interface, returns data field
+
+ // Structs
+ {name: "StructMake0"}, // Returns struct with 0 fields.
+ {name: "StructMake1", argLength: 1}, // arg0=field0. Returns struct.
+ {name: "StructMake2", argLength: 2}, // arg0,arg1=field0,field1. Returns struct.
+ {name: "StructMake3", argLength: 3}, // arg0..2=field0..2. Returns struct.
+ {name: "StructMake4", argLength: 4}, // arg0..3=field0..3. Returns struct.
+ {name: "StructSelect", argLength: 1, aux: "Int64"}, // arg0=struct, auxint=field index. Returns the auxint'th field.
+
+ // Spill&restore ops for the register allocator. These are
+ // semantically identical to OpCopy; they do not take/return
+ // stores like regular memory ops do. We can get away without memory
+ // args because we know there is no aliasing of spill slots on the stack.
+ {name: "StoreReg", argLength: 1},
+ {name: "LoadReg", argLength: 1},
+
+ // Used during ssa construction. Like Copy, but the arg has not been specified yet.
+ {name: "FwdRef"},
+
+ // Unknown value. Used for Values whose contents don't matter because they are dead code.
+ {name: "Unknown"},
+
+ {name: "VarDef", argLength: 1, aux: "Sym", typ: "Mem"}, // aux is a *gc.Node of a variable that is about to be initialized. arg0=mem, returns mem
+ {name: "VarKill", argLength: 1, aux: "Sym"}, // aux is a *gc.Node of a variable that is known to be dead. arg0=mem, returns mem
+ {name: "VarLive", argLength: 1, aux: "Sym"}, // aux is a *gc.Node of a variable that must be kept live. arg0=mem, returns mem
+}
+
+// kind control successors implicit exit
+// ----------------------------------------------------------
+// Exit return mem [] yes
+// Ret return mem [] yes
+// RetJmp return mem [] yes
+// Plain nil [next]
+// If a boolean Value [then, else]
+// Call mem [next] yes (control opcode should be OpCall or OpStaticCall)
+// Check void [next] yes (control opcode should be Op{Lowered}NilCheck)
+// First nil [always,never]
+
+var genericBlocks = []blockData{
+ {name: "Plain"}, // a single successor
+ {name: "If"}, // 2 successors, if control goto Succs[0] else goto Succs[1]
+ {name: "Call"}, // 1 successor, control is call op (of memory type)
+ {name: "Check"}, // 1 successor, control is nilcheck op (of void type)
+ {name: "Ret"}, // no successors, control value is memory result
+ {name: "RetJmp"}, // no successors, jumps to b.Aux.(*gc.Sym)
+ {name: "Exit"}, // no successors, control value generates a panic
+
+ // transient block states used for dead code removal
+ {name: "First"}, // 2 successors, always takes the first one (second is dead)
+ {name: "Dead"}, // no successors; determined to be dead but not yet removed
+}
+
+func init() {
+ archs = append(archs, arch{"generic", genericOps, genericBlocks, nil})
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// The gen command generates Go code (in the parent directory) for all
+// the architecture-specific opcodes, blocks, and rewrites.
+
+package main
+
+import (
+ "bytes"
+ "flag"
+ "fmt"
+ "go/format"
+ "io/ioutil"
+ "log"
+ "regexp"
+ "sort"
+)
+
+type arch struct {
+ name string
+ ops []opData
+ blocks []blockData
+ regnames []string
+}
+
+type opData struct {
+ name string
+ reg regInfo
+ asm string
+ typ string // default result type
+ aux string
+ rematerializeable bool
+ argLength int32 // number of arguments; -1 means this operation takes a variable number of arguments
+ commutative bool // this operation is commutative (e.g. addition)
+}
+
+type blockData struct {
+ name string
+}
+
+type regInfo struct {
+ inputs []regMask
+ clobbers regMask
+ outputs []regMask
+}
+
+type regMask uint64
+
+func (a arch) regMaskComment(r regMask) string {
+ var buf bytes.Buffer
+ for i := uint64(0); r != 0; i++ {
+ if r&1 != 0 {
+ if buf.Len() == 0 {
+ buf.WriteString(" //")
+ }
+ buf.WriteString(" ")
+ buf.WriteString(a.regnames[i])
+ }
+ r >>= 1
+ }
+ return buf.String()
+}
+
+var archs []arch
+
+func main() {
+ flag.Parse()
+ genOp()
+ genLower()
+}
+
+func genOp() {
+ w := new(bytes.Buffer)
+ fmt.Fprintf(w, "// autogenerated: do not edit!\n")
+ fmt.Fprintf(w, "// generated from gen/*Ops.go\n")
+ fmt.Fprintln(w)
+ fmt.Fprintln(w, "package ssa")
+
+ fmt.Fprintln(w, "import \"cmd/internal/obj/x86\"")
+
+ // generate Block* declarations
+ fmt.Fprintln(w, "const (")
+ fmt.Fprintln(w, "BlockInvalid BlockKind = iota")
+ for _, a := range archs {
+ fmt.Fprintln(w)
+ for _, d := range a.blocks {
+ fmt.Fprintf(w, "Block%s%s\n", a.Name(), d.name)
+ }
+ }
+ fmt.Fprintln(w, ")")
+
+ // generate block kind string method
+ fmt.Fprintln(w, "var blockString = [...]string{")
+ fmt.Fprintln(w, "BlockInvalid:\"BlockInvalid\",")
+ for _, a := range archs {
+ fmt.Fprintln(w)
+ for _, b := range a.blocks {
+ fmt.Fprintf(w, "Block%s%s:\"%s\",\n", a.Name(), b.name, b.name)
+ }
+ }
+ fmt.Fprintln(w, "}")
+ fmt.Fprintln(w, "func (k BlockKind) String() string {return blockString[k]}")
+
+ // generate Op* declarations
+ fmt.Fprintln(w, "const (")
+ fmt.Fprintln(w, "OpInvalid Op = iota")
+ for _, a := range archs {
+ fmt.Fprintln(w)
+ for _, v := range a.ops {
+ fmt.Fprintf(w, "Op%s%s\n", a.Name(), v.name)
+ }
+ }
+ fmt.Fprintln(w, ")")
+
+ // generate OpInfo table
+ fmt.Fprintln(w, "var opcodeTable = [...]opInfo{")
+ fmt.Fprintln(w, " { name: \"OpInvalid\" },")
+ for _, a := range archs {
+ fmt.Fprintln(w)
+ for _, v := range a.ops {
+ fmt.Fprintln(w, "{")
+ fmt.Fprintf(w, "name:\"%s\",\n", v.name)
+
+ // flags
+ if v.aux != "" {
+ fmt.Fprintf(w, "auxType: aux%s,\n", v.aux)
+ }
+ fmt.Fprintf(w, "argLen: %d,\n", v.argLength)
+
+ if v.rematerializeable {
+ if v.reg.clobbers != 0 {
+ log.Fatalf("%s is rematerializeable and clobbers registers", v.name)
+ }
+ fmt.Fprintln(w, "rematerializeable: true,")
+ }
+ if v.commutative {
+ fmt.Fprintln(w, "commutative: true,")
+ }
+ if a.name == "generic" {
+ fmt.Fprintln(w, "generic:true,")
+ fmt.Fprintln(w, "},") // close op
+ // generic ops have no reg info or asm
+ continue
+ }
+ if v.asm != "" {
+ fmt.Fprintf(w, "asm: x86.A%s,\n", v.asm)
+ }
+ fmt.Fprintln(w, "reg:regInfo{")
+
+ // Compute input allocation order. We allocate from the
+ // most to the least constrained input. This order guarantees
+ // that we will always be able to find a register.
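+ // For example (illustrative), if input 0 may live in any of 16
+ // registers but input 1 must be in a single specific register,
+ // input 1 sorts first and is allocated before the unconstrained input.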
+ var s []intPair
+ for i, r := range v.reg.inputs {
+ if r != 0 {
+ s = append(s, intPair{countRegs(r), i})
+ }
+ }
+ if len(s) > 0 {
+ sort.Sort(byKey(s))
+ fmt.Fprintln(w, "inputs: []inputInfo{")
+ for _, p := range s {
+ r := v.reg.inputs[p.val]
+ fmt.Fprintf(w, "{%d,%d},%s\n", p.val, r, a.regMaskComment(r))
+ }
+ fmt.Fprintln(w, "},")
+ }
+ if v.reg.clobbers > 0 {
+ fmt.Fprintf(w, "clobbers: %d,%s\n", v.reg.clobbers, a.regMaskComment(v.reg.clobbers))
+ }
+ // reg outputs
+ if len(v.reg.outputs) > 0 {
+ fmt.Fprintln(w, "outputs: []regMask{")
+ for _, r := range v.reg.outputs {
+ fmt.Fprintf(w, "%d,%s\n", r, a.regMaskComment(r))
+ }
+ fmt.Fprintln(w, "},")
+ }
+ fmt.Fprintln(w, "},") // close reg info
+ fmt.Fprintln(w, "},") // close op
+ }
+ }
+ fmt.Fprintln(w, "}")
+
+ fmt.Fprintln(w, "func (o Op) Asm() int {return opcodeTable[o].asm}")
+
+ // generate op string method
+ fmt.Fprintln(w, "func (o Op) String() string {return opcodeTable[o].name }")
+
+ // gofmt result
+ b := w.Bytes()
+ var err error
+ b, err = format.Source(b)
+ if err != nil {
+ fmt.Printf("%s\n", w.Bytes())
+ panic(err)
+ }
+
+ err = ioutil.WriteFile("../opGen.go", b, 0666)
+ if err != nil {
+ log.Fatalf("can't write output: %v\n", err)
+ }
+
+ // Check that ../gc/ssa.go handles all the arch-specific opcodes.
+ // This is very much a hack, but it is better than nothing.
+ ssa, err := ioutil.ReadFile("../../gc/ssa.go")
+ if err != nil {
+ log.Fatalf("can't read ../../gc/ssa.go: %v", err)
+ }
+ for _, a := range archs {
+ if a.name == "generic" {
+ continue
+ }
+ for _, v := range a.ops {
+ pattern := fmt.Sprintf("\\Wssa[.]Op%s%s\\W", a.name, v.name)
+ match, err := regexp.Match(pattern, ssa)
+ if err != nil {
+ log.Fatalf("bad opcode regexp %s: %v", pattern, err)
+ }
+ if !match {
+ log.Fatalf("Op%s%s has no code generation in ../../gc/ssa.go", a.name, v.name)
+ }
+ }
+ }
+}
+
+// Name returns the name of the architecture for use in Op* and Block* enumerations.
+func (a arch) Name() string {
+ s := a.name
+ if s == "generic" {
+ s = ""
+ }
+ return s
+}
+
+func genLower() {
+ for _, a := range archs {
+ genRules(a)
+ }
+}
+
+// countRegs returns the number of set bits in the register mask.
+func countRegs(r regMask) int {
+ n := 0
+ for r != 0 {
+ n += int(r & 1)
+ r >>= 1
+ }
+ return n
+}
+
+// for sorting a pair of integers by key
+type intPair struct {
+ key, val int
+}
+type byKey []intPair
+
+func (a byKey) Len() int { return len(a) }
+func (a byKey) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
+func (a byKey) Less(i, j int) bool { return a[i].key < a[j].key }
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This program generates Go code that applies rewrite rules to a Value.
+// The generated code implements a function of type func(v *Value) bool
+// which returns true iff it did something.
+// Ideas stolen from Swift: http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-2000-2.html
+
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "flag"
+ "fmt"
+ "go/format"
+ "io"
+ "io/ioutil"
+ "log"
+ "os"
+ "regexp"
+ "sort"
+ "strings"
+)
+
+// rule syntax:
+// sexpr [&& extra conditions] -> [@block] sexpr
+//
+// sexpr are s-expressions (lisp-like parenthesized groupings)
+// sexpr ::= (opcode sexpr*)
+// | variable
+// | <type>
+// | [auxint]
+// | {aux}
+//
+// aux ::= variable | {code}
+// type ::= variable | {code}
+// variable ::= some token
+// opcode ::= one of the opcodes from ../op.go (without the Op prefix)
+
+// The extra condition is just a chunk of Go that evaluates to a boolean. It
+// may use variables declared in the matching sexpr. The variable "v" is
+// predefined to be the value matched by the entire rule.
+
+// If multiple rules match, the first one in file order is selected.
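+//
+// For example, a complete rule of this shape (illustrative only) is
+// (Add64 (Const64 [0]) x) -> x
+// which matches an Add64 whose first argument is a zero constant and
+// rewrites the whole value to its other argument x.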
+
+var (
+ genLog = flag.Bool("log", false, "generate code that logs; for debugging only")
+)
+
+type Rule struct {
+ rule string
+ lineno int
+}
+
+func (r Rule) String() string {
+ return fmt.Sprintf("rule %q at line %d", r.rule, r.lineno)
+}
+
+// parse returns the matching part of the rule, additional conditions, and the result.
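+// For example (illustrative), the rule
+// (Neg8 x) && cond -> (Sub8 (Const8 [0]) x)
+// parses into match "(Neg8 x)", cond "cond", and result
+// "(Sub8 (Const8 [0]) x)".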
+func (r Rule) parse() (match, cond, result string) {
+ s := strings.Split(r.rule, "->")
+ if len(s) != 2 {
+ log.Fatalf("no arrow in %s", r)
+ }
+ match = strings.TrimSpace(s[0])
+ result = strings.TrimSpace(s[1])
+ cond = ""
+ if i := strings.Index(match, "&&"); i >= 0 {
+ cond = strings.TrimSpace(match[i+2:])
+ match = strings.TrimSpace(match[:i])
+ }
+ return match, cond, result
+}
+
+func genRules(arch arch) {
+ // Open input file.
+ text, err := os.Open(arch.name + ".rules")
+ if err != nil {
+ log.Fatalf("can't read rule file: %v", err)
+ }
+
+ // oprules contains a list of rules for each block and opcode
+ blockrules := map[string][]Rule{}
+ oprules := map[string][]Rule{}
+
+ // read rule file
+ scanner := bufio.NewScanner(text)
+ rule := ""
+ var lineno int
+ for scanner.Scan() {
+ lineno++
+ line := scanner.Text()
+ if i := strings.Index(line, "//"); i >= 0 {
+ // Remove comments. Note that this isn't string-aware, so it
+ // will truncate lines containing // inside string literals. Oh well.
+ line = line[:i]
+ }
+ rule += " " + line
+ rule = strings.TrimSpace(rule)
+ if rule == "" {
+ continue
+ }
+ if !strings.Contains(rule, "->") {
+ continue
+ }
+ if strings.HasSuffix(rule, "->") {
+ continue
+ }
+ if unbalanced(rule) {
+ continue
+ }
+ op := strings.Split(rule, " ")[0][1:]
+ if op[len(op)-1] == ')' {
+ op = op[:len(op)-1] // rule has only opcode, e.g. (ConstNil) -> ...
+ }
+ if isBlock(op, arch) {
+ blockrules[op] = append(blockrules[op], Rule{rule: rule, lineno: lineno})
+ } else {
+ oprules[op] = append(oprules[op], Rule{rule: rule, lineno: lineno})
+ }
+ rule = ""
+ }
+ if err := scanner.Err(); err != nil {
+ log.Fatalf("scanner failed: %v\n", err)
+ }
+ if unbalanced(rule) {
+ log.Fatalf("unbalanced rule at line %d: %v\n", lineno, rule)
+ }
+
+ // Order all the ops.
+ var ops []string
+ for op := range oprules {
+ ops = append(ops, op)
+ }
+ sort.Strings(ops)
+
+ // Start output buffer, write header.
+ w := new(bytes.Buffer)
+ fmt.Fprintf(w, "// autogenerated from gen/%s.rules: do not edit!\n", arch.name)
+ fmt.Fprintln(w, "// generated with: cd gen; go run *.go")
+ fmt.Fprintln(w)
+ fmt.Fprintln(w, "package ssa")
+ if *genLog {
+ fmt.Fprintln(w, "import \"fmt\"")
+ }
+ fmt.Fprintln(w, "import \"math\"")
+ fmt.Fprintln(w, "var _ = math.MinInt8 // in case not otherwise used")
+
+ // Main rewrite routine is a switch on v.Op.
+ fmt.Fprintf(w, "func rewriteValue%s(v *Value, config *Config) bool {\n", arch.name)
+ fmt.Fprintf(w, "switch v.Op {\n")
+ for _, op := range ops {
+ fmt.Fprintf(w, "case %s:\n", opName(op, arch))
+ fmt.Fprintf(w, "return rewriteValue%s_%s(v, config)\n", arch.name, opName(op, arch))
+ }
+ fmt.Fprintf(w, "}\n")
+ fmt.Fprintf(w, "return false\n")
+ fmt.Fprintf(w, "}\n")
+
+ // Generate a routine per op. Note that we don't make one giant routine
+ // because it is too big for some compilers.
+ for _, op := range ops {
+ fmt.Fprintf(w, "func rewriteValue%s_%s(v *Value, config *Config) bool {\n", arch.name, opName(op, arch))
+ fmt.Fprintln(w, "b := v.Block")
+ fmt.Fprintln(w, "_ = b")
+ for _, rule := range oprules[op] {
+ match, cond, result := rule.parse()
+ fmt.Fprintf(w, "// match: %s\n", match)
+ fmt.Fprintf(w, "// cond: %s\n", cond)
+ fmt.Fprintf(w, "// result: %s\n", result)
+
+ fmt.Fprintf(w, "for {\n")
+ genMatch(w, arch, match)
+
+ if cond != "" {
+ fmt.Fprintf(w, "if !(%s) {\nbreak\n}\n", cond)
+ }
+
+ genResult(w, arch, result)
+ if *genLog {
+ fmt.Fprintf(w, "fmt.Println(\"rewrite %s.rules:%d\")\n", arch.name, rule.lineno)
+ }
+ fmt.Fprintf(w, "return true\n")
+
+ fmt.Fprintf(w, "}\n")
+ }
+ fmt.Fprintf(w, "return false\n")
+ fmt.Fprintf(w, "}\n")
+ }
+
+ // Generate block rewrite function. There are only a few block types
+ // so we can make this one function with a switch.
+ fmt.Fprintf(w, "func rewriteBlock%s(b *Block) bool {\n", arch.name)
+ fmt.Fprintf(w, "switch b.Kind {\n")
+ ops = nil
+ for op := range blockrules {
+ ops = append(ops, op)
+ }
+ sort.Strings(ops)
+ for _, op := range ops {
+ fmt.Fprintf(w, "case %s:\n", blockName(op, arch))
+ for _, rule := range blockrules[op] {
+ match, cond, result := rule.parse()
+ fmt.Fprintf(w, "// match: %s\n", match)
+ fmt.Fprintf(w, "// cond: %s\n", cond)
+ fmt.Fprintf(w, "// result: %s\n", result)
+
+ fmt.Fprintf(w, "for {\n")
+
+ s := split(match[1 : len(match)-1]) // remove parens, then split
+
+ // check match of control value
+ if s[1] != "nil" {
+ fmt.Fprintf(w, "v := b.Control\n")
+ genMatch0(w, arch, s[1], "v", map[string]string{}, false)
+ }
+
+ // assign successor names
+ succs := s[2:]
+ for i, a := range succs {
+ if a != "_" {
+ fmt.Fprintf(w, "%s := b.Succs[%d]\n", a, i)
+ }
+ }
+
+ if cond != "" {
+ fmt.Fprintf(w, "if !(%s) {\nbreak\n}\n", cond)
+ }
+
+ // Rule matches. Generate result.
+ t := split(result[1 : len(result)-1]) // remove parens, then split
+ newsuccs := t[2:]
+
+ // Check if newsuccs is the same set as succs.
+ m := map[string]bool{}
+ for _, succ := range succs {
+ if m[succ] {
+ log.Fatalf("can't have a repeat successor name %s in %s", succ, rule)
+ }
+ m[succ] = true
+ }
+ for _, succ := range newsuccs {
+ if !m[succ] {
+ log.Fatalf("unknown successor %s in %s", succ, rule)
+ }
+ delete(m, succ)
+ }
+ if len(m) != 0 {
+ log.Fatalf("unmatched successors %v in %s", m, rule)
+ }
+
+ // Modify predecessor lists for no-longer-reachable blocks
+ for succ := range m {
+ fmt.Fprintf(w, "b.Func.removePredecessor(b, %s)\n", succ)
+ }
+
+ fmt.Fprintf(w, "b.Kind = %s\n", blockName(t[0], arch))
+ if t[1] == "nil" {
+ fmt.Fprintf(w, "b.Control = nil\n")
+ } else {
+ fmt.Fprintf(w, "b.Control = %s\n", genResult0(w, arch, t[1], new(int), false, false))
+ }
+ if len(newsuccs) < len(succs) {
+ fmt.Fprintf(w, "b.Succs = b.Succs[:%d]\n", len(newsuccs))
+ }
+ for i, a := range newsuccs {
+ fmt.Fprintf(w, "b.Succs[%d] = %s\n", i, a)
+ }
+ // Update branch prediction
+ switch {
+ case len(newsuccs) != 2:
+ fmt.Fprintln(w, "b.Likely = BranchUnknown")
+ case newsuccs[0] == succs[0] && newsuccs[1] == succs[1]:
+ // unchanged
+ case newsuccs[0] == succs[1] && newsuccs[1] == succs[0]:
+ // flipped
+ fmt.Fprintln(w, "b.Likely *= -1")
+ default:
+ // unknown
+ fmt.Fprintln(w, "b.Likely = BranchUnknown")
+ }
+
+ if *genLog {
+ fmt.Fprintf(w, "fmt.Println(\"rewrite %s.rules:%d\")\n", arch.name, rule.lineno)
+ }
+ fmt.Fprintf(w, "return true\n")
+
+ fmt.Fprintf(w, "}\n")
+ }
+ }
+ fmt.Fprintf(w, "}\n")
+ fmt.Fprintf(w, "return false\n")
+ fmt.Fprintf(w, "}\n")
+
+ // gofmt result
+ b := w.Bytes()
+ src, err := format.Source(b)
+ if err != nil {
+ fmt.Printf("%s\n", b)
+ panic(err)
+ }
+
+ // Write to file
+ err = ioutil.WriteFile("../rewrite"+arch.name+".go", src, 0666)
+ if err != nil {
+ log.Fatalf("can't write output: %v\n", err)
+ }
+}
+
+func genMatch(w io.Writer, arch arch, match string) {
+ genMatch0(w, arch, match, "v", map[string]string{}, true)
+}
+
+func genMatch0(w io.Writer, arch arch, match, v string, m map[string]string, top bool) {
+ if match[0] != '(' {
+ if _, ok := m[match]; ok {
+ // variable already has a definition. Check whether
+ // the old definition and the new definition match.
+ // For example, (add x x). Equality is just pointer equality
+ // on Values (so cse is important to do before lowering).
+ fmt.Fprintf(w, "if %s != %s {\nbreak\n}\n", v, match)
+ return
+ }
+ // remember that this variable references the given value
+ if match == "_" {
+ return
+ }
+ m[match] = v
+ fmt.Fprintf(w, "%s := %s\n", match, v)
+ return
+ }
+
+ // split body up into regions. Split by spaces/tabs, except those
+ // contained in (), {}, [], or <>.
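+ // For example (illustrative), splitting the body of
+ // (Add64 (Const64 [0]) x)
+ // yields the regions "Add64", "(Const64 [0])", and "x".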
+ s := split(match[1 : len(match)-1]) // remove parens, then split
+
+ // check op
+ if !top {
+ fmt.Fprintf(w, "if %s.Op != %s {\nbreak\n}\n", v, opName(s[0], arch))
+ }
+
+ // check type/aux/args
+ argnum := 0
+ for _, a := range s[1:] {
+ if a[0] == '<' {
+ // type restriction
+ t := a[1 : len(a)-1] // remove <>
+ if !isVariable(t) {
+ // code. We must match the results of this code.
+ fmt.Fprintf(w, "if %s.Type != %s {\nbreak\n}\n", v, t)
+ } else {
+ // variable
+ if u, ok := m[t]; ok {
+ // must match previous variable
+ fmt.Fprintf(w, "if %s.Type != %s {\nbreak\n}\n", v, u)
+ } else {
+ m[t] = v + ".Type"
+ fmt.Fprintf(w, "%s := %s.Type\n", t, v)
+ }
+ }
+ } else if a[0] == '[' {
+ // auxint restriction
+ x := a[1 : len(a)-1] // remove []
+ if !isVariable(x) {
+ // code
+ fmt.Fprintf(w, "if %s.AuxInt != %s {\nbreak\n}\n", v, x)
+ } else {
+ // variable
+ if y, ok := m[x]; ok {
+ fmt.Fprintf(w, "if %s.AuxInt != %s {\nbreak\n}\n", v, y)
+ } else {
+ m[x] = v + ".AuxInt"
+ fmt.Fprintf(w, "%s := %s.AuxInt\n", x, v)
+ }
+ }
+ } else if a[0] == '{' {
+ // aux restriction
+ x := a[1 : len(a)-1] // remove {}
+ if !isVariable(x) {
+ // code
+ fmt.Fprintf(w, "if %s.Aux != %s {\nbreak\n}\n", v, x)
+ } else {
+ // variable
+ if y, ok := m[x]; ok {
+ fmt.Fprintf(w, "if %s.Aux != %s {\nbreak\n}\n", v, y)
+ } else {
+ m[x] = v + ".Aux"
+ fmt.Fprintf(w, "%s := %s.Aux\n", x, v)
+ }
+ }
+ } else {
+ // variable or sexpr
+ genMatch0(w, arch, a, fmt.Sprintf("%s.Args[%d]", v, argnum), m, false)
+ argnum++
+ }
+ }
+
+ variableLength := false
+ for _, op := range genericOps {
+ if op.name == s[0] && op.argLength == -1 {
+ variableLength = true
+ break
+ }
+ }
+ for _, op := range arch.ops {
+ if op.name == s[0] && op.argLength == -1 {
+ variableLength = true
+ break
+ }
+ }
+ if variableLength {
+ fmt.Fprintf(w, "if len(%s.Args) != %d {\nbreak\n}\n", v, argnum)
+ }
+}
+
+func genResult(w io.Writer, arch arch, result string) {
+ move := false
+ if result[0] == '@' {
+ // parse @block directive
+ s := strings.SplitN(result[1:], " ", 2)
+ fmt.Fprintf(w, "b = %s\n", s[0])
+ result = s[1]
+ move = true
+ }
+ genResult0(w, arch, result, new(int), true, move)
+}
+func genResult0(w io.Writer, arch arch, result string, alloc *int, top, move bool) string {
+ // TODO: when generating a constant result, use f.constVal to avoid
+ // introducing copies just to clean them up again.
+ if result[0] != '(' {
+ // variable
+ if top {
+ // It is not safe in general to move a variable between blocks
+ // (and particularly not a phi node).
+ // Introduce a copy.
+ fmt.Fprintf(w, "v.reset(OpCopy)\n")
+ fmt.Fprintf(w, "v.Type = %s.Type\n", result)
+ fmt.Fprintf(w, "v.AddArg(%s)\n", result)
+ }
+ return result
+ }
+
+ s := split(result[1 : len(result)-1]) // remove parens, then split
+
+ // Find the type of the variable.
+ var opType string
+ var typeOverride bool
+ for _, a := range s[1:] {
+ if a[0] == '<' {
+ // type restriction
+ opType = a[1 : len(a)-1] // remove <>
+ typeOverride = true
+ break
+ }
+ }
+ if opType == "" {
+ // find default type, if any
+ for _, op := range arch.ops {
+ if op.name == s[0] && op.typ != "" {
+ opType = typeName(op.typ)
+ break
+ }
+ }
+ }
+ if opType == "" {
+ for _, op := range genericOps {
+ if op.name == s[0] && op.typ != "" {
+ opType = typeName(op.typ)
+ break
+ }
+ }
+ }
+ var v string
+ if top && !move {
+ v = "v"
+ fmt.Fprintf(w, "v.reset(%s)\n", opName(s[0], arch))
+ if typeOverride {
+ fmt.Fprintf(w, "v.Type = %s\n", opType)
+ }
+ } else {
+ if opType == "" {
+ log.Fatalf("sub-expression %s (op=%s) must have a type", result, s[0])
+ }
+ v = fmt.Sprintf("v%d", *alloc)
+ *alloc++
+ fmt.Fprintf(w, "%s := b.NewValue0(v.Line, %s, %s)\n", v, opName(s[0], arch), opType)
+ if move {
+ // Rewrite original into a copy
+ fmt.Fprintf(w, "v.reset(OpCopy)\n")
+ fmt.Fprintf(w, "v.AddArg(%s)\n", v)
+ }
+ }
+ for _, a := range s[1:] {
+ if a[0] == '<' {
+ // type restriction, handled above
+ } else if a[0] == '[' {
+ // auxint restriction
+ x := a[1 : len(a)-1] // remove []
+ fmt.Fprintf(w, "%s.AuxInt = %s\n", v, x)
+ } else if a[0] == '{' {
+ // aux restriction
+ x := a[1 : len(a)-1] // remove {}
+ fmt.Fprintf(w, "%s.Aux = %s\n", v, x)
+ } else {
+ // regular argument (sexpr or variable)
+ x := genResult0(w, arch, a, alloc, false, move)
+ fmt.Fprintf(w, "%s.AddArg(%s)\n", v, x)
+ }
+ }
+
+ return v
+}
+
+func split(s string) []string {
+ var r []string
+
+outer:
+ for s != "" {
+ d := 0 // depth of ({[<
+ var open, close byte // opening and closing markers ({[< or )}]>
+ nonsp := false // found a non-space char so far
+ for i := 0; i < len(s); i++ {
+ switch {
+ case d == 0 && s[i] == '(':
+ open, close = '(', ')'
+ d++
+ case d == 0 && s[i] == '<':
+ open, close = '<', '>'
+ d++
+ case d == 0 && s[i] == '[':
+ open, close = '[', ']'
+ d++
+ case d == 0 && s[i] == '{':
+ open, close = '{', '}'
+ d++
+ case d == 0 && (s[i] == ' ' || s[i] == '\t'):
+ if nonsp {
+ r = append(r, strings.TrimSpace(s[:i]))
+ s = s[i:]
+ continue outer
+ }
+ case d > 0 && s[i] == open:
+ d++
+ case d > 0 && s[i] == close:
+ d--
+ default:
+ nonsp = true
+ }
+ }
+ if d != 0 {
+ panic("imbalanced expression: " + s)
+ }
+ if nonsp {
+ r = append(r, strings.TrimSpace(s))
+ }
+ break
+ }
+ return r
+}
+
+// isBlock reports whether name is a block opcode.
+func isBlock(name string, arch arch) bool {
+ for _, b := range genericBlocks {
+ if b.name == name {
+ return true
+ }
+ }
+ for _, b := range arch.blocks {
+ if b.name == name {
+ return true
+ }
+ }
+ return false
+}
+
+// opName converts from an op name specified in a rule file to an Op enum.
+// If the name matches a generic op, it returns "Op" plus the specified name.
+// Otherwise, returns "Op" plus arch name plus op name.
+func opName(name string, arch arch) string {
+ for _, op := range genericOps {
+ if op.name == name {
+ return "Op" + name
+ }
+ }
+ return "Op" + arch.name + name
+}
+
+func blockName(name string, arch arch) string {
+ for _, b := range genericBlocks {
+ if b.name == name {
+ return "Block" + name
+ }
+ }
+ return "Block" + arch.name + name
+}
+
+// typeName returns the string to use to generate a type.
+func typeName(typ string) string {
+ switch typ {
+ case "Flags", "Mem", "Void", "Int128":
+ return "Type" + typ
+ default:
+ return "config.fe.Type" + typ + "()"
+ }
+}
+
+// unbalanced reports whether s contains an unequal number of ( and ) characters.
+func unbalanced(s string) bool {
+ var left, right int
+ for _, c := range s {
+ if c == '(' {
+ left++
+ }
+ if c == ')' {
+ right++
+ }
+ }
+ return left != right
+}
+
+// isVariable reports whether s is a single Go identifier (letters, digits, and underscores).
+func isVariable(s string) bool {
+ b, err := regexp.MatchString("^[A-Za-z_][A-Za-z_0-9]*$", s)
+ if err != nil {
+ panic("bad variable regexp")
+ }
+ return b
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "bytes"
+ "fmt"
+ "html"
+ "io"
+ "os"
+)
+
+type HTMLWriter struct {
+ Logger
+ *os.File
+}
+
+func NewHTMLWriter(path string, logger Logger, funcname string) *HTMLWriter {
+ out, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
+ if err != nil {
+ logger.Fatalf(0, "%v", err)
+ }
+ html := HTMLWriter{File: out, Logger: logger}
+ html.start(funcname)
+ return &html
+}
+
+func (w *HTMLWriter) start(name string) {
+ if w == nil {
+ return
+ }
+ w.WriteString("<html>")
+ w.WriteString(`<head>
+<style>
+
+#helplink {
+ margin-bottom: 15px;
+ display: block;
+ margin-top: -15px;
+}
+
+#help {
+ display: none;
+}
+
+.stats {
+ font-size: 60%;
+}
+
+table {
+ border: 1px solid black;
+ table-layout: fixed;
+ width: 300px;
+}
+
+th, td {
+ border: 1px solid black;
+ overflow: hidden;
+ width: 400px;
+ vertical-align: top;
+ padding: 5px;
+}
+
+li {
+ list-style-type: none;
+}
+
+li.ssa-long-value {
+ text-indent: -2em; /* indent wrapped lines */
+}
+
+li.ssa-value-list {
+ display: inline;
+}
+
+li.ssa-start-block {
+ padding: 0;
+ margin: 0;
+}
+
+li.ssa-end-block {
+ padding: 0;
+ margin: 0;
+}
+
+ul.ssa-print-func {
+ padding-left: 0;
+}
+
+dl.ssa-gen {
+ padding-left: 0;
+}
+
+dt.ssa-prog-src {
+ padding: 0;
+ margin: 0;
+ float: left;
+ width: 4em;
+}
+
+dd.ssa-prog {
+ padding: 0;
+ margin-right: 0;
+ margin-left: 4em;
+}
+
+.dead-value {
+ color: gray;
+}
+
+.dead-block {
+ opacity: 0.5;
+}
+
+.depcycle {
+ font-style: italic;
+}
+
+.highlight-yellow { background-color: yellow; }
+.highlight-aquamarine { background-color: aquamarine; }
+.highlight-coral { background-color: coral; }
+.highlight-lightpink { background-color: lightpink; }
+.highlight-lightsteelblue { background-color: lightsteelblue; }
+.highlight-palegreen { background-color: palegreen; }
+.highlight-powderblue { background-color: powderblue; }
+.highlight-lightgray { background-color: lightgray; }
+
+.outline-blue { outline: blue solid 2px; }
+.outline-red { outline: red solid 2px; }
+.outline-blueviolet { outline: blueviolet solid 2px; }
+.outline-darkolivegreen { outline: darkolivegreen solid 2px; }
+.outline-fuchsia { outline: fuchsia solid 2px; }
+.outline-sienna { outline: sienna solid 2px; }
+.outline-gold { outline: gold solid 2px; }
+
+</style>
+
+<script type="text/javascript">
+// ordered list of all available highlight colors
+var highlights = [
+ "highlight-yellow",
+ "highlight-aquamarine",
+ "highlight-coral",
+ "highlight-lightpink",
+ "highlight-lightsteelblue",
+ "highlight-palegreen",
+ "highlight-lightgray"
+];
+
+// state: which value is highlighted this color?
+var highlighted = {};
+for (var i = 0; i < highlights.length; i++) {
+ highlighted[highlights[i]] = "";
+}
+
+// ordered list of all available outline colors
+var outlines = [
+ "outline-blue",
+ "outline-red",
+ "outline-blueviolet",
+ "outline-darkolivegreen",
+ "outline-fuchsia",
+ "outline-sienna",
+ "outline-gold"
+];
+
+// state: which value is outlined this color?
+var outlined = {};
+for (var i = 0; i < outlines.length; i++) {
+ outlined[outlines[i]] = "";
+}
+
+window.onload = function() {
+ var ssaElemClicked = function(elem, event, selections, selected) {
+ event.stopPropagation()
+
+ // TODO: pushState with updated state and read it on page load,
+ // so that state can survive across reloads
+
+ // find all values with the same name
+ var c = elem.classList.item(0);
+ var x = document.getElementsByClassName(c);
+
+ // if selected, remove selections from all of them
+ // otherwise, attempt to add
+
+ var remove = "";
+ for (var i = 0; i < selections.length; i++) {
+ var color = selections[i];
+ if (selected[color] == c) {
+ remove = color;
+ break;
+ }
+ }
+
+ if (remove != "") {
+ for (var i = 0; i < x.length; i++) {
+ x[i].classList.remove(remove);
+ }
+ selected[remove] = "";
+ return;
+ }
+
+ // we're adding a selection
+ // find first available color
+ var avail = "";
+ for (var i = 0; i < selections.length; i++) {
+ var color = selections[i];
+ if (selected[color] == "") {
+ avail = color;
+ break;
+ }
+ }
+ if (avail == "") {
+ alert("out of selection colors; go add more");
+ return;
+ }
+
+ // set that as the selection
+ for (var i = 0; i < x.length; i++) {
+ x[i].classList.add(avail);
+ }
+ selected[avail] = c;
+ };
+
+ var ssaValueClicked = function(event) {
+ ssaElemClicked(this, event, highlights, highlighted);
+ }
+
+ var ssaBlockClicked = function(event) {
+ ssaElemClicked(this, event, outlines, outlined);
+ }
+
+ var ssavalues = document.getElementsByClassName("ssa-value");
+ for (var i = 0; i < ssavalues.length; i++) {
+ ssavalues[i].addEventListener('click', ssaValueClicked);
+ }
+
+ var ssalongvalues = document.getElementsByClassName("ssa-long-value");
+ for (var i = 0; i < ssalongvalues.length; i++) {
+ // don't attach listeners to li nodes, just the spans they contain
+ if (ssalongvalues[i].nodeName == "SPAN") {
+ ssalongvalues[i].addEventListener('click', ssaValueClicked);
+ }
+ }
+
+ var ssablocks = document.getElementsByClassName("ssa-block");
+ for (var i = 0; i < ssablocks.length; i++) {
+ ssablocks[i].addEventListener('click', ssaBlockClicked);
+ }
+};
+
+function toggle_visibility(id) {
+ var e = document.getElementById(id);
+ if(e.style.display == 'block')
+ e.style.display = 'none';
+ else
+ e.style.display = 'block';
+}
+</script>
+
+</head>`)
+ // TODO: Add javascript click handlers for blocks
+ // to outline that block across all phases
+ w.WriteString("<body>")
+ w.WriteString("<h1>")
+ w.WriteString(html.EscapeString(name))
+ w.WriteString("</h1>")
+ w.WriteString(`
+<a href="#" onclick="toggle_visibility('help');" id="helplink">help</a>
+<div id="help">
+
+<p>
+Click on a value or block to toggle highlighting of that value/block and its uses.
+Values and blocks are highlighted by ID, which may vary across passes.
+(TODO: Fix this.)
+</p>
+
+<p>
+Faded-out values and blocks are dead code that has not been eliminated.
+</p>
+
+<p>
+Values printed in italics have a dependency cycle.
+</p>
+
+</div>
+`)
+ w.WriteString("<table>")
+ w.WriteString("<tr>")
+}
+
+func (w *HTMLWriter) Close() {
+ if w == nil {
+ return
+ }
+ w.WriteString("</tr>")
+ w.WriteString("</table>")
+ w.WriteString("</body>")
+ w.WriteString("</html>")
+ w.File.Close()
+}
+
+// WriteFunc writes f in a column headed by title.
+func (w *HTMLWriter) WriteFunc(title string, f *Func) {
+ if w == nil {
+ return // avoid generating HTML just to discard it
+ }
+ w.WriteColumn(title, f.HTML())
+ // TODO: Add visual representation of f's CFG.
+}
+
+// WriteColumn writes raw HTML in a column headed by title.
+// It is intended for pre- and post-compilation log output.
+func (w *HTMLWriter) WriteColumn(title string, html string) {
+ if w == nil {
+ return
+ }
+ w.WriteString("<td>")
+ w.WriteString("<h2>" + title + "</h2>")
+ w.WriteString(html)
+ w.WriteString("</td>")
+}
+
+func (w *HTMLWriter) Printf(msg string, v ...interface{}) {
+ if _, err := fmt.Fprintf(w.File, msg, v...); err != nil {
+ w.Fatalf(0, "%v", err)
+ }
+}
+
+func (w *HTMLWriter) WriteString(s string) {
+ if _, err := w.File.WriteString(s); err != nil {
+ w.Fatalf(0, "%v", err)
+ }
+}
+
+func (v *Value) HTML() string {
+ // TODO: Using the value ID as the class ignores the fact
+ // that value IDs get recycled and that some values
+ // are transmuted into other values.
+ return fmt.Sprintf("<span class=\"%[1]s ssa-value\">%[1]s</span>", v.String())
+}
+
+func (v *Value) LongHTML() string {
+ // TODO: Any intra-value formatting?
+ // I'm wary of adding too much visual noise,
+ // but a little bit might be valuable.
+	// We already have visual noise in the form of punctuation;
+	// maybe we could replace some of that with formatting.
+ s := fmt.Sprintf("<span class=\"%s ssa-long-value\">", v.String())
+ s += fmt.Sprintf("%s = %s", v.HTML(), v.Op.String())
+ s += " <" + html.EscapeString(v.Type.String()) + ">"
+ if v.AuxInt != 0 {
+ s += fmt.Sprintf(" [%d]", v.AuxInt)
+ }
+ if v.Aux != nil {
+ if _, ok := v.Aux.(string); ok {
+ s += html.EscapeString(fmt.Sprintf(" {%q}", v.Aux))
+ } else {
+ s += html.EscapeString(fmt.Sprintf(" {%v}", v.Aux))
+ }
+ }
+ for _, a := range v.Args {
+ s += fmt.Sprintf(" %s", a.HTML())
+ }
+ r := v.Block.Func.RegAlloc
+ if int(v.ID) < len(r) && r[v.ID] != nil {
+ s += " : " + r[v.ID].Name()
+ }
+
+ s += "</span>"
+ return s
+}
+
+func (b *Block) HTML() string {
+ // TODO: Using the value ID as the class ignores the fact
+ // that value IDs get recycled and that some values
+ // are transmuted into other values.
+ return fmt.Sprintf("<span class=\"%[1]s ssa-block\">%[1]s</span>", html.EscapeString(b.String()))
+}
+
+func (b *Block) LongHTML() string {
+ // TODO: improve this for HTML?
+ s := fmt.Sprintf("<span class=\"%s ssa-block\">%s</span>", html.EscapeString(b.String()), html.EscapeString(b.Kind.String()))
+ if b.Aux != nil {
+ s += html.EscapeString(fmt.Sprintf(" {%v}", b.Aux))
+ }
+ if b.Control != nil {
+ s += fmt.Sprintf(" %s", b.Control.HTML())
+ }
+ if len(b.Succs) > 0 {
+ s += " →" // right arrow
+ for _, c := range b.Succs {
+ s += " " + c.HTML()
+ }
+ }
+ switch b.Likely {
+ case BranchUnlikely:
+ s += " (unlikely)"
+ case BranchLikely:
+ s += " (likely)"
+ }
+ return s
+}
+
+func (f *Func) HTML() string {
+ var buf bytes.Buffer
+ fmt.Fprint(&buf, "<code>")
+ p := htmlFuncPrinter{w: &buf}
+ fprintFunc(p, f)
+
+ // fprintFunc(&buf, f) // TODO: HTML, not text, <br /> for line breaks, etc.
+ fmt.Fprint(&buf, "</code>")
+ return buf.String()
+}
+
+type htmlFuncPrinter struct {
+ w io.Writer
+}
+
+func (p htmlFuncPrinter) header(f *Func) {}
+
+func (p htmlFuncPrinter) startBlock(b *Block, reachable bool) {
+	// TODO: Make blocks collapsible?
+ var dead string
+ if !reachable {
+ dead = "dead-block"
+ }
+ fmt.Fprintf(p.w, "<ul class=\"%s ssa-print-func %s\">", b, dead)
+ fmt.Fprintf(p.w, "<li class=\"ssa-start-block\">%s:", b.HTML())
+ if len(b.Preds) > 0 {
+ io.WriteString(p.w, " ←") // left arrow
+ for _, pred := range b.Preds {
+ fmt.Fprintf(p.w, " %s", pred.HTML())
+ }
+ }
+ io.WriteString(p.w, "</li>")
+ if len(b.Values) > 0 { // start list of values
+ io.WriteString(p.w, "<li class=\"ssa-value-list\">")
+ io.WriteString(p.w, "<ul>")
+ }
+}
+
+func (p htmlFuncPrinter) endBlock(b *Block) {
+ if len(b.Values) > 0 { // end list of values
+ io.WriteString(p.w, "</ul>")
+ io.WriteString(p.w, "</li>")
+ }
+ io.WriteString(p.w, "<li class=\"ssa-end-block\">")
+ fmt.Fprint(p.w, b.LongHTML())
+ io.WriteString(p.w, "</li>")
+ io.WriteString(p.w, "</ul>")
+ // io.WriteString(p.w, "</span>")
+}
+
+func (p htmlFuncPrinter) value(v *Value, live bool) {
+ var dead string
+ if !live {
+ dead = "dead-value"
+ }
+ fmt.Fprintf(p.w, "<li class=\"ssa-long-value %s\">", dead)
+ fmt.Fprint(p.w, v.LongHTML())
+ io.WriteString(p.w, "</li>")
+}
+
+func (p htmlFuncPrinter) startDepCycle() {
+ fmt.Fprintln(p.w, "<span class=\"depcycle\">")
+}
+
+func (p htmlFuncPrinter) endDepCycle() {
+ fmt.Fprintln(p.w, "</span>")
+}
+
+func (p htmlFuncPrinter) named(n LocalSlot, vals []*Value) {
+ // TODO
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+type ID int32
+
+// idAlloc provides an allocator for unique integers.
+type idAlloc struct {
+ last ID
+}
+
+// get allocates an ID and returns it.
+func (a *idAlloc) get() ID {
+ x := a.last
+ x++
+ if x == 1<<31-1 {
+ panic("too many ids for this function")
+ }
+ a.last = x
+ return x
+}
+
+// num returns the maximum ID ever returned + 1.
+func (a *idAlloc) num() int {
+ return int(a.last + 1)
+}
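The allocator above can be exercised on its own. The sketch below copies the `idAlloc` type verbatim (for illustration only, outside the `ssa` package) to show the two invariants its callers rely on: IDs are handed out starting at 1, and `num` reports one past the last ID returned.

```go
package main

import "fmt"

// A copy of idAlloc, for illustration only: IDs are handed out
// starting at 1, and num reports one past the last ID returned.
type ID int32

type idAlloc struct{ last ID }

// get allocates the next ID.
func (a *idAlloc) get() ID {
	x := a.last
	x++
	if x == 1<<31-1 {
		panic("too many ids for this function")
	}
	a.last = x
	return x
}

// num returns one more than the maximum ID ever returned.
func (a *idAlloc) num() int { return int(a.last + 1) }

func main() {
	var a idAlloc
	fmt.Println(a.get(), a.get(), a.get()) // 1 2 3
	fmt.Println(a.num())                   // 4
}
```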
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// layout orders basic blocks in f with the goal of minimizing control flow instructions.
+// After this phase returns, the order of f.Blocks matters and is the order
+// in which those blocks will appear in the assembly output.
+func layout(f *Func) {
+ order := make([]*Block, 0, f.NumBlocks())
+ scheduled := make([]bool, f.NumBlocks())
+ idToBlock := make([]*Block, f.NumBlocks())
+ indegree := make([]int, f.NumBlocks())
+ posdegree := f.newSparseSet(f.NumBlocks()) // blocks with positive remaining degree
+ defer f.retSparseSet(posdegree)
+ zerodegree := f.newSparseSet(f.NumBlocks()) // blocks with zero remaining degree
+ defer f.retSparseSet(zerodegree)
+
+ // Initialize indegree of each block
+ for _, b := range f.Blocks {
+ idToBlock[b.ID] = b
+ indegree[b.ID] = len(b.Preds)
+ if len(b.Preds) == 0 {
+ zerodegree.add(b.ID)
+ } else {
+ posdegree.add(b.ID)
+ }
+ }
+
+ bid := f.Entry.ID
+blockloop:
+ for {
+ // add block to schedule
+ b := idToBlock[bid]
+ order = append(order, b)
+ scheduled[bid] = true
+ if len(order) == len(f.Blocks) {
+ break
+ }
+
+ for _, c := range b.Succs {
+ indegree[c.ID]--
+ if indegree[c.ID] == 0 {
+ posdegree.remove(c.ID)
+ zerodegree.add(c.ID)
+ }
+ }
+
+ // Pick the next block to schedule
+ // Pick among the successor blocks that have not been scheduled yet.
+
+ // Use likely direction if we have it.
+ var likely *Block
+ switch b.Likely {
+ case BranchLikely:
+ likely = b.Succs[0]
+ case BranchUnlikely:
+ likely = b.Succs[1]
+ }
+ if likely != nil && !scheduled[likely.ID] {
+ bid = likely.ID
+ continue
+ }
+
+ // Use degree for now.
+ bid = 0
+ mindegree := f.NumBlocks()
+ for _, c := range order[len(order)-1].Succs {
+ if scheduled[c.ID] {
+ continue
+ }
+ if indegree[c.ID] < mindegree {
+ mindegree = indegree[c.ID]
+ bid = c.ID
+ }
+ }
+ if bid != 0 {
+ continue
+ }
+ // TODO: improve this part
+ // No successor of the previously scheduled block works.
+ // Pick a zero-degree block if we can.
+ for zerodegree.size() > 0 {
+ cid := zerodegree.pop()
+ if !scheduled[cid] {
+ bid = cid
+ continue blockloop
+ }
+ }
+ // Still nothing, pick any block.
+ for {
+ cid := posdegree.pop()
+ if !scheduled[cid] {
+ bid = cid
+ continue blockloop
+ }
+ }
+ b.Fatalf("no block available for layout")
+ }
+ f.Blocks = order
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "fmt"
+)
+
+type loop struct {
+ header *Block // The header node of this (reducible) loop
+ outer *loop // loop containing this loop
+ // Next two fields not currently used, but cheap to maintain,
+ // and aid in computation of inner-ness and list of blocks.
+ nBlocks int32 // Number of blocks in this loop but not within inner loops
+ isInner bool // True if never discovered to contain a loop
+}
+
+// outerinner records that outer contains inner
+func (sdom sparseTree) outerinner(outer, inner *loop) {
+ oldouter := inner.outer
+ if oldouter == nil || sdom.isAncestorEq(oldouter.header, outer.header) {
+ inner.outer = outer
+ outer.isInner = false
+ }
+}
+
+type loopnest struct {
+ f *Func
+ b2l []*loop
+ po []*Block
+ sdom sparseTree
+ loops []*loop
+}
+
+func min8(a, b int8) int8 {
+ if a < b {
+ return a
+ }
+ return b
+}
+
+func max8(a, b int8) int8 {
+ if a > b {
+ return a
+ }
+ return b
+}
+
+const (
+ blDEFAULT = 0
+ blMin = blDEFAULT
+ blCALL = 1
+ blRET = 2
+ blEXIT = 3
+)
+
+var bllikelies [4]string = [4]string{"default", "call", "ret", "exit"}
+
+func describePredictionAgrees(b *Block, prediction BranchPrediction) string {
+ s := ""
+ if prediction == b.Likely {
+ s = " (agrees with previous)"
+ } else if b.Likely != BranchUnknown {
+ s = " (disagrees with previous, ignored)"
+ }
+ return s
+}
+
+func describeBranchPrediction(f *Func, b *Block, likely, not int8, prediction BranchPrediction) {
+ f.Config.Warnl(int(b.Line), "Branch prediction rule %s < %s%s",
+ bllikelies[likely-blMin], bllikelies[not-blMin], describePredictionAgrees(b, prediction))
+}
+
+func likelyadjust(f *Func) {
+ // The values assigned to certain and local only matter
+ // in their rank order. 0 is default, more positive
+ // is less likely. It's possible to assign a negative
+ // unlikeliness (though not currently the case).
+ certain := make([]int8, f.NumBlocks()) // In the long run, all outcomes are at least this bad. Mainly for Exit
+ local := make([]int8, f.NumBlocks()) // for our immediate predecessors.
+
+ nest := loopnestfor(f)
+ po := nest.po
+ b2l := nest.b2l
+
+ for _, b := range po {
+ switch b.Kind {
+ case BlockExit:
+ // Very unlikely.
+ local[b.ID] = blEXIT
+ certain[b.ID] = blEXIT
+
+ // Ret, it depends.
+ case BlockRet, BlockRetJmp:
+ local[b.ID] = blRET
+ certain[b.ID] = blRET
+
+ // Calls. TODO not all calls are equal, names give useful clues.
+ // Any name-based heuristics are only relative to other calls,
+ // and less influential than inferences from loop structure.
+ case BlockCall:
+ local[b.ID] = blCALL
+ certain[b.ID] = max8(blCALL, certain[b.Succs[0].ID])
+
+ default:
+ if len(b.Succs) == 1 {
+ certain[b.ID] = certain[b.Succs[0].ID]
+ } else if len(b.Succs) == 2 {
+				// If a successor is an unvisited backedge, it's in a loop and we don't care.
+				// Its default unlikeliness is also zero, which is consistent with favoring loop edges.
+ // Notice that this can act like a "reset" on unlikeliness at loops; the
+ // default "everything returns" unlikeliness is erased by min with the
+ // backedge likeliness; however a loop with calls on every path will be
+ // tagged with call cost. Net effect is that loop entry is favored.
+ b0 := b.Succs[0].ID
+ b1 := b.Succs[1].ID
+ certain[b.ID] = min8(certain[b0], certain[b1])
+
+ l := b2l[b.ID]
+ l0 := b2l[b0]
+ l1 := b2l[b1]
+
+ prediction := b.Likely
+ // Weak loop heuristic -- both source and at least one dest are in loops,
+ // and there is a difference in the destinations.
+ // TODO what is best arrangement for nested loops?
+ if l != nil && l0 != l1 {
+ noprediction := false
+ switch {
+ // prefer not to exit loops
+ case l1 == nil:
+ prediction = BranchLikely
+ case l0 == nil:
+ prediction = BranchUnlikely
+
+ // prefer to stay in loop, not exit to outer.
+ case l == l0:
+ prediction = BranchLikely
+ case l == l1:
+ prediction = BranchUnlikely
+ default:
+ noprediction = true
+ }
+ if f.pass.debug > 0 && !noprediction {
+ f.Config.Warnl(int(b.Line), "Branch prediction rule stay in loop%s",
+ describePredictionAgrees(b, prediction))
+ }
+
+ } else {
+ // Lacking loop structure, fall back on heuristics.
+ if certain[b1] > certain[b0] {
+ prediction = BranchLikely
+ if f.pass.debug > 0 {
+ describeBranchPrediction(f, b, certain[b0], certain[b1], prediction)
+ }
+ } else if certain[b0] > certain[b1] {
+ prediction = BranchUnlikely
+ if f.pass.debug > 0 {
+ describeBranchPrediction(f, b, certain[b1], certain[b0], prediction)
+ }
+ } else if local[b1] > local[b0] {
+ prediction = BranchLikely
+ if f.pass.debug > 0 {
+ describeBranchPrediction(f, b, local[b0], local[b1], prediction)
+ }
+ } else if local[b0] > local[b1] {
+ prediction = BranchUnlikely
+ if f.pass.debug > 0 {
+ describeBranchPrediction(f, b, local[b1], local[b0], prediction)
+ }
+ }
+ }
+ if b.Likely != prediction {
+ if b.Likely == BranchUnknown {
+ b.Likely = prediction
+ }
+ }
+ }
+ }
+ if f.pass.debug > 2 {
+ f.Config.Warnl(int(b.Line), "BP: Block %s, local=%s, certain=%s", b, bllikelies[local[b.ID]-blMin], bllikelies[certain[b.ID]-blMin])
+ }
+
+ }
+}
+
+func (l *loop) String() string {
+ return fmt.Sprintf("hdr:%s", l.header)
+}
+
+func (l *loop) LongString() string {
+ i := ""
+ o := ""
+ if l.isInner {
+ i = ", INNER"
+ }
+ if l.outer != nil {
+ o = ", o=" + l.outer.header.String()
+ }
+ return fmt.Sprintf("hdr:%s%s%s", l.header, i, o)
+}
+
+// nearestOuterLoop returns the outer loop of l most nearly
+// containing block b; that loop's header must dominate b. l itself
+// is assumed not to be that loop. For acceptable performance,
+// we rely on loop nests not being terribly deep.
+func (l *loop) nearestOuterLoop(sdom sparseTree, b *Block) *loop {
+ var o *loop
+ for o = l.outer; o != nil && !sdom.isAncestorEq(o.header, b); o = o.outer {
+ }
+ return o
+}
+
+func loopnestfor(f *Func) *loopnest {
+ po := postorder(f)
+ dom := dominators(f)
+ sdom := newSparseTree(f, dom)
+ b2l := make([]*loop, f.NumBlocks())
+ loops := make([]*loop, 0)
+
+ // Reducible-loop-nest-finding.
+ for _, b := range po {
+ if f.pass.debug > 3 {
+ fmt.Printf("loop finding (0) at %s\n", b)
+ }
+
+ var innermost *loop // innermost header reachable from this block
+
+ // IF any successor s of b is in a loop headed by h
+ // AND h dominates b
+ // THEN b is in the loop headed by h.
+ //
+ // Choose the first/innermost such h.
+ //
+		// IF s itself dominates b, then s is a loop header;
+ // and there may be more than one such s.
+ // Since there's at most 2 successors, the inner/outer ordering
+ // between them can be established with simple comparisons.
+ for _, bb := range b.Succs {
+ l := b2l[bb.ID]
+
+ if sdom.isAncestorEq(bb, b) { // Found a loop header
+ if l == nil {
+ l = &loop{header: bb, isInner: true}
+ loops = append(loops, l)
+ b2l[bb.ID] = l
+ }
+ } else { // Perhaps a loop header is inherited.
+ // is there any loop containing our successor whose
+ // header dominates b?
+ if l != nil && !sdom.isAncestorEq(l.header, b) {
+ l = l.nearestOuterLoop(sdom, b)
+ }
+ }
+
+ if l == nil || innermost == l {
+ continue
+ }
+
+ if innermost == nil {
+ innermost = l
+ continue
+ }
+
+ if sdom.isAncestor(innermost.header, l.header) {
+ sdom.outerinner(innermost, l)
+ innermost = l
+ } else if sdom.isAncestor(l.header, innermost.header) {
+ sdom.outerinner(l, innermost)
+ }
+ }
+
+ if innermost != nil {
+ b2l[b.ID] = innermost
+ innermost.nBlocks++
+ }
+ }
+ if f.pass.debug > 1 && len(loops) > 0 {
+ fmt.Printf("Loops in %s:\n", f.Name)
+ for _, l := range loops {
+ fmt.Printf("%s, b=", l.LongString())
+ for _, b := range f.Blocks {
+ if b2l[b.ID] == l {
+ fmt.Printf(" %s", b)
+ }
+ }
+ fmt.Print("\n")
+ }
+ fmt.Printf("Nonloop blocks in %s:", f.Name)
+ for _, b := range f.Blocks {
+ if b2l[b.ID] == nil {
+ fmt.Printf(" %s", b)
+ }
+ }
+ fmt.Print("\n")
+ }
+ return &loopnest{f, b2l, po, sdom, loops}
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import "fmt"
+
+// A Location is a place where an ssa variable can reside.
+type Location interface {
+ Name() string // name to use in assembly templates: %rax, 16(%rsp), ...
+}
+
+// A Register is a machine register, like %rax.
+// They are numbered densely from 0 (for each architecture).
+type Register struct {
+ Num int32
+ name string
+}
+
+func (r *Register) Name() string {
+ return r.name
+}
+
+// A LocalSlot is a location in the stack frame.
+// It is (possibly a subpiece of) a PPARAM, PPARAMOUT, or PAUTO ONAME node.
+type LocalSlot struct {
+ N GCNode // an ONAME *gc.Node representing a variable on the stack
+ Type Type // type of slot
+ Off int64 // offset of slot in N
+}
+
+func (s LocalSlot) Name() string {
+ if s.Off == 0 {
+ return fmt.Sprintf("%s[%s]", s.N, s.Type)
+ }
+ return fmt.Sprintf("%s+%d[%s]", s.N, s.Off, s.Type)
+}
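The `Name` format above is easiest to see with concrete values. This standalone sketch substitutes plain string types for the real `GCNode` and `Type` interfaces (an assumption made only so the example is self-contained) and reproduces the same two formatting branches.

```go
package main

import "fmt"

// Stub string types stand in for the compiler's GCNode and Type
// interfaces; they exist only to make this sketch self-contained.
type node string
type typ string

// LocalSlot mirrors the ssa.LocalSlot formatting logic:
// a slot at offset 0 prints as N[Type], otherwise N+Off[Type].
type LocalSlot struct {
	N    node
	Type typ
	Off  int64
}

func (s LocalSlot) Name() string {
	if s.Off == 0 {
		return fmt.Sprintf("%s[%s]", s.N, s.Type)
	}
	return fmt.Sprintf("%s+%d[%s]", s.N, s.Off, s.Type)
}

func main() {
	fmt.Println(LocalSlot{N: "x", Type: "int64"}.Name())         // x[int64]
	fmt.Println(LocalSlot{N: "p", Type: "int32", Off: 8}.Name()) // p+8[int32]
}
```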
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// convert to machine-dependent ops
+func lower(f *Func) {
+ // repeat rewrites until we find no more rewrites
+ applyRewrite(f, f.Config.lowerBlock, f.Config.lowerValue)
+}
+
+// checkLower checks for unlowered opcodes and fails if we find one.
+func checkLower(f *Func) {
+ // Needs to be a separate phase because it must run after both
+ // lowering and a subsequent dead code elimination (because lowering
+ // rules may leave dead generic ops behind).
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ if !opcodeTable[v.Op].generic {
+ continue // lowered
+ }
+ switch v.Op {
+ case OpSP, OpSB, OpInitMem, OpArg, OpPhi, OpVarDef, OpVarKill, OpVarLive:
+ continue // ok not to lower
+ }
+ s := "not lowered: " + v.Op.String() + " " + v.Type.SimpleString()
+ for _, a := range v.Args {
+ s += " " + a.Type.SimpleString()
+ }
+ f.Unimplementedf("%s", s)
+ }
+ }
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// A copy of the code in ../gc/subr.go.
+// We can't use it directly because it would generate
+// an import cycle. TODO: move to a common support package.
+
+// argument passing to/from
+// smagic and umagic
+type magic struct {
+ W int // input for both - width
+ S int // output for both - shift
+ Bad int // output for both - unexpected failure
+
+ // magic multiplier for signed literal divisors
+ Sd int64 // input - literal divisor
+ Sm int64 // output - multiplier
+
+ // magic multiplier for unsigned literal divisors
+ Ud uint64 // input - literal divisor
+ Um uint64 // output - multiplier
+ Ua int // output - adder
+}
+
+// smagic computes the magic number for signed division.
+// See Hacker's Delight, chapter 10.
+func smagic(m *magic) {
+ var mask uint64
+
+ m.Bad = 0
+ switch m.W {
+ default:
+ m.Bad = 1
+ return
+
+ case 8:
+ mask = 0xff
+
+ case 16:
+ mask = 0xffff
+
+ case 32:
+ mask = 0xffffffff
+
+ case 64:
+ mask = 0xffffffffffffffff
+ }
+
+ two31 := mask ^ (mask >> 1)
+
+ p := m.W - 1
+ ad := uint64(m.Sd)
+ if m.Sd < 0 {
+ ad = -uint64(m.Sd)
+ }
+
+ // bad denominators
+ if ad == 0 || ad == 1 || ad == two31 {
+ m.Bad = 1
+ return
+ }
+
+ t := two31
+ ad &= mask
+
+ anc := t - 1 - t%ad
+ anc &= mask
+
+ q1 := two31 / anc
+ r1 := two31 - q1*anc
+ q1 &= mask
+ r1 &= mask
+
+ q2 := two31 / ad
+ r2 := two31 - q2*ad
+ q2 &= mask
+ r2 &= mask
+
+ var delta uint64
+ for {
+ p++
+ q1 <<= 1
+ r1 <<= 1
+ q1 &= mask
+ r1 &= mask
+ if r1 >= anc {
+ q1++
+ r1 -= anc
+ q1 &= mask
+ r1 &= mask
+ }
+
+ q2 <<= 1
+ r2 <<= 1
+ q2 &= mask
+ r2 &= mask
+ if r2 >= ad {
+ q2++
+ r2 -= ad
+ q2 &= mask
+ r2 &= mask
+ }
+
+ delta = ad - r2
+ delta &= mask
+ if q1 < delta || (q1 == delta && r1 == 0) {
+ continue
+ }
+
+ break
+ }
+
+ m.Sm = int64(q2 + 1)
+ if uint64(m.Sm)&two31 != 0 {
+ m.Sm |= ^int64(mask)
+ }
+ m.S = p - m.W
+}
+
+// umagic computes the magic number for unsigned division.
+// See Hacker's Delight, chapter 10.
+func umagic(m *magic) {
+ var mask uint64
+
+ m.Bad = 0
+ m.Ua = 0
+
+ switch m.W {
+ default:
+ m.Bad = 1
+ return
+
+ case 8:
+ mask = 0xff
+
+ case 16:
+ mask = 0xffff
+
+ case 32:
+ mask = 0xffffffff
+
+ case 64:
+ mask = 0xffffffffffffffff
+ }
+
+ two31 := mask ^ (mask >> 1)
+
+ m.Ud &= mask
+ if m.Ud == 0 || m.Ud == two31 {
+ m.Bad = 1
+ return
+ }
+
+ nc := mask - (-m.Ud&mask)%m.Ud
+ p := m.W - 1
+
+ q1 := two31 / nc
+ r1 := two31 - q1*nc
+ q1 &= mask
+ r1 &= mask
+
+ q2 := (two31 - 1) / m.Ud
+ r2 := (two31 - 1) - q2*m.Ud
+ q2 &= mask
+ r2 &= mask
+
+ var delta uint64
+ for {
+ p++
+ if r1 >= nc-r1 {
+ q1 <<= 1
+ q1++
+ r1 <<= 1
+ r1 -= nc
+ } else {
+ q1 <<= 1
+ r1 <<= 1
+ }
+
+ q1 &= mask
+ r1 &= mask
+ if r2+1 >= m.Ud-r2 {
+ if q2 >= two31-1 {
+ m.Ua = 1
+ }
+
+ q2 <<= 1
+ q2++
+ r2 <<= 1
+ r2++
+ r2 -= m.Ud
+ } else {
+ if q2 >= two31 {
+ m.Ua = 1
+ }
+
+ q2 <<= 1
+ r2 <<= 1
+ r2++
+ }
+
+ q2 &= mask
+ r2 &= mask
+
+ delta = m.Ud - 1 - r2
+ delta &= mask
+
+ if p < m.W+m.W {
+ if q1 < delta || (q1 == delta && r1 == 0) {
+ continue
+ }
+ }
+
+ break
+ }
+
+ m.Um = q2 + 1
+ m.S = p - m.W
+}
+
+// adaptors for use by rewrite rules
+func smagic64ok(d int64) bool {
+ m := magic{W: 64, Sd: d}
+ smagic(&m)
+ return m.Bad == 0
+}
+func smagic64m(d int64) int64 {
+ m := magic{W: 64, Sd: d}
+ smagic(&m)
+ return m.Sm
+}
+func smagic64s(d int64) int64 {
+ m := magic{W: 64, Sd: d}
+ smagic(&m)
+ return int64(m.S)
+}
+
+func umagic64ok(d int64) bool {
+ m := magic{W: 64, Ud: uint64(d)}
+ umagic(&m)
+ return m.Bad == 0
+}
+func umagic64m(d int64) int64 {
+ m := magic{W: 64, Ud: uint64(d)}
+ umagic(&m)
+ return int64(m.Um)
+}
+func umagic64s(d int64) int64 {
+ m := magic{W: 64, Ud: uint64(d)}
+ umagic(&m)
+ return int64(m.S)
+}
+func umagic64a(d int64) bool {
+ m := magic{W: 64, Ud: uint64(d)}
+ umagic(&m)
+ return m.Ua != 0
+}
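The multiply-and-shift trick these adaptors rely on can be sanity-checked exhaustively at a small width. This is an independent sketch, not the compiler's `smagic`/`umagic`: `checkMagic` and its shift/multiplier choice (shift `s = W + ceil(log2 d)`, multiplier `m = ceil(2^s / d)`) are the simple unsigned variant, verified by brute force over every 16-bit dividend.

```go
package main

import "fmt"

// checkMagic verifies the magic-multiplier identity for W-bit
// unsigned division by a constant d: pick s = W + ceil(log2 d) and
// m = ceil(2^s / d), then check exhaustively that (x*m)>>s == x/d
// for every W-bit x. (Sketch only; independent of smagic/umagic.)
func checkMagic(W uint, d uint64) bool {
	s := W
	for uint64(1)<<(s-W) < d {
		s++
	}
	m := (uint64(1)<<s + d - 1) / d // ceil(2^s / d)
	for x := uint64(0); x < 1<<W; x++ {
		if (x*m)>>s != x/d {
			return false
		}
	}
	return true
}

func main() {
	for _, d := range []uint64{3, 5, 7, 10, 641} {
		fmt.Printf("d=%d ok=%v\n", d, checkMagic(16, d))
	}
}
```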
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// TODO: return value from newobject/newarray is non-nil.
+
+// nilcheckelim eliminates unnecessary nil checks.
+func nilcheckelim(f *Func) {
+ // A nil check is redundant if the same nil check was successful in a
+ // dominating block. The efficacy of this pass depends heavily on the
+ // efficacy of the cse pass.
+ idom := dominators(f)
+ domTree := make([][]*Block, f.NumBlocks())
+
+ // Create a block ID -> [dominees] mapping
+ for _, b := range f.Blocks {
+ if dom := idom[b.ID]; dom != nil {
+ domTree[dom.ID] = append(domTree[dom.ID], b)
+ }
+ }
+
+ // TODO: Eliminate more nil checks.
+ // We can recursively remove any chain of fixed offset calculations,
+ // i.e. struct fields and array elements, even with non-constant
+ // indices: x is non-nil iff x.a.b[i].c is.
+
+ type walkState int
+ const (
+ Work walkState = iota // clear nil check if we should and traverse to dominees regardless
+ RecPtr // record the pointer as being nil checked
+ ClearPtr
+ )
+
+ type bp struct {
+ block *Block // block, or nil in RecPtr/ClearPtr state
+ ptr *Value // if non-nil, ptr that is to be set/cleared in RecPtr/ClearPtr state
+ op walkState
+ }
+
+ work := make([]bp, 0, 256)
+ work = append(work, bp{block: f.Entry})
+
+ // map from value ID to bool indicating if value is known to be non-nil
+ // in the current dominator path being walked. This slice is updated by
+ // walkStates to maintain the known non-nil values.
+ nonNilValues := make([]bool, f.NumValues())
+
+ // make an initial pass identifying any non-nil values
+ for _, b := range f.Blocks {
+ // a value resulting from taking the address of a
+ // value, or a value constructed from an offset of a
+ // non-nil ptr (OpAddPtr) implies it is non-nil
+ for _, v := range b.Values {
+ if v.Op == OpAddr || v.Op == OpAddPtr {
+ nonNilValues[v.ID] = true
+ } else if v.Op == OpPhi {
+ // phis whose arguments are all non-nil
+ // are non-nil
+ argsNonNil := true
+ for _, a := range v.Args {
+ if !nonNilValues[a.ID] {
+ argsNonNil = false
+ }
+ }
+ if argsNonNil {
+ nonNilValues[v.ID] = true
+ }
+ }
+ }
+ }
+
+ // perform a depth first walk of the dominee tree
+ for len(work) > 0 {
+ node := work[len(work)-1]
+ work = work[:len(work)-1]
+
+ switch node.op {
+ case Work:
+ checked := checkedptr(node.block) // ptr being checked for nil/non-nil
+			nonnil := nonnilptr(node.block)   // ptr that is non-nil due to this block's pred
+
+ if checked != nil {
+ // already have a nilcheck in the dominator path, or this block is a success
+ // block for the same value it is checking
+ if nonNilValues[checked.ID] || checked == nonnil {
+ // Eliminate the nil check.
+ // The deadcode pass will remove vestigial values,
+ // and the fuse pass will join this block with its successor.
+
+ // Logging in the style of the former compiler -- and omit line 1,
+ // which is usually in generated code.
+ if f.Config.Debug_checknil() && int(node.block.Control.Line) > 1 {
+ f.Config.Warnl(int(node.block.Control.Line), "removed nil check")
+ }
+
+ switch node.block.Kind {
+ case BlockIf:
+ node.block.Kind = BlockFirst
+ node.block.Control = nil
+ case BlockCheck:
+ node.block.Kind = BlockPlain
+ node.block.Control = nil
+ default:
+ f.Fatalf("bad block kind in nilcheck %s", node.block.Kind)
+ }
+ }
+ }
+
+ if nonnil != nil && !nonNilValues[nonnil.ID] {
+ // this is a new nilcheck so add a ClearPtr node to clear the
+ // ptr from the map of nil checks once we traverse
+ // back up the tree
+ work = append(work, bp{op: ClearPtr, ptr: nonnil})
+ }
+
+ // add all dominated blocks to the work list
+ for _, w := range domTree[node.block.ID] {
+ work = append(work, bp{block: w})
+ }
+
+ if nonnil != nil && !nonNilValues[nonnil.ID] {
+ work = append(work, bp{op: RecPtr, ptr: nonnil})
+ }
+ case RecPtr:
+ nonNilValues[node.ptr.ID] = true
+ continue
+ case ClearPtr:
+ nonNilValues[node.ptr.ID] = false
+ continue
+ }
+ }
+}
+
+// checkedptr returns the Value, if any,
+// that is used in a nil check in b's Control op.
+func checkedptr(b *Block) *Value {
+ if b.Kind == BlockCheck {
+ return b.Control.Args[0]
+ }
+ if b.Kind == BlockIf && b.Control.Op == OpIsNonNil {
+ return b.Control.Args[0]
+ }
+ return nil
+}
+
+// nonnilptr returns the Value, if any,
+// that is non-nil due to b being the successor block
+// of an OpIsNonNil or OpNilCheck block for the value and having a single
+// predecessor.
+func nonnilptr(b *Block) *Value {
+ if len(b.Preds) == 1 {
+ bp := b.Preds[0]
+ if bp.Kind == BlockCheck {
+ return bp.Control.Args[0]
+ }
+ if bp.Kind == BlockIf && bp.Control.Op == OpIsNonNil && bp.Succs[0] == b {
+ return bp.Control.Args[0]
+ }
+ }
+ return nil
+}
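The Work/RecPtr/ClearPtr states above implement a recursion-free DFS in which a fact is valid only along the current dominator path: the RecPtr entry is pushed on top of the children so it pops first (recording the fact before the subtree is walked), and the ClearPtr entry is pushed beneath everything so the fact is erased once the walk retreats past the block that established it. A small Python sketch of the same stack-marker pattern (all names here are illustrative, not from the compiler):

```python
def facts_along_paths(tree, root, fact_at):
    """For each node, return the set of facts recorded by its strict
    ancestors in the tree, using the explicit-stack marker pattern."""
    WORK, REC, CLEAR = 0, 1, 2
    live = set()   # facts valid on the current root-to-node path
    out = {}
    stack = [(WORK, root)]
    while stack:
        op, n = stack.pop()
        if op == REC:          # entering the subtree: record the fact
            live.add(n)
            continue
        if op == CLEAR:        # leaving the subtree: erase the fact
            live.discard(n)
            continue
        out[n] = set(live)
        f = fact_at(n)
        new = f is not None and f not in live
        if new:
            stack.append((CLEAR, f))      # popped last, after the subtree
        for c in tree.get(n, []):
            stack.append((WORK, c))
        if new:
            stack.append((REC, f))        # popped first, before the subtree
    return out
```

Siblings see only their common ancestors' facts, which is exactly the invariant nilcheckelim needs for `nonNilValues`.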
--- /dev/null
+package ssa
+
+import (
+ "strconv"
+ "testing"
+)
+
+func BenchmarkNilCheckDeep1(b *testing.B) { benchmarkNilCheckDeep(b, 1) }
+func BenchmarkNilCheckDeep10(b *testing.B) { benchmarkNilCheckDeep(b, 10) }
+func BenchmarkNilCheckDeep100(b *testing.B) { benchmarkNilCheckDeep(b, 100) }
+func BenchmarkNilCheckDeep1000(b *testing.B) { benchmarkNilCheckDeep(b, 1000) }
+func BenchmarkNilCheckDeep10000(b *testing.B) { benchmarkNilCheckDeep(b, 10000) }
+
+// benchmarkNilCheckDeep is a stress test of nilcheckelim.
+// It uses the worst possible input: A linear string of
+// nil checks, none of which can be eliminated.
+// Run with multiple depths to observe big-O behavior.
+func benchmarkNilCheckDeep(b *testing.B, depth int) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+
+ var blocs []bloc
+ blocs = append(blocs,
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto(blockn(0)),
+ ),
+ )
+ for i := 0; i < depth; i++ {
+ blocs = append(blocs,
+ Bloc(blockn(i),
+ Valu(ptrn(i), OpAddr, ptrType, 0, nil, "sb"),
+ Valu(booln(i), OpIsNonNil, TypeBool, 0, nil, ptrn(i)),
+ If(booln(i), blockn(i+1), "exit"),
+ ),
+ )
+ }
+ blocs = append(blocs,
+ Bloc(blockn(depth), Goto("exit")),
+ Bloc("exit", Exit("mem")),
+ )
+
+ c := NewConfig("amd64", DummyFrontend{b}, nil, true)
+ fun := Fun(c, "entry", blocs...)
+
+ CheckFunc(fun.f)
+ b.SetBytes(int64(depth)) // helps for eyeballing linearity
+ b.ResetTimer()
+ b.ReportAllocs()
+
+ for i := 0; i < b.N; i++ {
+ nilcheckelim(fun.f)
+ }
+}
+
+func blockn(n int) string { return "b" + strconv.Itoa(n) }
+func ptrn(n int) string { return "p" + strconv.Itoa(n) }
+func booln(n int) string { return "c" + strconv.Itoa(n) }
+
+func isNilCheck(b *Block) bool {
+ return b.Kind == BlockIf && b.Control.Op == OpIsNonNil
+}
+
+// TestNilcheckSimple verifies that a second repeated nilcheck is removed.
+func TestNilcheckSimple(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("checkPtr")),
+ Bloc("checkPtr",
+ Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
+ Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool1", "secondCheck", "exit")),
+ Bloc("secondCheck",
+ Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool2", "extra", "exit")),
+ Bloc("extra",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ nilcheckelim(fun.f)
+
+ // clean up the removed nil check
+ fuse(fun.f)
+ deadcode(fun.f)
+
+ CheckFunc(fun.f)
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["secondCheck"] && isNilCheck(b) {
+ t.Errorf("secondCheck was not eliminated")
+ }
+ }
+}
+
+// TestNilcheckDomOrder ensures that nil check elimination isn't dependent
+// on the order of the dominees.
+func TestNilcheckDomOrder(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("checkPtr")),
+ Bloc("checkPtr",
+ Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
+ Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool1", "secondCheck", "exit")),
+ Bloc("exit",
+ Exit("mem")),
+ Bloc("secondCheck",
+ Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool2", "extra", "exit")),
+ Bloc("extra",
+ Goto("exit")))
+
+ CheckFunc(fun.f)
+ nilcheckelim(fun.f)
+
+ // clean up the removed nil check
+ fuse(fun.f)
+ deadcode(fun.f)
+
+ CheckFunc(fun.f)
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["secondCheck"] && isNilCheck(b) {
+ t.Errorf("secondCheck was not eliminated")
+ }
+ }
+}
+
+// TestNilcheckAddr verifies that nilchecks of OpAddr constructed values are removed.
+func TestNilcheckAddr(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("checkPtr")),
+ Bloc("checkPtr",
+ Valu("ptr1", OpAddr, ptrType, 0, nil, "sb"),
+ Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool1", "extra", "exit")),
+ Bloc("extra",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ nilcheckelim(fun.f)
+
+ // clean up the removed nil check
+ fuse(fun.f)
+ deadcode(fun.f)
+
+ CheckFunc(fun.f)
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["checkPtr"] && isNilCheck(b) {
+ t.Errorf("checkPtr was not eliminated")
+ }
+ }
+}
+
+// TestNilcheckAddPtr verifies that nilchecks of OpAddPtr constructed values are removed.
+func TestNilcheckAddPtr(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("checkPtr")),
+ Bloc("checkPtr",
+ Valu("off", OpConst64, TypeInt64, 20, nil),
+ Valu("ptr1", OpAddPtr, ptrType, 0, nil, "sb", "off"),
+ Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool1", "extra", "exit")),
+ Bloc("extra",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ nilcheckelim(fun.f)
+
+ // clean up the removed nil check
+ fuse(fun.f)
+ deadcode(fun.f)
+
+ CheckFunc(fun.f)
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["checkPtr"] && isNilCheck(b) {
+ t.Errorf("checkPtr was not eliminated")
+ }
+ }
+}
+
+// TestNilcheckPhi tests that nil checks of phis whose arguments are all
+// known to be non-nil are removed.
+func TestNilcheckPhi(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("sp", OpSP, TypeInvalid, 0, nil),
+ Valu("baddr", OpAddr, TypeBool, 0, "b", "sp"),
+ Valu("bool1", OpLoad, TypeBool, 0, nil, "baddr", "mem"),
+ If("bool1", "b1", "b2")),
+ Bloc("b1",
+ Valu("ptr1", OpAddr, ptrType, 0, nil, "sb"),
+ Goto("checkPtr")),
+ Bloc("b2",
+ Valu("ptr2", OpAddr, ptrType, 0, nil, "sb"),
+ Goto("checkPtr")),
+ // both ptr1 and ptr2 are guaranteed non-nil here
+ Bloc("checkPtr",
+ Valu("phi", OpPhi, ptrType, 0, nil, "ptr1", "ptr2"),
+ Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "phi"),
+ If("bool2", "extra", "exit")),
+ Bloc("extra",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ nilcheckelim(fun.f)
+
+ // clean up the removed nil check
+ fuse(fun.f)
+ deadcode(fun.f)
+
+ CheckFunc(fun.f)
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["checkPtr"] && isNilCheck(b) {
+ t.Errorf("checkPtr was not eliminated")
+ }
+ }
+}
+
+// TestNilcheckKeepRemove verifies that duplicate checks of the same pointer
+// are removed, but checks of different pointers are not.
+func TestNilcheckKeepRemove(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("checkPtr")),
+ Bloc("checkPtr",
+ Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
+ Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool1", "differentCheck", "exit")),
+ Bloc("differentCheck",
+ Valu("ptr2", OpLoad, ptrType, 0, nil, "sb", "mem"),
+ Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr2"),
+ If("bool2", "secondCheck", "exit")),
+ Bloc("secondCheck",
+ Valu("bool3", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool3", "extra", "exit")),
+ Bloc("extra",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ nilcheckelim(fun.f)
+
+ // clean up the removed nil check
+ fuse(fun.f)
+ deadcode(fun.f)
+
+ CheckFunc(fun.f)
+ foundDifferentCheck := false
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["secondCheck"] && isNilCheck(b) {
+ t.Errorf("secondCheck was not eliminated")
+ }
+ if b == fun.blocks["differentCheck"] && isNilCheck(b) {
+ foundDifferentCheck = true
+ }
+ }
+ if !foundDifferentCheck {
+ t.Errorf("removed differentCheck, but shouldn't have")
+ }
+}
+
+// TestNilcheckInFalseBranch tests that nil checks in the false branch of a
+// nilcheck block are *not* removed.
+func TestNilcheckInFalseBranch(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("checkPtr")),
+ Bloc("checkPtr",
+ Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
+ Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool1", "extra", "secondCheck")),
+ Bloc("secondCheck",
+ Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool2", "extra", "thirdCheck")),
+ Bloc("thirdCheck",
+ Valu("bool3", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool3", "extra", "exit")),
+ Bloc("extra",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ nilcheckelim(fun.f)
+
+ // clean up the removed nil check
+ fuse(fun.f)
+ deadcode(fun.f)
+
+ CheckFunc(fun.f)
+ foundSecondCheck := false
+ foundThirdCheck := false
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["secondCheck"] && isNilCheck(b) {
+ foundSecondCheck = true
+ }
+ if b == fun.blocks["thirdCheck"] && isNilCheck(b) {
+ foundThirdCheck = true
+ }
+ }
+ if !foundSecondCheck {
+ t.Errorf("removed secondCheck, but shouldn't have [false branch]")
+ }
+ if !foundThirdCheck {
+ t.Errorf("removed thirdCheck, but shouldn't have [false branch]")
+ }
+}
+
+// TestNilcheckUser verifies that a user nil check that dominates a generated nil check
+// will remove the generated nil check.
+func TestNilcheckUser(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("checkPtr")),
+ Bloc("checkPtr",
+ Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
+ Valu("nilptr", OpConstNil, ptrType, 0, nil),
+ Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
+ If("bool1", "secondCheck", "exit")),
+ Bloc("secondCheck",
+ Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool2", "extra", "exit")),
+ Bloc("extra",
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ // we need the opt here to rewrite the user nilcheck
+ opt(fun.f)
+ nilcheckelim(fun.f)
+
+ // clean up the removed nil check
+ fuse(fun.f)
+ deadcode(fun.f)
+
+ CheckFunc(fun.f)
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["secondCheck"] && isNilCheck(b) {
+ t.Errorf("secondCheck was not eliminated")
+ }
+ }
+}
+
+// TestNilcheckBug reproduces a bug in nilcheckelim found by compiling math/big
+func TestNilcheckBug(t *testing.T) {
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := NewConfig("amd64", DummyFrontend{t}, nil, true)
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto("checkPtr")),
+ Bloc("checkPtr",
+ Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
+ Valu("nilptr", OpConstNil, ptrType, 0, nil),
+ Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
+ If("bool1", "secondCheck", "couldBeNil")),
+ Bloc("couldBeNil",
+ Goto("secondCheck")),
+ Bloc("secondCheck",
+ Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ If("bool2", "extra", "exit")),
+ Bloc("extra",
+ // prevent fuse from eliminating this block
+ Valu("store", OpStore, TypeMem, 8, nil, "ptr1", "nilptr", "mem"),
+ Goto("exit")),
+ Bloc("exit",
+ Valu("phi", OpPhi, TypeMem, 0, nil, "mem", "store"),
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ // we need the opt here to rewrite the user nilcheck
+ opt(fun.f)
+ nilcheckelim(fun.f)
+
+ // clean up the removed nil check
+ fuse(fun.f)
+ deadcode(fun.f)
+
+ CheckFunc(fun.f)
+ foundSecondCheck := false
+ for _, b := range fun.f.Blocks {
+ if b == fun.blocks["secondCheck"] && isNilCheck(b) {
+ foundSecondCheck = true
+ }
+ }
+ if !foundSecondCheck {
+ t.Errorf("secondCheck was eliminated, but shouldn't have")
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import "fmt"
+
+// An Op encodes the specific operation that a Value performs.
+// Opcodes' semantics can be modified by the type and aux fields of the Value.
+// For instance, OpAdd can be 32 or 64 bit, signed or unsigned, float or complex, depending on Value.Type.
+// Semantics of each op are described in the opcode files in gen/*Ops.go.
+// There is one file for generic (architecture-independent) ops and one file
+// for each architecture.
+type Op int32
+
+type opInfo struct {
+ name string
+ asm int
+ reg regInfo
+ auxType auxType
+	argLen            int32 // the number of arguments, -1 if variable length
+ generic bool // this is a generic (arch-independent) opcode
+ rematerializeable bool // this op is rematerializeable
+ commutative bool // this operation is commutative (e.g. addition)
+}
+
+type inputInfo struct {
+ idx int // index in Args array
+ regs regMask // allowed input registers
+}
+
+type regInfo struct {
+ inputs []inputInfo // ordered in register allocation order
+ clobbers regMask
+ outputs []regMask // NOTE: values can only have 1 output for now.
+}
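A `regMask` is a bitmap over the architecture's registers; the generated table below annotates each mask with the register names it selects (e.g. `4294901760` is `.X0` through `.X15`). A quick Python sketch that decodes the AMD64 masks; the register-to-bit order here is inferred from those generated comments, so treat it as an assumption rather than a definition:

```python
# AMD64 register order as it appears in the generated mask comments:
# bits 0-15 are the GP registers, bits 16-31 are X0-X15, bit 32 is SB.
AMD64_REGS = (
    "AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 "
    "X0 X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 SB"
).split()

def decode_regmask(mask):
    """List the register names whose bits are set in mask."""
    return [r for i, r in enumerate(AMD64_REGS) if (mask >> i) & 1]
```

Decoding `2147483648` (a common `clobbers` value below) gives just `X15`.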
+
+type auxType int8
+
+const (
+ auxNone auxType = iota
+ auxBool // auxInt is 0/1 for false/true
+ auxInt8 // auxInt is an 8-bit integer
+ auxInt16 // auxInt is a 16-bit integer
+ auxInt32 // auxInt is a 32-bit integer
+ auxInt64 // auxInt is a 64-bit integer
+ auxFloat // auxInt is a float64 (encoded with math.Float64bits)
+	auxString       // aux is a string
+ auxSym // aux is a symbol
+ auxSymOff // aux is a symbol, auxInt is an offset
+ auxSymValAndOff // aux is a symbol, auxInt is a ValAndOff
+)
+
+// A ValAndOff is used by several opcodes. It holds
+// both a value and a pointer offset.
+// A ValAndOff is intended to be encoded into an AuxInt field.
+// The zero ValAndOff encodes a value of 0 and an offset of 0.
+// The high 32 bits hold a value.
+// The low 32 bits hold a pointer offset.
+type ValAndOff int64
+
+func (x ValAndOff) Val() int64 {
+ return int64(x) >> 32
+}
+func (x ValAndOff) Off() int64 {
+ return int64(int32(x))
+}
+func (x ValAndOff) Int64() int64 {
+ return int64(x)
+}
+func (x ValAndOff) String() string {
+ return fmt.Sprintf("val=%d,off=%d", x.Val(), x.Off())
+}
+
+// validVal reports whether the value can be used
+// as an argument to makeValAndOff.
+func validVal(val int64) bool {
+ return val == int64(int32(val))
+}
+
+// validOff reports whether the offset can be used
+// as an argument to makeValAndOff.
+func validOff(off int64) bool {
+ return off == int64(int32(off))
+}
+
+// validValAndOff reports whether we can fit the value and offset into
+// a ValAndOff value.
+func validValAndOff(val, off int64) bool {
+ if !validVal(val) {
+ return false
+ }
+ if !validOff(off) {
+ return false
+ }
+ return true
+}
+
+// makeValAndOff encodes a ValAndOff into an int64 suitable for storing in an AuxInt field.
+func makeValAndOff(val, off int64) int64 {
+ if !validValAndOff(val, off) {
+ panic("invalid makeValAndOff")
+ }
+ return ValAndOff(val<<32 + int64(uint32(off))).Int64()
+}
+
+func (x ValAndOff) canAdd(off int64) bool {
+ newoff := x.Off() + off
+ return newoff == int64(int32(newoff))
+}
+
+func (x ValAndOff) add(off int64) int64 {
+ if !x.canAdd(off) {
+ panic("invalid ValAndOff.add")
+ }
+ return makeValAndOff(x.Val(), x.Off()+off)
+}
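The encoding can be checked outside the compiler: pack `val` into the high 32 bits and `off` into the low 32 (storing `off` as its unsigned 32-bit pattern), then recover `val` with an arithmetic right shift and `off` with sign extension, mirroring `int64(x) >> 32` and `int64(int32(x))` above. A hedged Python equivalent (function names chosen here for illustration):

```python
U64 = (1 << 64) - 1

def make_val_and_off(val, off):
    """Pack 32-bit signed val (high half) and off (low half) into an int64 pattern."""
    assert -2**31 <= val < 2**31 and -2**31 <= off < 2**31
    return ((val << 32) + (off & 0xffffffff)) & U64

def vo_val(x):
    """High 32 bits: arithmetic shift of the signed 64-bit value (int64(x) >> 32)."""
    return (x - (1 << 64) if x >= (1 << 63) else x) >> 32

def vo_off(x):
    """Low 32 bits, sign-extended (int64(int32(x)))."""
    o = x & 0xffffffff
    return o - (1 << 32) if o >= (1 << 31) else o
```

Python's `>>` on negative ints floors like Go's arithmetic shift on `int64`, so negative values and offsets round-trip correctly.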
--- /dev/null
+// autogenerated: do not edit!
+// generated from gen/*Ops.go
+
+package ssa
+
+import "cmd/internal/obj/x86"
+
+const (
+ BlockInvalid BlockKind = iota
+
+ BlockAMD64EQ
+ BlockAMD64NE
+ BlockAMD64LT
+ BlockAMD64LE
+ BlockAMD64GT
+ BlockAMD64GE
+ BlockAMD64ULT
+ BlockAMD64ULE
+ BlockAMD64UGT
+ BlockAMD64UGE
+ BlockAMD64EQF
+ BlockAMD64NEF
+ BlockAMD64ORD
+ BlockAMD64NAN
+
+ BlockPlain
+ BlockIf
+ BlockCall
+ BlockCheck
+ BlockRet
+ BlockRetJmp
+ BlockExit
+ BlockFirst
+ BlockDead
+)
+
+var blockString = [...]string{
+ BlockInvalid: "BlockInvalid",
+
+ BlockAMD64EQ: "EQ",
+ BlockAMD64NE: "NE",
+ BlockAMD64LT: "LT",
+ BlockAMD64LE: "LE",
+ BlockAMD64GT: "GT",
+ BlockAMD64GE: "GE",
+ BlockAMD64ULT: "ULT",
+ BlockAMD64ULE: "ULE",
+ BlockAMD64UGT: "UGT",
+ BlockAMD64UGE: "UGE",
+ BlockAMD64EQF: "EQF",
+ BlockAMD64NEF: "NEF",
+ BlockAMD64ORD: "ORD",
+ BlockAMD64NAN: "NAN",
+
+ BlockPlain: "Plain",
+ BlockIf: "If",
+ BlockCall: "Call",
+ BlockCheck: "Check",
+ BlockRet: "Ret",
+ BlockRetJmp: "RetJmp",
+ BlockExit: "Exit",
+ BlockFirst: "First",
+ BlockDead: "Dead",
+}
+
+func (k BlockKind) String() string { return blockString[k] }
+
+const (
+ OpInvalid Op = iota
+
+ OpAMD64ADDSS
+ OpAMD64ADDSD
+ OpAMD64SUBSS
+ OpAMD64SUBSD
+ OpAMD64MULSS
+ OpAMD64MULSD
+ OpAMD64DIVSS
+ OpAMD64DIVSD
+ OpAMD64MOVSSload
+ OpAMD64MOVSDload
+ OpAMD64MOVSSconst
+ OpAMD64MOVSDconst
+ OpAMD64MOVSSloadidx4
+ OpAMD64MOVSDloadidx8
+ OpAMD64MOVSSstore
+ OpAMD64MOVSDstore
+ OpAMD64MOVSSstoreidx4
+ OpAMD64MOVSDstoreidx8
+ OpAMD64ADDQ
+ OpAMD64ADDL
+ OpAMD64ADDW
+ OpAMD64ADDB
+ OpAMD64ADDQconst
+ OpAMD64ADDLconst
+ OpAMD64ADDWconst
+ OpAMD64ADDBconst
+ OpAMD64SUBQ
+ OpAMD64SUBL
+ OpAMD64SUBW
+ OpAMD64SUBB
+ OpAMD64SUBQconst
+ OpAMD64SUBLconst
+ OpAMD64SUBWconst
+ OpAMD64SUBBconst
+ OpAMD64MULQ
+ OpAMD64MULL
+ OpAMD64MULW
+ OpAMD64MULB
+ OpAMD64MULQconst
+ OpAMD64MULLconst
+ OpAMD64MULWconst
+ OpAMD64MULBconst
+ OpAMD64HMULQ
+ OpAMD64HMULL
+ OpAMD64HMULW
+ OpAMD64HMULB
+ OpAMD64HMULQU
+ OpAMD64HMULLU
+ OpAMD64HMULWU
+ OpAMD64HMULBU
+ OpAMD64AVGQU
+ OpAMD64DIVQ
+ OpAMD64DIVL
+ OpAMD64DIVW
+ OpAMD64DIVQU
+ OpAMD64DIVLU
+ OpAMD64DIVWU
+ OpAMD64MODQ
+ OpAMD64MODL
+ OpAMD64MODW
+ OpAMD64MODQU
+ OpAMD64MODLU
+ OpAMD64MODWU
+ OpAMD64ANDQ
+ OpAMD64ANDL
+ OpAMD64ANDW
+ OpAMD64ANDB
+ OpAMD64ANDQconst
+ OpAMD64ANDLconst
+ OpAMD64ANDWconst
+ OpAMD64ANDBconst
+ OpAMD64ORQ
+ OpAMD64ORL
+ OpAMD64ORW
+ OpAMD64ORB
+ OpAMD64ORQconst
+ OpAMD64ORLconst
+ OpAMD64ORWconst
+ OpAMD64ORBconst
+ OpAMD64XORQ
+ OpAMD64XORL
+ OpAMD64XORW
+ OpAMD64XORB
+ OpAMD64XORQconst
+ OpAMD64XORLconst
+ OpAMD64XORWconst
+ OpAMD64XORBconst
+ OpAMD64CMPQ
+ OpAMD64CMPL
+ OpAMD64CMPW
+ OpAMD64CMPB
+ OpAMD64CMPQconst
+ OpAMD64CMPLconst
+ OpAMD64CMPWconst
+ OpAMD64CMPBconst
+ OpAMD64UCOMISS
+ OpAMD64UCOMISD
+ OpAMD64TESTQ
+ OpAMD64TESTL
+ OpAMD64TESTW
+ OpAMD64TESTB
+ OpAMD64TESTQconst
+ OpAMD64TESTLconst
+ OpAMD64TESTWconst
+ OpAMD64TESTBconst
+ OpAMD64SHLQ
+ OpAMD64SHLL
+ OpAMD64SHLW
+ OpAMD64SHLB
+ OpAMD64SHLQconst
+ OpAMD64SHLLconst
+ OpAMD64SHLWconst
+ OpAMD64SHLBconst
+ OpAMD64SHRQ
+ OpAMD64SHRL
+ OpAMD64SHRW
+ OpAMD64SHRB
+ OpAMD64SHRQconst
+ OpAMD64SHRLconst
+ OpAMD64SHRWconst
+ OpAMD64SHRBconst
+ OpAMD64SARQ
+ OpAMD64SARL
+ OpAMD64SARW
+ OpAMD64SARB
+ OpAMD64SARQconst
+ OpAMD64SARLconst
+ OpAMD64SARWconst
+ OpAMD64SARBconst
+ OpAMD64ROLQconst
+ OpAMD64ROLLconst
+ OpAMD64ROLWconst
+ OpAMD64ROLBconst
+ OpAMD64NEGQ
+ OpAMD64NEGL
+ OpAMD64NEGW
+ OpAMD64NEGB
+ OpAMD64NOTQ
+ OpAMD64NOTL
+ OpAMD64NOTW
+ OpAMD64NOTB
+ OpAMD64SQRTSD
+ OpAMD64SBBQcarrymask
+ OpAMD64SBBLcarrymask
+ OpAMD64SETEQ
+ OpAMD64SETNE
+ OpAMD64SETL
+ OpAMD64SETLE
+ OpAMD64SETG
+ OpAMD64SETGE
+ OpAMD64SETB
+ OpAMD64SETBE
+ OpAMD64SETA
+ OpAMD64SETAE
+ OpAMD64SETEQF
+ OpAMD64SETNEF
+ OpAMD64SETORD
+ OpAMD64SETNAN
+ OpAMD64SETGF
+ OpAMD64SETGEF
+ OpAMD64MOVBQSX
+ OpAMD64MOVBQZX
+ OpAMD64MOVWQSX
+ OpAMD64MOVWQZX
+ OpAMD64MOVLQSX
+ OpAMD64MOVLQZX
+ OpAMD64MOVBconst
+ OpAMD64MOVWconst
+ OpAMD64MOVLconst
+ OpAMD64MOVQconst
+ OpAMD64CVTTSD2SL
+ OpAMD64CVTTSD2SQ
+ OpAMD64CVTTSS2SL
+ OpAMD64CVTTSS2SQ
+ OpAMD64CVTSL2SS
+ OpAMD64CVTSL2SD
+ OpAMD64CVTSQ2SS
+ OpAMD64CVTSQ2SD
+ OpAMD64CVTSD2SS
+ OpAMD64CVTSS2SD
+ OpAMD64PXOR
+ OpAMD64LEAQ
+ OpAMD64LEAQ1
+ OpAMD64LEAQ2
+ OpAMD64LEAQ4
+ OpAMD64LEAQ8
+ OpAMD64MOVBload
+ OpAMD64MOVBQSXload
+ OpAMD64MOVBQZXload
+ OpAMD64MOVWload
+ OpAMD64MOVWQSXload
+ OpAMD64MOVWQZXload
+ OpAMD64MOVLload
+ OpAMD64MOVLQSXload
+ OpAMD64MOVLQZXload
+ OpAMD64MOVQload
+ OpAMD64MOVBstore
+ OpAMD64MOVWstore
+ OpAMD64MOVLstore
+ OpAMD64MOVQstore
+ OpAMD64MOVOload
+ OpAMD64MOVOstore
+ OpAMD64MOVBloadidx1
+ OpAMD64MOVWloadidx2
+ OpAMD64MOVLloadidx4
+ OpAMD64MOVQloadidx8
+ OpAMD64MOVBstoreidx1
+ OpAMD64MOVWstoreidx2
+ OpAMD64MOVLstoreidx4
+ OpAMD64MOVQstoreidx8
+ OpAMD64MOVBstoreconst
+ OpAMD64MOVWstoreconst
+ OpAMD64MOVLstoreconst
+ OpAMD64MOVQstoreconst
+ OpAMD64MOVBstoreconstidx1
+ OpAMD64MOVWstoreconstidx2
+ OpAMD64MOVLstoreconstidx4
+ OpAMD64MOVQstoreconstidx8
+ OpAMD64DUFFZERO
+ OpAMD64MOVOconst
+ OpAMD64REPSTOSQ
+ OpAMD64CALLstatic
+ OpAMD64CALLclosure
+ OpAMD64CALLdefer
+ OpAMD64CALLgo
+ OpAMD64CALLinter
+ OpAMD64DUFFCOPY
+ OpAMD64REPMOVSQ
+ OpAMD64InvertFlags
+ OpAMD64LoweredGetG
+ OpAMD64LoweredGetClosurePtr
+ OpAMD64LoweredNilCheck
+ OpAMD64MOVQconvert
+ OpAMD64FlagEQ
+ OpAMD64FlagLT_ULT
+ OpAMD64FlagLT_UGT
+ OpAMD64FlagGT_UGT
+ OpAMD64FlagGT_ULT
+
+ OpAdd8
+ OpAdd16
+ OpAdd32
+ OpAdd64
+ OpAddPtr
+ OpAdd32F
+ OpAdd64F
+ OpSub8
+ OpSub16
+ OpSub32
+ OpSub64
+ OpSubPtr
+ OpSub32F
+ OpSub64F
+ OpMul8
+ OpMul16
+ OpMul32
+ OpMul64
+ OpMul32F
+ OpMul64F
+ OpDiv32F
+ OpDiv64F
+ OpHmul8
+ OpHmul8u
+ OpHmul16
+ OpHmul16u
+ OpHmul32
+ OpHmul32u
+ OpHmul64
+ OpHmul64u
+ OpAvg64u
+ OpDiv8
+ OpDiv8u
+ OpDiv16
+ OpDiv16u
+ OpDiv32
+ OpDiv32u
+ OpDiv64
+ OpDiv64u
+ OpMod8
+ OpMod8u
+ OpMod16
+ OpMod16u
+ OpMod32
+ OpMod32u
+ OpMod64
+ OpMod64u
+ OpAnd8
+ OpAnd16
+ OpAnd32
+ OpAnd64
+ OpOr8
+ OpOr16
+ OpOr32
+ OpOr64
+ OpXor8
+ OpXor16
+ OpXor32
+ OpXor64
+ OpLsh8x8
+ OpLsh8x16
+ OpLsh8x32
+ OpLsh8x64
+ OpLsh16x8
+ OpLsh16x16
+ OpLsh16x32
+ OpLsh16x64
+ OpLsh32x8
+ OpLsh32x16
+ OpLsh32x32
+ OpLsh32x64
+ OpLsh64x8
+ OpLsh64x16
+ OpLsh64x32
+ OpLsh64x64
+ OpRsh8x8
+ OpRsh8x16
+ OpRsh8x32
+ OpRsh8x64
+ OpRsh16x8
+ OpRsh16x16
+ OpRsh16x32
+ OpRsh16x64
+ OpRsh32x8
+ OpRsh32x16
+ OpRsh32x32
+ OpRsh32x64
+ OpRsh64x8
+ OpRsh64x16
+ OpRsh64x32
+ OpRsh64x64
+ OpRsh8Ux8
+ OpRsh8Ux16
+ OpRsh8Ux32
+ OpRsh8Ux64
+ OpRsh16Ux8
+ OpRsh16Ux16
+ OpRsh16Ux32
+ OpRsh16Ux64
+ OpRsh32Ux8
+ OpRsh32Ux16
+ OpRsh32Ux32
+ OpRsh32Ux64
+ OpRsh64Ux8
+ OpRsh64Ux16
+ OpRsh64Ux32
+ OpRsh64Ux64
+ OpLrot8
+ OpLrot16
+ OpLrot32
+ OpLrot64
+ OpEq8
+ OpEq16
+ OpEq32
+ OpEq64
+ OpEqPtr
+ OpEqInter
+ OpEqSlice
+ OpEq32F
+ OpEq64F
+ OpNeq8
+ OpNeq16
+ OpNeq32
+ OpNeq64
+ OpNeqPtr
+ OpNeqInter
+ OpNeqSlice
+ OpNeq32F
+ OpNeq64F
+ OpLess8
+ OpLess8U
+ OpLess16
+ OpLess16U
+ OpLess32
+ OpLess32U
+ OpLess64
+ OpLess64U
+ OpLess32F
+ OpLess64F
+ OpLeq8
+ OpLeq8U
+ OpLeq16
+ OpLeq16U
+ OpLeq32
+ OpLeq32U
+ OpLeq64
+ OpLeq64U
+ OpLeq32F
+ OpLeq64F
+ OpGreater8
+ OpGreater8U
+ OpGreater16
+ OpGreater16U
+ OpGreater32
+ OpGreater32U
+ OpGreater64
+ OpGreater64U
+ OpGreater32F
+ OpGreater64F
+ OpGeq8
+ OpGeq8U
+ OpGeq16
+ OpGeq16U
+ OpGeq32
+ OpGeq32U
+ OpGeq64
+ OpGeq64U
+ OpGeq32F
+ OpGeq64F
+ OpNot
+ OpNeg8
+ OpNeg16
+ OpNeg32
+ OpNeg64
+ OpNeg32F
+ OpNeg64F
+ OpCom8
+ OpCom16
+ OpCom32
+ OpCom64
+ OpSqrt
+ OpPhi
+ OpCopy
+ OpConvert
+ OpConstBool
+ OpConstString
+ OpConstNil
+ OpConst8
+ OpConst16
+ OpConst32
+ OpConst64
+ OpConst32F
+ OpConst64F
+ OpConstInterface
+ OpConstSlice
+ OpInitMem
+ OpArg
+ OpAddr
+ OpSP
+ OpSB
+ OpFunc
+ OpLoad
+ OpStore
+ OpMove
+ OpZero
+ OpClosureCall
+ OpStaticCall
+ OpDeferCall
+ OpGoCall
+ OpInterCall
+ OpSignExt8to16
+ OpSignExt8to32
+ OpSignExt8to64
+ OpSignExt16to32
+ OpSignExt16to64
+ OpSignExt32to64
+ OpZeroExt8to16
+ OpZeroExt8to32
+ OpZeroExt8to64
+ OpZeroExt16to32
+ OpZeroExt16to64
+ OpZeroExt32to64
+ OpTrunc16to8
+ OpTrunc32to8
+ OpTrunc32to16
+ OpTrunc64to8
+ OpTrunc64to16
+ OpTrunc64to32
+ OpCvt32to32F
+ OpCvt32to64F
+ OpCvt64to32F
+ OpCvt64to64F
+ OpCvt32Fto32
+ OpCvt32Fto64
+ OpCvt64Fto32
+ OpCvt64Fto64
+ OpCvt32Fto64F
+ OpCvt64Fto32F
+ OpIsNonNil
+ OpIsInBounds
+ OpIsSliceInBounds
+ OpNilCheck
+ OpGetG
+ OpGetClosurePtr
+ OpArrayIndex
+ OpPtrIndex
+ OpOffPtr
+ OpSliceMake
+ OpSlicePtr
+ OpSliceLen
+ OpSliceCap
+ OpComplexMake
+ OpComplexReal
+ OpComplexImag
+ OpStringMake
+ OpStringPtr
+ OpStringLen
+ OpIMake
+ OpITab
+ OpIData
+ OpStructMake0
+ OpStructMake1
+ OpStructMake2
+ OpStructMake3
+ OpStructMake4
+ OpStructSelect
+ OpStoreReg
+ OpLoadReg
+ OpFwdRef
+ OpUnknown
+ OpVarDef
+ OpVarKill
+ OpVarLive
+)
+
+var opcodeTable = [...]opInfo{
+ {name: "OpInvalid"},
+
+ {
+ name: "ADDSS",
+ argLen: 2,
+ asm: x86.AADDSS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {1, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "ADDSD",
+ argLen: 2,
+ asm: x86.AADDSD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {1, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "SUBSS",
+ argLen: 2,
+ asm: x86.ASUBSS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 2147418112}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ {1, 2147418112}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ },
+ clobbers: 2147483648, // .X15
+ outputs: []regMask{
+ 2147418112, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ },
+ },
+ },
+ {
+ name: "SUBSD",
+ argLen: 2,
+ asm: x86.ASUBSD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 2147418112}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ {1, 2147418112}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ },
+ clobbers: 2147483648, // .X15
+ outputs: []regMask{
+ 2147418112, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ },
+ },
+ },
+ {
+ name: "MULSS",
+ argLen: 2,
+ asm: x86.AMULSS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {1, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "MULSD",
+ argLen: 2,
+ asm: x86.AMULSD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {1, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "DIVSS",
+ argLen: 2,
+ asm: x86.ADIVSS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 2147418112}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ {1, 2147418112}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ },
+ clobbers: 2147483648, // .X15
+ outputs: []regMask{
+ 2147418112, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ },
+ },
+ },
+ {
+ name: "DIVSD",
+ argLen: 2,
+ asm: x86.ADIVSD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 2147418112}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ {1, 2147418112}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ },
+ clobbers: 2147483648, // .X15
+ outputs: []regMask{
+ 2147418112, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14
+ },
+ },
+ },
+ {
+ name: "MOVSSload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVSS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "MOVSDload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVSD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "MOVSSconst",
+ auxType: auxFloat,
+ argLen: 0,
+ rematerializeable: true,
+ asm: x86.AMOVSS,
+ reg: regInfo{
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "MOVSDconst",
+ auxType: auxFloat,
+ argLen: 0,
+ rematerializeable: true,
+ asm: x86.AMOVSD,
+ reg: regInfo{
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "MOVSSloadidx4",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVSS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "MOVSDloadidx8",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVSD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "MOVSSstore",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVSS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVSDstore",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVSD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVSSstoreidx4",
+ auxType: auxSymOff,
+ argLen: 4,
+ asm: x86.AMOVSS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {2, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVSDstoreidx8",
+ auxType: auxSymOff,
+ argLen: 4,
+ asm: x86.AMOVSD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {2, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "ADDQ",
+ argLen: 2,
+ asm: x86.AADDQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ADDL",
+ argLen: 2,
+ asm: x86.AADDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ADDW",
+ argLen: 2,
+ asm: x86.AADDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ADDB",
+ argLen: 2,
+ asm: x86.AADDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ADDQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.AADDQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ADDLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.AADDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ADDWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.AADDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ADDBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.AADDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SUBQ",
+ argLen: 2,
+ asm: x86.ASUBQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SUBL",
+ argLen: 2,
+ asm: x86.ASUBL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SUBW",
+ argLen: 2,
+ asm: x86.ASUBL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SUBB",
+ argLen: 2,
+ asm: x86.ASUBL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SUBQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.ASUBQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SUBLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.ASUBL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SUBWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.ASUBL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SUBBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.ASUBL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MULQ",
+ argLen: 2,
+ asm: x86.AIMULQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MULL",
+ argLen: 2,
+ asm: x86.AIMULL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MULW",
+ argLen: 2,
+ asm: x86.AIMULW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MULB",
+ argLen: 2,
+ asm: x86.AIMULW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MULQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.AIMULQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MULLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.AIMULL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MULWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.AIMULW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MULBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.AIMULW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "HMULQ",
+ argLen: 2,
+ asm: x86.AIMULQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "HMULL",
+ argLen: 2,
+ asm: x86.AIMULL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "HMULW",
+ argLen: 2,
+ asm: x86.AIMULW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "HMULB",
+ argLen: 2,
+ asm: x86.AIMULB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "HMULQU",
+ argLen: 2,
+ asm: x86.AMULQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "HMULLU",
+ argLen: 2,
+ asm: x86.AMULL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "HMULWU",
+ argLen: 2,
+ asm: x86.AMULW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "HMULBU",
+ argLen: 2,
+ asm: x86.AMULB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "AVGQU",
+ argLen: 2,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "DIVQ",
+ argLen: 2,
+ asm: x86.AIDIVQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934596, // .DX .FLAGS
+ outputs: []regMask{
+ 1, // .AX
+ },
+ },
+ },
+ {
+ name: "DIVL",
+ argLen: 2,
+ asm: x86.AIDIVL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934596, // .DX .FLAGS
+ outputs: []regMask{
+ 1, // .AX
+ },
+ },
+ },
+ {
+ name: "DIVW",
+ argLen: 2,
+ asm: x86.AIDIVW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934596, // .DX .FLAGS
+ outputs: []regMask{
+ 1, // .AX
+ },
+ },
+ },
+ {
+ name: "DIVQU",
+ argLen: 2,
+ asm: x86.ADIVQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934596, // .DX .FLAGS
+ outputs: []regMask{
+ 1, // .AX
+ },
+ },
+ },
+ {
+ name: "DIVLU",
+ argLen: 2,
+ asm: x86.ADIVL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934596, // .DX .FLAGS
+ outputs: []regMask{
+ 1, // .AX
+ },
+ },
+ },
+ {
+ name: "DIVWU",
+ argLen: 2,
+ asm: x86.ADIVW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934596, // .DX .FLAGS
+ outputs: []regMask{
+ 1, // .AX
+ },
+ },
+ },
+ {
+ name: "MODQ",
+ argLen: 2,
+ asm: x86.AIDIVQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "MODL",
+ argLen: 2,
+ asm: x86.AIDIVL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "MODW",
+ argLen: 2,
+ asm: x86.AIDIVW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "MODQU",
+ argLen: 2,
+ asm: x86.ADIVQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "MODLU",
+ argLen: 2,
+ asm: x86.ADIVL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "MODWU",
+ argLen: 2,
+ asm: x86.ADIVW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 1}, // .AX
+ {1, 65531}, // .AX .CX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "ANDQ",
+ argLen: 2,
+ asm: x86.AANDQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ANDL",
+ argLen: 2,
+ asm: x86.AANDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ANDW",
+ argLen: 2,
+ asm: x86.AANDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ANDB",
+ argLen: 2,
+ asm: x86.AANDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ANDQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.AANDQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ANDLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.AANDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ANDWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.AANDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ANDBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.AANDL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ORQ",
+ argLen: 2,
+ asm: x86.AORQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ORL",
+ argLen: 2,
+ asm: x86.AORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ORW",
+ argLen: 2,
+ asm: x86.AORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ORB",
+ argLen: 2,
+ asm: x86.AORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ORQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.AORQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ORLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.AORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ORWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.AORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ORBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.AORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "XORQ",
+ argLen: 2,
+ asm: x86.AXORQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "XORL",
+ argLen: 2,
+ asm: x86.AXORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "XORW",
+ argLen: 2,
+ asm: x86.AXORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "XORB",
+ argLen: 2,
+ asm: x86.AXORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "XORQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.AXORQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "XORLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.AXORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "XORWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.AXORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "XORBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.AXORL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "CMPQ",
+ argLen: 2,
+ asm: x86.ACMPQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "CMPL",
+ argLen: 2,
+ asm: x86.ACMPL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "CMPW",
+ argLen: 2,
+ asm: x86.ACMPW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "CMPB",
+ argLen: 2,
+ asm: x86.ACMPB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "CMPQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.ACMPQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "CMPLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.ACMPL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "CMPWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.ACMPW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "CMPBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.ACMPB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "UCOMISS",
+ argLen: 2,
+ asm: x86.AUCOMISS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {1, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "UCOMISD",
+ argLen: 2,
+ asm: x86.AUCOMISD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {1, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "TESTQ",
+ argLen: 2,
+ asm: x86.ATESTQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "TESTL",
+ argLen: 2,
+ asm: x86.ATESTL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "TESTW",
+ argLen: 2,
+ asm: x86.ATESTW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "TESTB",
+ argLen: 2,
+ asm: x86.ATESTB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "TESTQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.ATESTQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "TESTLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.ATESTL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "TESTWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.ATESTW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "TESTBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.ATESTB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 8589934592, // .FLAGS
+ },
+ },
+ },
+ {
+ name: "SHLQ",
+ argLen: 2,
+ asm: x86.ASHLQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHLL",
+ argLen: 2,
+ asm: x86.ASHLL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHLW",
+ argLen: 2,
+ asm: x86.ASHLL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHLB",
+ argLen: 2,
+ asm: x86.ASHLL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHLQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.ASHLQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHLLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.ASHLL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHLWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.ASHLL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHLBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.ASHLL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHRQ",
+ argLen: 2,
+ asm: x86.ASHRQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHRL",
+ argLen: 2,
+ asm: x86.ASHRL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHRW",
+ argLen: 2,
+ asm: x86.ASHRW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHRB",
+ argLen: 2,
+ asm: x86.ASHRB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHRQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.ASHRQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHRLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.ASHRL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHRWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.ASHRW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SHRBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.ASHRB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SARQ",
+ argLen: 2,
+ asm: x86.ASARQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SARL",
+ argLen: 2,
+ asm: x86.ASARL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SARW",
+ argLen: 2,
+ asm: x86.ASARW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SARB",
+ argLen: 2,
+ asm: x86.ASARB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 2}, // .CX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65517, // .AX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SARQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.ASARQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SARLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.ASARL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SARWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.ASARW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SARBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.ASARB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ROLQconst",
+ auxType: auxInt64,
+ argLen: 1,
+ asm: x86.AROLQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ROLLconst",
+ auxType: auxInt32,
+ argLen: 1,
+ asm: x86.AROLL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ROLWconst",
+ auxType: auxInt16,
+ argLen: 1,
+ asm: x86.AROLW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "ROLBconst",
+ auxType: auxInt8,
+ argLen: 1,
+ asm: x86.AROLB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "NEGQ",
+ argLen: 1,
+ asm: x86.ANEGQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "NEGL",
+ argLen: 1,
+ asm: x86.ANEGL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "NEGW",
+ argLen: 1,
+ asm: x86.ANEGL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "NEGB",
+ argLen: 1,
+ asm: x86.ANEGL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "NOTQ",
+ argLen: 1,
+ asm: x86.ANOTQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "NOTL",
+ argLen: 1,
+ asm: x86.ANOTL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "NOTW",
+ argLen: 1,
+ asm: x86.ANOTL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "NOTB",
+ argLen: 1,
+ asm: x86.ANOTL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SQRTSD",
+ argLen: 1,
+ asm: x86.ASQRTSD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "SBBQcarrymask",
+ argLen: 1,
+ asm: x86.ASBBQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SBBLcarrymask",
+ argLen: 1,
+ asm: x86.ASBBL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETEQ",
+ argLen: 1,
+ asm: x86.ASETEQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETNE",
+ argLen: 1,
+ asm: x86.ASETNE,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETL",
+ argLen: 1,
+ asm: x86.ASETLT,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETLE",
+ argLen: 1,
+ asm: x86.ASETLE,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETG",
+ argLen: 1,
+ asm: x86.ASETGT,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETGE",
+ argLen: 1,
+ asm: x86.ASETGE,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETB",
+ argLen: 1,
+ asm: x86.ASETCS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETBE",
+ argLen: 1,
+ asm: x86.ASETLS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETA",
+ argLen: 1,
+ asm: x86.ASETHI,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETAE",
+ argLen: 1,
+ asm: x86.ASETCC,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETEQF",
+ argLen: 1,
+ asm: x86.ASETEQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 65518, // .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETNEF",
+ argLen: 1,
+ asm: x86.ASETNE,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ clobbers: 8589934593, // .AX .FLAGS
+ outputs: []regMask{
+ 65518, // .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETORD",
+ argLen: 1,
+ asm: x86.ASETPC,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETNAN",
+ argLen: 1,
+ asm: x86.ASETPS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETGF",
+ argLen: 1,
+ asm: x86.ASETHI,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "SETGEF",
+ argLen: 1,
+ asm: x86.ASETCC,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 8589934592}, // .FLAGS
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVBQSX",
+ argLen: 1,
+ asm: x86.AMOVBQSX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVBQZX",
+ argLen: 1,
+ asm: x86.AMOVBQZX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVWQSX",
+ argLen: 1,
+ asm: x86.AMOVWQSX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVWQZX",
+ argLen: 1,
+ asm: x86.AMOVWQZX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVLQSX",
+ argLen: 1,
+ asm: x86.AMOVLQSX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVLQZX",
+ argLen: 1,
+ asm: x86.AMOVLQZX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVBconst",
+ auxType: auxInt8,
+ argLen: 0,
+ rematerializeable: true,
+ asm: x86.AMOVB,
+ reg: regInfo{
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVWconst",
+ auxType: auxInt16,
+ argLen: 0,
+ rematerializeable: true,
+ asm: x86.AMOVW,
+ reg: regInfo{
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVLconst",
+ auxType: auxInt32,
+ argLen: 0,
+ rematerializeable: true,
+ asm: x86.AMOVL,
+ reg: regInfo{
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVQconst",
+ auxType: auxInt64,
+ argLen: 0,
+ rematerializeable: true,
+ asm: x86.AMOVQ,
+ reg: regInfo{
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "CVTTSD2SL",
+ argLen: 1,
+ asm: x86.ACVTTSD2SL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "CVTTSD2SQ",
+ argLen: 1,
+ asm: x86.ACVTTSD2SQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "CVTTSS2SL",
+ argLen: 1,
+ asm: x86.ACVTTSS2SL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "CVTTSS2SQ",
+ argLen: 1,
+ asm: x86.ACVTTSS2SQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "CVTSL2SS",
+ argLen: 1,
+ asm: x86.ACVTSL2SS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65519}, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "CVTSL2SD",
+ argLen: 1,
+ asm: x86.ACVTSL2SD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65519}, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "CVTSQ2SS",
+ argLen: 1,
+ asm: x86.ACVTSQ2SS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65519}, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "CVTSQ2SD",
+ argLen: 1,
+ asm: x86.ACVTSQ2SD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65519}, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "CVTSD2SS",
+ argLen: 1,
+ asm: x86.ACVTSD2SS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "CVTSS2SD",
+ argLen: 1,
+ asm: x86.ACVTSS2SD,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "PXOR",
+ argLen: 2,
+ asm: x86.APXOR,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {1, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "LEAQ",
+ auxType: auxSymOff,
+ argLen: 1,
+ rematerializeable: true,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "LEAQ1",
+ auxType: auxSymOff,
+ argLen: 2,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "LEAQ2",
+ auxType: auxSymOff,
+ argLen: 2,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "LEAQ4",
+ auxType: auxSymOff,
+ argLen: 2,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "LEAQ8",
+ auxType: auxSymOff,
+ argLen: 2,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVBload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVBLZX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVBQSXload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVBQSX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVBQZXload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVBQZX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVWload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVWLZX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVWQSXload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVWQSX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVWQZXload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVWQZX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVLload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVLQSXload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVLQSX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVLQZXload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVLQZX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVQload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVBstore",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVWstore",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVLstore",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVQstore",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVOload",
+ auxType: auxSymOff,
+ argLen: 2,
+ asm: x86.AMOVUPS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "MOVOstore",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVUPS,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 4294901760}, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVBloadidx1",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVBLZX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVWloadidx2",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVWLZX,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVLloadidx4",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVQloadidx8",
+ auxType: auxSymOff,
+ argLen: 3,
+ asm: x86.AMOVQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "MOVBstoreidx1",
+ auxType: auxSymOff,
+ argLen: 4,
+ asm: x86.AMOVB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {2, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVWstoreidx2",
+ auxType: auxSymOff,
+ argLen: 4,
+ asm: x86.AMOVW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {2, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVLstoreidx4",
+ auxType: auxSymOff,
+ argLen: 4,
+ asm: x86.AMOVL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {2, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVQstoreidx8",
+ auxType: auxSymOff,
+ argLen: 4,
+ asm: x86.AMOVQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {2, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVBstoreconst",
+ auxType: auxSymValAndOff,
+ argLen: 2,
+ asm: x86.AMOVB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVWstoreconst",
+ auxType: auxSymValAndOff,
+ argLen: 2,
+ asm: x86.AMOVW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVLstoreconst",
+ auxType: auxSymValAndOff,
+ argLen: 2,
+ asm: x86.AMOVL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVQstoreconst",
+ auxType: auxSymValAndOff,
+ argLen: 2,
+ asm: x86.AMOVQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVBstoreconstidx1",
+ auxType: auxSymValAndOff,
+ argLen: 3,
+ asm: x86.AMOVB,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVWstoreconstidx2",
+ auxType: auxSymValAndOff,
+ argLen: 3,
+ asm: x86.AMOVW,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVLstoreconstidx4",
+ auxType: auxSymValAndOff,
+ argLen: 3,
+ asm: x86.AMOVL,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "MOVQstoreconstidx8",
+ auxType: auxSymValAndOff,
+ argLen: 3,
+ asm: x86.AMOVQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ {0, 4295032831}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .SB
+ },
+ },
+ },
+ {
+ name: "DUFFZERO",
+ auxType: auxInt64,
+ argLen: 3,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 128}, // .DI
+ {1, 65536}, // .X0
+ },
+ clobbers: 8589934720, // .DI .FLAGS
+ },
+ },
+ {
+ name: "MOVOconst",
+ argLen: 0,
+ rematerializeable: true,
+ reg: regInfo{
+ outputs: []regMask{
+ 4294901760, // .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15
+ },
+ },
+ },
+ {
+ name: "REPSTOSQ",
+ argLen: 4,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 128}, // .DI
+ {1, 2}, // .CX
+ {2, 1}, // .AX
+ },
+ clobbers: 130, // .CX .DI
+ },
+ },
+ {
+ name: "CALLstatic",
+ auxType: auxSymOff,
+ argLen: 1,
+ reg: regInfo{
+ clobbers: 12884901871, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15 .FLAGS
+ },
+ },
+ {
+ name: "CALLclosure",
+ auxType: auxInt64,
+ argLen: 3,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 4}, // .DX
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 12884901871, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15 .FLAGS
+ },
+ },
+ {
+ name: "CALLdefer",
+ auxType: auxInt64,
+ argLen: 1,
+ reg: regInfo{
+ clobbers: 12884901871, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15 .FLAGS
+ },
+ },
+ {
+ name: "CALLgo",
+ auxType: auxInt64,
+ argLen: 1,
+ reg: regInfo{
+ clobbers: 12884901871, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15 .FLAGS
+ },
+ },
+ {
+ name: "CALLinter",
+ auxType: auxInt64,
+ argLen: 2,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65519}, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 12884901871, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15 .X0 .X1 .X2 .X3 .X4 .X5 .X6 .X7 .X8 .X9 .X10 .X11 .X12 .X13 .X14 .X15 .FLAGS
+ },
+ },
+ {
+ name: "DUFFCOPY",
+ auxType: auxInt64,
+ argLen: 3,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 128}, // .DI
+ {1, 64}, // .SI
+ },
+ clobbers: 8590000320, // .SI .DI .X0 .FLAGS
+ },
+ },
+ {
+ name: "REPMOVSQ",
+ argLen: 4,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 128}, // .DI
+ {1, 64}, // .SI
+ {2, 2}, // .CX
+ },
+ clobbers: 194, // .CX .SI .DI
+ },
+ },
+ {
+ name: "InvertFlags",
+ argLen: 1,
+ reg: regInfo{},
+ },
+ {
+ name: "LoweredGetG",
+ argLen: 1,
+ reg: regInfo{
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "LoweredGetClosurePtr",
+ argLen: 0,
+ reg: regInfo{
+ outputs: []regMask{
+ 4, // .DX
+ },
+ },
+ },
+ {
+ name: "LoweredNilCheck",
+ argLen: 2,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ clobbers: 8589934592, // .FLAGS
+ },
+ },
+ {
+ name: "MOVQconvert",
+ argLen: 2,
+ asm: x86.AMOVQ,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {0, 65535}, // .AX .CX .DX .BX .SP .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ outputs: []regMask{
+ 65519, // .AX .CX .DX .BX .BP .SI .DI .R8 .R9 .R10 .R11 .R12 .R13 .R14 .R15
+ },
+ },
+ },
+ {
+ name: "FlagEQ",
+ argLen: 0,
+ reg: regInfo{},
+ },
+ {
+ name: "FlagLT_ULT",
+ argLen: 0,
+ reg: regInfo{},
+ },
+ {
+ name: "FlagLT_UGT",
+ argLen: 0,
+ reg: regInfo{},
+ },
+ {
+ name: "FlagGT_UGT",
+ argLen: 0,
+ reg: regInfo{},
+ },
+ {
+ name: "FlagGT_ULT",
+ argLen: 0,
+ reg: regInfo{},
+ },
+
+ {
+ name: "Add8",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Add16",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Add32",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Add64",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "AddPtr",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Add32F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Add64F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Sub8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Sub16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Sub32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Sub64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "SubPtr",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Sub32F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Sub64F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Mul8",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Mul16",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Mul32",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Mul64",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Mul32F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Mul64F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Div32F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Div64F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Hmul8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Hmul8u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Hmul16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Hmul16u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Hmul32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Hmul32u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Hmul64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Hmul64u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Avg64u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Div8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Div8u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Div16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Div16u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Div32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Div32u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Div64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Div64u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Mod8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Mod8u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Mod16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Mod16u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Mod32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Mod32u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Mod64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Mod64u",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "And8",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "And16",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "And32",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "And64",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Or8",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Or16",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Or32",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Or64",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Xor8",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Xor16",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Xor32",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Xor64",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Lsh8x8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh8x16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh8x32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh8x64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh16x8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh16x16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh16x32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh16x64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh32x8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh32x16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh32x32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh32x64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh64x8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh64x16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh64x32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lsh64x64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh8x8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh8x16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh8x32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh8x64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh16x8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh16x16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh16x32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh16x64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh32x8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh32x16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh32x32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh32x64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh64x8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh64x16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh64x32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh64x64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh8Ux8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh8Ux16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh8Ux32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh8Ux64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh16Ux8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh16Ux16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh16Ux32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh16Ux64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh32Ux8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh32Ux16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh32Ux32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh32Ux64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh64Ux8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh64Ux16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh64Ux32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Rsh64Ux64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Lrot8",
+ auxType: auxInt64,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Lrot16",
+ auxType: auxInt64,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Lrot32",
+ auxType: auxInt64,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Lrot64",
+ auxType: auxInt64,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Eq8",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Eq16",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Eq32",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Eq64",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "EqPtr",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "EqInter",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "EqSlice",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Eq32F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Eq64F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Neq8",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Neq16",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Neq32",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "Neq64",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "NeqPtr",
+ argLen: 2,
+ commutative: true,
+ generic: true,
+ },
+ {
+ name: "NeqInter",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "NeqSlice",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Neq32F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Neq64F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Less8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Less8U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Less16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Less16U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Less32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Less32U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Less64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Less64U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Less32F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Less64F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Leq8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Leq8U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Leq16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Leq16U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Leq32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Leq32U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Leq64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Leq64U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Leq32F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Leq64F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Greater8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Greater8U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Greater16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Greater16U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Greater32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Greater32U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Greater64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Greater64U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Greater32F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Greater64F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Geq8",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Geq8U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Geq16",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Geq16U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Geq32",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Geq32U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Geq64",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Geq64U",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Geq32F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Geq64F",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Not",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Neg8",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Neg16",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Neg32",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Neg64",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Neg32F",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Neg64F",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Com8",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Com16",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Com32",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Com64",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Sqrt",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Phi",
+ argLen: -1,
+ generic: true,
+ },
+ {
+ name: "Copy",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Convert",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "ConstBool",
+ auxType: auxBool,
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "ConstString",
+ auxType: auxString,
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "ConstNil",
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "Const8",
+ auxType: auxInt8,
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "Const16",
+ auxType: auxInt16,
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "Const32",
+ auxType: auxInt32,
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "Const64",
+ auxType: auxInt64,
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "Const32F",
+ auxType: auxFloat,
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "Const64F",
+ auxType: auxFloat,
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "ConstInterface",
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "ConstSlice",
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "InitMem",
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "Arg",
+ auxType: auxSymOff,
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "Addr",
+ auxType: auxSym,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "SP",
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "SB",
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "Func",
+ auxType: auxSym,
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "Load",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "Store",
+ auxType: auxInt64,
+ argLen: 3,
+ generic: true,
+ },
+ {
+ name: "Move",
+ auxType: auxInt64,
+ argLen: 3,
+ generic: true,
+ },
+ {
+ name: "Zero",
+ auxType: auxInt64,
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "ClosureCall",
+ auxType: auxInt64,
+ argLen: 3,
+ generic: true,
+ },
+ {
+ name: "StaticCall",
+ auxType: auxSymOff,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "DeferCall",
+ auxType: auxInt64,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "GoCall",
+ auxType: auxInt64,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "InterCall",
+ auxType: auxInt64,
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "SignExt8to16",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "SignExt8to32",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "SignExt8to64",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "SignExt16to32",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "SignExt16to64",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "SignExt32to64",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "ZeroExt8to16",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "ZeroExt8to32",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "ZeroExt8to64",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "ZeroExt16to32",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "ZeroExt16to64",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "ZeroExt32to64",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Trunc16to8",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Trunc32to8",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Trunc32to16",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Trunc64to8",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Trunc64to16",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Trunc64to32",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Cvt32to32F",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Cvt32to64F",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Cvt64to32F",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Cvt64to64F",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Cvt32Fto32",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Cvt32Fto64",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Cvt64Fto32",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Cvt64Fto64",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Cvt32Fto64F",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "Cvt64Fto32F",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "IsNonNil",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "IsInBounds",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "IsSliceInBounds",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "NilCheck",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "GetG",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "GetClosurePtr",
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "ArrayIndex",
+ auxType: auxInt64,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "PtrIndex",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "OffPtr",
+ auxType: auxInt64,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "SliceMake",
+ argLen: 3,
+ generic: true,
+ },
+ {
+ name: "SlicePtr",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "SliceLen",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "SliceCap",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "ComplexMake",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "ComplexReal",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "ComplexImag",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "StringMake",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "StringPtr",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "StringLen",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "IMake",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "ITab",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "IData",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "StructMake0",
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "StructMake1",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "StructMake2",
+ argLen: 2,
+ generic: true,
+ },
+ {
+ name: "StructMake3",
+ argLen: 3,
+ generic: true,
+ },
+ {
+ name: "StructMake4",
+ argLen: 4,
+ generic: true,
+ },
+ {
+ name: "StructSelect",
+ auxType: auxInt64,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "StoreReg",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "LoadReg",
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "FwdRef",
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "Unknown",
+ argLen: 0,
+ generic: true,
+ },
+ {
+ name: "VarDef",
+ auxType: auxSym,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "VarKill",
+ auxType: auxSym,
+ argLen: 1,
+ generic: true,
+ },
+ {
+ name: "VarLive",
+ auxType: auxSym,
+ argLen: 1,
+ generic: true,
+ },
+}
+
+func (o Op) Asm() int { return opcodeTable[o].asm }
+func (o Op) String() string { return opcodeTable[o].name }
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// machine-independent optimization
+func opt(f *Func) {
+ applyRewrite(f, rewriteBlockgeneric, rewriteValuegeneric)
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "fmt"
+ "testing"
+)
+
+const (
+ blockCount = 1000
+ passCount = 15000
+)
+
+type passFunc func(*Func)
+
+func BenchmarkDSEPass(b *testing.B) { benchFnPass(b, dse, blockCount, genFunction) }
+func BenchmarkDSEPassBlock(b *testing.B) { benchFnBlock(b, dse, genFunction) }
+func BenchmarkCSEPass(b *testing.B) { benchFnPass(b, cse, blockCount, genFunction) }
+func BenchmarkCSEPassBlock(b *testing.B) { benchFnBlock(b, cse, genFunction) }
+func BenchmarkDeadcodePass(b *testing.B) { benchFnPass(b, deadcode, blockCount, genFunction) }
+func BenchmarkDeadcodePassBlock(b *testing.B) { benchFnBlock(b, deadcode, genFunction) }
+
+func multi(f *Func) {
+ cse(f)
+ dse(f)
+ deadcode(f)
+}
+func BenchmarkMultiPass(b *testing.B) { benchFnPass(b, multi, blockCount, genFunction) }
+func BenchmarkMultiPassBlock(b *testing.B) { benchFnBlock(b, multi, genFunction) }
+
+// benchFnPass runs passFunc b.N times across a single function.
+func benchFnPass(b *testing.B, fn passFunc, size int, bg blockGen) {
+ b.ReportAllocs()
+ c := NewConfig("amd64", DummyFrontend{b}, nil, true)
+ fun := Fun(c, "entry", bg(size)...)
+
+ CheckFunc(fun.f)
+ b.ResetTimer()
+ for i := 0; i < b.N; i++ {
+ fn(fun.f)
+ b.StopTimer()
+ CheckFunc(fun.f)
+ b.StartTimer()
+ }
+}
+
+// benchFnBlock runs passFunc passCount times across a function with b.N blocks.
+func benchFnBlock(b *testing.B, fn passFunc, bg blockGen) {
+ b.ReportAllocs()
+ c := NewConfig("amd64", DummyFrontend{b}, nil, true)
+ fun := Fun(c, "entry", bg(b.N)...)
+
+ CheckFunc(fun.f)
+ b.ResetTimer()
+ for i := 0; i < passCount; i++ {
+ fn(fun.f)
+ }
+ b.StopTimer()
+}
+
+func genFunction(size int) []bloc {
+ var blocs []bloc
+ elemType := &TypeImpl{Size_: 8, Name: "testtype"}
+ ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr", Elem_: elemType} // dummy for testing
+
+ valn := func(s string, m, n int) string { return fmt.Sprintf("%s%d-%d", s, m, n) }
+ blocs = append(blocs,
+ Bloc("entry",
+ Valu(valn("store", 0, 4), OpInitMem, TypeMem, 0, nil),
+ Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Goto(blockn(1)),
+ ),
+ )
+ for i := 1; i < size+1; i++ {
+ blocs = append(blocs, Bloc(blockn(i),
+ Valu(valn("v", i, 0), OpConstBool, TypeBool, 1, nil),
+ Valu(valn("addr", i, 1), OpAddr, ptrType, 0, nil, "sb"),
+ Valu(valn("addr", i, 2), OpAddr, ptrType, 0, nil, "sb"),
+ Valu(valn("addr", i, 3), OpAddr, ptrType, 0, nil, "sb"),
+ Valu(valn("zero", i, 1), OpZero, TypeMem, 8, nil, valn("addr", i, 3),
+ valn("store", i-1, 4)),
+ Valu(valn("store", i, 1), OpStore, TypeMem, 0, nil, valn("addr", i, 1),
+ valn("v", i, 0), valn("zero", i, 1)),
+ Valu(valn("store", i, 2), OpStore, TypeMem, 0, nil, valn("addr", i, 2),
+ valn("v", i, 0), valn("store", i, 1)),
+ Valu(valn("store", i, 3), OpStore, TypeMem, 0, nil, valn("addr", i, 1),
+ valn("v", i, 0), valn("store", i, 2)),
+ Valu(valn("store", i, 4), OpStore, TypeMem, 0, nil, valn("addr", i, 3),
+ valn("v", i, 0), valn("store", i, 3)),
+ Goto(blockn(i+1))))
+ }
+
+ blocs = append(blocs,
+ Bloc(blockn(size+1), Goto("exit")),
+ Bloc("exit", Exit("store0-4")),
+ )
+
+ return blocs
+}
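The generator above deliberately emits several stores to the same address (`addr-1` and `addr-3`) with no intervening loads, which is exactly the pattern the DSE benchmarks exercise. The core idea — a store is dead if a later store overwrites the same address before any use — can be sketched as a toy, outside the compiler (this is an illustration with hypothetical `store`/`dse` names, not the actual `dse` pass, which works over SSA memory chains):

```go
package main

import "fmt"

// store models a write of a value to an address, like the
// benchmark's chain of OpStore values with no loads in between.
type store struct {
	addr string
	val  int
}

// dse keeps only the last store to each address. With no loads,
// every earlier store to the same address is dead.
func dse(stores []store) []store {
	seen := make(map[string]bool)
	var live []store
	// Walk backwards: the last store to each address is the live one.
	for i := len(stores) - 1; i >= 0; i-- {
		if seen[stores[i].addr] {
			continue // dead: overwritten by a later store
		}
		seen[stores[i].addr] = true
		live = append([]store{stores[i]}, live...)
	}
	return live
}

func main() {
	// Mirrors one generated block: two stores to addr-1, one each
	// to addr-2 and addr-3; the first addr-1 store is dead.
	stores := []store{
		{"addr1", 1}, {"addr2", 1}, {"addr1", 2}, {"addr3", 1},
	}
	fmt.Println(dse(stores)) // [{addr2 1} {addr1 2} {addr3 1}]
}
```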
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// phielim eliminates redundant phi values from f.
+// A phi is redundant if its arguments are all equal. For
+// purposes of counting, ignore the phi itself. Both of
+// these phis are redundant:
+// v = phi(x,x,x)
+// v = phi(x,v,x,v)
+// We repeat this process to also catch situations like:
+// v = phi(x, phi(x, x), phi(x, v))
+// TODO: Can we also simplify cases like:
+// v = phi(v, w, x)
+// w = phi(v, w, x)
+// and would that be useful?
+func phielim(f *Func) {
+ for {
+ change := false
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ copyelimValue(v)
+ change = phielimValue(v) || change
+ }
+ }
+ if !change {
+ break
+ }
+ }
+}
+
+func phielimValue(v *Value) bool {
+ if v.Op != OpPhi {
+ return false
+ }
+
+ // If there are two distinct args of v which
+ // are not v itself, then the phi must remain.
+ // Otherwise, we can replace it with a copy.
+ var w *Value
+ for i, x := range v.Args {
+ if b := v.Block.Preds[i]; b.Kind == BlockFirst && b.Succs[1] == v.Block {
+ // This branch is never taken so we can just eliminate it.
+ continue
+ }
+ if x == v {
+ continue
+ }
+ if x == w {
+ continue
+ }
+ if w != nil {
+ return false
+ }
+ w = x
+ }
+
+ if w == nil {
+ // v references only itself. It must be in
+ // a dead code loop. Don't bother modifying it.
+ return false
+ }
+ v.Op = OpCopy
+ v.SetArgs1(w)
+ return true
+}
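The rule phielimValue implements — a phi can be replaced by a copy unless it has two distinct arguments other than itself — can be sketched standalone. This is a minimal illustration, not the compiler's code: `redundantArg` and its plain-int value IDs (with 0 as the "no argument" sentinel) are invented for the example.

```go
package main

import "fmt"

// redundantArg returns the single distinct argument of a phi whose own
// id is self, or 0 if the phi is not redundant. Arguments equal to the
// phi itself are ignored, mirroring phielimValue: as soon as two
// distinct non-self arguments are seen, the phi must remain.
// IDs are assumed to be nonzero.
func redundantArg(self int, args []int) int {
	w := 0
	for _, x := range args {
		if x == self || x == w {
			continue
		}
		if w != 0 {
			return 0 // two distinct non-self args: phi must remain
		}
		w = x
	}
	return w
}

func main() {
	fmt.Println(redundantArg(1, []int{2, 2, 2}))    // phi(x,x,x) -> 2
	fmt.Println(redundantArg(1, []int{2, 1, 2, 1})) // phi(x,v,x,v) -> 2
	fmt.Println(redundantArg(1, []int{2, 3, 2}))    // not redundant -> 0
}
```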
--- /dev/null
+package ssa
+
+// phiopt eliminates boolean Phis based on the previous if.
+//
+// Main use case is to transform:
+// x := false
+// if b {
+// x = true
+// }
+// into x = b.
+//
+// In SSA code this appears as
+//
+// b0
+// If b -> b1 b2
+// b1
+// Plain -> b2
+// b2
+// x = (OpPhi (ConstBool [true]) (ConstBool [false]))
+//
+// In this case we can replace x with a copy of b.
+func phiopt(f *Func) {
+ for _, b := range f.Blocks {
+ if len(b.Preds) != 2 || len(b.Values) == 0 {
+ continue
+ }
+
+ pb0, b0 := b, b.Preds[0]
+ for b0.Kind != BlockIf && len(b0.Preds) == 1 {
+ pb0, b0 = b0, b0.Preds[0]
+ }
+ if b0.Kind != BlockIf {
+ continue
+ }
+ pb1, b1 := b, b.Preds[1]
+ for b1.Kind != BlockIf && len(b1.Preds) == 1 {
+ pb1, b1 = b1, b1.Preds[0]
+ }
+ if b1 != b0 {
+ continue
+ }
+ // b0 is the if block giving the boolean value.
+
+ var reverse bool
+ if b0.Succs[0] == pb0 && b0.Succs[1] == pb1 {
+ reverse = false
+ } else if b0.Succs[0] == pb1 && b0.Succs[1] == pb0 {
+ reverse = true
+ } else {
+ b.Fatalf("invalid predecessors\n")
+ }
+
+ for _, v := range b.Values {
+ if v.Op != OpPhi || !v.Type.IsBoolean() || v.Args[0].Op != OpConstBool || v.Args[1].Op != OpConstBool {
+ continue
+ }
+
+ ok, isCopy := false, false
+ if v.Args[0].AuxInt == 1 && v.Args[1].AuxInt == 0 {
+ ok, isCopy = true, !reverse
+ } else if v.Args[0].AuxInt == 0 && v.Args[1].AuxInt == 1 {
+ ok, isCopy = true, reverse
+ }
+
+ // (Phi (ConstBool [x]) (ConstBool [x])) is already handled by opt / phielim.
+
+ if ok && isCopy {
+ if f.pass.debug > 0 {
+ f.Config.Warnl(int(b.Line), "converted OpPhi to OpCopy")
+ }
+ v.reset(OpCopy)
+ v.AddArg(b0.Control)
+ continue
+ }
+ if ok && !isCopy {
+ if f.pass.debug > 0 {
+ f.Config.Warnl(int(b.Line), "converted OpPhi to OpNot")
+ }
+ v.reset(OpNot)
+ v.AddArg(b0.Control)
+ continue
+ }
+ }
+ }
+
+}
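The source-level identity phiopt exploits can be checked directly: the branchy assignment pattern computes exactly `b` (or `!b` in the reversed case). A small sketch, with `branchy`/`branchyNot` as made-up names for the two patterns:

```go
package main

import "fmt"

// branchy is the pattern phiopt targets: a boolean phi of two constant
// bools selected by a preceding if. Semantically it is just b.
func branchy(b bool) bool {
	x := false
	if b {
		x = true
	}
	return x
}

// branchyNot is the reversed pattern, which phiopt rewrites to OpNot.
func branchyNot(b bool) bool {
	x := true
	if b {
		x = false
	}
	return x
}

func main() {
	for _, b := range []bool{false, true} {
		fmt.Println(branchy(b) == b, branchyNot(b) == !b) // true true
	}
}
```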
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "bytes"
+ "fmt"
+ "io"
+)
+
+func printFunc(f *Func) {
+ f.Logf("%s", f)
+}
+
+func (f *Func) String() string {
+ var buf bytes.Buffer
+ p := stringFuncPrinter{w: &buf}
+ fprintFunc(p, f)
+ return buf.String()
+}
+
+type funcPrinter interface {
+ header(f *Func)
+ startBlock(b *Block, reachable bool)
+ endBlock(b *Block)
+ value(v *Value, live bool)
+ startDepCycle()
+ endDepCycle()
+ named(n LocalSlot, vals []*Value)
+}
+
+type stringFuncPrinter struct {
+ w io.Writer
+}
+
+func (p stringFuncPrinter) header(f *Func) {
+ fmt.Fprint(p.w, f.Name)
+ fmt.Fprint(p.w, " ")
+ fmt.Fprintln(p.w, f.Type)
+}
+
+func (p stringFuncPrinter) startBlock(b *Block, reachable bool) {
+ fmt.Fprintf(p.w, " b%d:", b.ID)
+ if len(b.Preds) > 0 {
+ io.WriteString(p.w, " <-")
+ for _, pred := range b.Preds {
+ fmt.Fprintf(p.w, " b%d", pred.ID)
+ }
+ }
+ if !reachable {
+ fmt.Fprint(p.w, " DEAD")
+ }
+ io.WriteString(p.w, "\n")
+}
+
+func (p stringFuncPrinter) endBlock(b *Block) {
+ fmt.Fprintln(p.w, " "+b.LongString())
+}
+
+func (p stringFuncPrinter) value(v *Value, live bool) {
+ fmt.Fprint(p.w, " ")
+ //fmt.Fprint(p.w, v.Block.Func.Config.fe.Line(v.Line))
+ //fmt.Fprint(p.w, ": ")
+ fmt.Fprint(p.w, v.LongString())
+ if !live {
+ fmt.Fprint(p.w, " DEAD")
+ }
+ fmt.Fprintln(p.w)
+}
+
+func (p stringFuncPrinter) startDepCycle() {
+ fmt.Fprintln(p.w, "dependency cycle!")
+}
+
+func (p stringFuncPrinter) endDepCycle() {}
+
+func (p stringFuncPrinter) named(n LocalSlot, vals []*Value) {
+ fmt.Fprintf(p.w, "name %s: %v\n", n.Name(), vals)
+}
+
+func fprintFunc(p funcPrinter, f *Func) {
+ reachable, live := findlive(f)
+ p.header(f)
+ printed := make([]bool, f.NumValues())
+ for _, b := range f.Blocks {
+ p.startBlock(b, reachable[b.ID])
+
+ if f.scheduled {
+ // Order of Values has been decided - print in that order.
+ for _, v := range b.Values {
+ p.value(v, live[v.ID])
+ printed[v.ID] = true
+ }
+ p.endBlock(b)
+ continue
+ }
+
+ // print phis first since all value cycles contain a phi
+ n := 0
+ for _, v := range b.Values {
+ if v.Op != OpPhi {
+ continue
+ }
+ p.value(v, live[v.ID])
+ printed[v.ID] = true
+ n++
+ }
+
+ // print rest of values in dependency order
+ for n < len(b.Values) {
+ m := n
+ outer:
+ for _, v := range b.Values {
+ if printed[v.ID] {
+ continue
+ }
+ for _, w := range v.Args {
+ // w == nil shouldn't happen, but if it does,
+ // don't panic; we'll get a better diagnosis later.
+ if w != nil && w.Block == b && !printed[w.ID] {
+ continue outer
+ }
+ }
+ p.value(v, live[v.ID])
+ printed[v.ID] = true
+ n++
+ }
+ if m == n {
+ p.startDepCycle()
+ for _, v := range b.Values {
+ if printed[v.ID] {
+ continue
+ }
+ p.value(v, live[v.ID])
+ printed[v.ID] = true
+ n++
+ }
+ p.endDepCycle()
+ }
+ }
+
+ p.endBlock(b)
+ }
+ for name, vals := range f.NamedValues {
+ p.named(name, vals)
+ }
+}
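The dependency-order loop in fprintFunc — repeated passes that emit a value once all of its in-block arguments have been emitted, flushing whatever remains when a pass makes no progress — can be sketched in isolation. The function name `depOrder` and the map-of-args representation are invented for illustration:

```go
package main

import "fmt"

// depOrder emits node ids so that every node appears after its in-block
// arguments, making repeated passes like fprintFunc does. If a full pass
// makes no progress, the remaining nodes form a dependency cycle and are
// flushed in their original order.
func depOrder(args map[int][]int, ids []int) (order []int, cycle bool) {
	printed := map[int]bool{}
	n := 0
	for n < len(ids) {
		m := n
	outer:
		for _, v := range ids {
			if printed[v] {
				continue
			}
			for _, w := range args[v] {
				if !printed[w] {
					continue outer // an argument is not yet emitted
				}
			}
			order = append(order, v)
			printed[v] = true
			n++
		}
		if m == n { // no progress: cycle among the rest
			cycle = true
			for _, v := range ids {
				if !printed[v] {
					order = append(order, v)
					printed[v] = true
					n++
				}
			}
		}
	}
	return order, cycle
}

func main() {
	// 3 depends on 2, which depends on 1.
	fmt.Println(depOrder(map[int][]int{3: {2}, 2: {1}}, []int{3, 2, 1})) // [1 2 3] false
}
```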
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// rangeMask represents the possible relations between a pair of variables.
+type rangeMask uint
+
+const (
+ lt rangeMask = 1 << iota
+ eq
+ gt
+)
+
+// typeMask represents the universe of a variable pair in which
+// a set of relations is known.
+// For example, information learned for unsigned pairs cannot
+// be transferred to signed pairs because the same bit representation
+// can mean something else.
+type typeMask uint
+
+const (
+ signed typeMask = 1 << iota
+ unsigned
+ pointer
+)
+
+type typeRange struct {
+ t typeMask
+ r rangeMask
+}
+
+type control struct {
+ tm typeMask
+ a0, a1 ID
+}
+
+var (
+ reverseBits = [...]rangeMask{0, 4, 2, 6, 1, 5, 3, 7}
+
+	// typeRangeTable maps a comparison op to what we learn when
+	// its positive branch is taken. For example:
+	//	OpLess8: {signed, lt},
+	//	v1 = (OpLess8 v2 v3).
+	// If the v1 branch is taken, then we learn that the rangeMask
+	// can be at most lt.
+ typeRangeTable = map[Op]typeRange{
+ OpEq8: {signed | unsigned, eq},
+ OpEq16: {signed | unsigned, eq},
+ OpEq32: {signed | unsigned, eq},
+ OpEq64: {signed | unsigned, eq},
+ OpEqPtr: {pointer, eq},
+
+ OpNeq8: {signed | unsigned, lt | gt},
+ OpNeq16: {signed | unsigned, lt | gt},
+ OpNeq32: {signed | unsigned, lt | gt},
+ OpNeq64: {signed | unsigned, lt | gt},
+ OpNeqPtr: {pointer, lt | gt},
+
+ OpLess8: {signed, lt},
+ OpLess8U: {unsigned, lt},
+ OpLess16: {signed, lt},
+ OpLess16U: {unsigned, lt},
+ OpLess32: {signed, lt},
+ OpLess32U: {unsigned, lt},
+ OpLess64: {signed, lt},
+ OpLess64U: {unsigned, lt},
+
+ OpLeq8: {signed, lt | eq},
+ OpLeq8U: {unsigned, lt | eq},
+ OpLeq16: {signed, lt | eq},
+ OpLeq16U: {unsigned, lt | eq},
+ OpLeq32: {signed, lt | eq},
+ OpLeq32U: {unsigned, lt | eq},
+ OpLeq64: {signed, lt | eq},
+ OpLeq64U: {unsigned, lt | eq},
+
+ OpGeq8: {signed, eq | gt},
+ OpGeq8U: {unsigned, eq | gt},
+ OpGeq16: {signed, eq | gt},
+ OpGeq16U: {unsigned, eq | gt},
+ OpGeq32: {signed, eq | gt},
+ OpGeq32U: {unsigned, eq | gt},
+ OpGeq64: {signed, eq | gt},
+ OpGeq64U: {unsigned, eq | gt},
+
+ OpGreater8: {signed, gt},
+ OpGreater8U: {unsigned, gt},
+ OpGreater16: {signed, gt},
+ OpGreater16U: {unsigned, gt},
+ OpGreater32: {signed, gt},
+ OpGreater32U: {unsigned, gt},
+ OpGreater64: {signed, gt},
+ OpGreater64U: {unsigned, gt},
+
+		// TODO: OpIsInBounds actually tests 0 <= a < b. This means
+ // that the positive branch learns signed/LT and unsigned/LT
+ // but the negative branch only learns unsigned/GE.
+ OpIsInBounds: {unsigned, lt},
+ OpIsSliceInBounds: {unsigned, lt | eq},
+ }
+)
+
+// prove removes redundant BlockIf controls that can be inferred in a straight line.
+//
+// By far the most common redundant controls are generated by bounds checks.
+// For example, for the code:
+//
+// a[i] = 4
+// foo(a[i])
+//
+// The compiler will generate the following code:
+//
+// if i >= len(a) {
+// panic("not in bounds")
+// }
+// a[i] = 4
+// if i >= len(a) {
+// panic("not in bounds")
+// }
+// foo(a[i])
+//
+// The second comparison i >= len(a) is clearly redundant because if the
+// else branch of the first comparison is executed, we already know that i < len(a).
+// The code for the second panic can be removed.
+func prove(f *Func) {
+ idom := dominators(f)
+ sdom := newSparseTree(f, idom)
+
+ // current node state
+ type walkState int
+ const (
+ descend walkState = iota
+ simplify
+ )
+ // work maintains the DFS stack.
+ type bp struct {
+ block *Block // current handled block
+ state walkState // what's to do
+ saved []typeRange // save previous map entries modified by node
+ }
+ work := make([]bp, 0, 256)
+ work = append(work, bp{
+ block: f.Entry,
+ state: descend,
+ })
+
+	// mask keeps track of restrictions for each pair of values in
+	// the dominators of the current node.
+ // Invariant: a0.ID <= a1.ID
+ // For example {unsigned, a0, a1} -> eq|gt means that from
+ // predecessors we know that a0 must be greater or equal to
+ // a1.
+ mask := make(map[control]rangeMask)
+
+ // DFS on the dominator tree.
+ for len(work) > 0 {
+ node := work[len(work)-1]
+ work = work[:len(work)-1]
+
+ switch node.state {
+ case descend:
+ parent := idom[node.block.ID]
+ tr := getRestrict(sdom, parent, node.block)
+ saved := updateRestrictions(mask, parent, tr)
+
+ work = append(work, bp{
+ block: node.block,
+ state: simplify,
+ saved: saved,
+ })
+
+ for s := sdom.Child(node.block); s != nil; s = sdom.Sibling(s) {
+ work = append(work, bp{
+ block: s,
+ state: descend,
+ })
+ }
+
+ case simplify:
+ simplifyBlock(mask, node.block)
+ restoreRestrictions(mask, idom[node.block.ID], node.saved)
+ }
+ }
+}
+
+// getRestrict returns the range restrictions added by p
+// when reaching b. p is the immediate dominator of b.
+func getRestrict(sdom sparseTree, p *Block, b *Block) typeRange {
+ if p == nil || p.Kind != BlockIf {
+ return typeRange{}
+ }
+ tr, has := typeRangeTable[p.Control.Op]
+ if !has {
+ return typeRange{}
+ }
+ // If p and p.Succs[0] are dominators it means that every path
+ // from entry to b passes through p and p.Succs[0]. We care that
+ // no path from entry to b passes through p.Succs[1]. If p.Succs[0]
+ // has one predecessor then (apart from the degenerate case),
+ // there is no path from entry that can reach b through p.Succs[1].
+ // TODO: how about p->yes->b->yes, i.e. a loop in yes.
+ if sdom.isAncestorEq(p.Succs[0], b) && len(p.Succs[0].Preds) == 1 {
+ return tr
+ } else if sdom.isAncestorEq(p.Succs[1], b) && len(p.Succs[1].Preds) == 1 {
+ tr.r = (lt | eq | gt) ^ tr.r
+ return tr
+ }
+ return typeRange{}
+}
+
+// updateRestrictions updates restrictions from the previous block (p) based on tr.
+// Normally tr is calculated by getRestrict.
+func updateRestrictions(mask map[control]rangeMask, p *Block, tr typeRange) []typeRange {
+ if tr.t == 0 {
+ return nil
+ }
+
+ // p modifies the restrictions for (a0, a1).
+ // save and return the previous state.
+ a0 := p.Control.Args[0]
+ a1 := p.Control.Args[1]
+ if a0.ID > a1.ID {
+ tr.r = reverseBits[tr.r]
+ a0, a1 = a1, a0
+ }
+
+ saved := make([]typeRange, 0, 2)
+ for t := typeMask(1); t <= tr.t; t <<= 1 {
+ if t&tr.t == 0 {
+ continue
+ }
+
+ i := control{t, a0.ID, a1.ID}
+ oldRange, ok := mask[i]
+ if !ok {
+ if a1 != a0 {
+ oldRange = lt | eq | gt
+ } else { // sometimes happens after cse
+ oldRange = eq
+ }
+ }
+ // if i was not already in the map we save the full range
+ // so that when we restore it we properly keep track of it.
+ saved = append(saved, typeRange{t, oldRange})
+ // mask[i] contains the possible relations between a0 and a1.
+ // When we branched from parent we learned that the possible
+ // relations cannot be more than tr.r. We compute the new set of
+		// relations as the intersection between the old and the new set.
+ mask[i] = oldRange & tr.r
+ }
+ return saved
+}
+
+func restoreRestrictions(mask map[control]rangeMask, p *Block, saved []typeRange) {
+ if p == nil || p.Kind != BlockIf || len(saved) == 0 {
+ return
+ }
+
+ a0 := p.Control.Args[0].ID
+ a1 := p.Control.Args[1].ID
+ if a0 > a1 {
+ a0, a1 = a1, a0
+ }
+
+ for _, tr := range saved {
+ i := control{tr.t, a0, a1}
+ if tr.r != lt|eq|gt {
+ mask[i] = tr.r
+ } else {
+ delete(mask, i)
+ }
+ }
+}
+
+// simplifyBlock simplifies b, given the restrictions in mask.
+func simplifyBlock(mask map[control]rangeMask, b *Block) {
+ if b.Kind != BlockIf {
+ return
+ }
+
+ tr, has := typeRangeTable[b.Control.Op]
+ if !has {
+ return
+ }
+
+ succ := -1
+ a0 := b.Control.Args[0].ID
+ a1 := b.Control.Args[1].ID
+ if a0 > a1 {
+ tr.r = reverseBits[tr.r]
+ a0, a1 = a1, a0
+ }
+
+ for t := typeMask(1); t <= tr.t; t <<= 1 {
+ if t&tr.t == 0 {
+ continue
+ }
+
+		// tr.r represents the cases in which the positive branch is taken.
+		// m.r represents the cases still possible given previous relations.
+		// If the set of possible relations m.r is a subset of the relations
+		// needed to take the positive (or negative) branch, then that branch
+		// will always be taken.
+		// As a shortcut, if m.r == 0 then this block is dead code.
+ i := control{t, a0, a1}
+ m := mask[i]
+ if m != 0 && tr.r&m == m {
+ if b.Func.pass.debug > 0 {
+ b.Func.Config.Warnl(int(b.Line), "Proved %s", b.Control.Op)
+ }
+ b.Logf("proved positive branch of %s, block %s in %s\n", b.Control, b, b.Func.Name)
+ succ = 0
+ break
+ }
+ if m != 0 && ((lt|eq|gt)^tr.r)&m == m {
+ if b.Func.pass.debug > 0 {
+ b.Func.Config.Warnl(int(b.Line), "Disproved %s", b.Control.Op)
+ }
+ b.Logf("proved negative branch of %s, block %s in %s\n", b.Control, b, b.Func.Name)
+ succ = 1
+ break
+ }
+ }
+
+ if succ == -1 {
+ // HACK: If the first argument of IsInBounds or IsSliceInBounds
+		// is a constant and we already know that the constant is less than
+		// (or equal to) the upper bound, then this is proven.
+		// Most useful in cases such as:
+ // if len(a) <= 1 { return }
+ // do something with a[1]
+ c := b.Control
+ if (c.Op == OpIsInBounds || c.Op == OpIsSliceInBounds) &&
+ c.Args[0].Op == OpConst64 && c.Args[0].AuxInt >= 0 {
+ m := mask[control{signed, a0, a1}]
+ if m != 0 && tr.r&m == m {
+ if b.Func.pass.debug > 0 {
+ b.Func.Config.Warnl(int(b.Line), "Proved constant %s", c.Op)
+ }
+ succ = 0
+ }
+ }
+ }
+
+ if succ != -1 {
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0], b.Succs[1] = b.Succs[succ], b.Succs[1-succ]
+ }
+}
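The core of prove is the mask intersection in updateRestrictions followed by the subset test in simplifyBlock. A standalone sketch of the bounds-check example from the prove doc comment, using the same rangeMask constants (everything else here is illustrative, not compiler code):

```go
package main

import "fmt"

type rangeMask uint

const (
	lt rangeMask = 1 << iota
	eq
	gt
)

func main() {
	// Initially nothing is known about the pair (i, len(a)).
	m := lt | eq | gt
	// First bounds check: the panic branch tests i >= len(a), i.e. eq|gt.
	// On the non-panic path we intersect with the negation, leaving lt.
	m &= (lt | eq | gt) ^ (eq | gt)
	// The second, identical check needs eq|gt on its positive branch.
	need := eq | gt
	// m is nonzero and contained in the negation of need, so the
	// negative branch is always taken: the second check is disproved.
	fmt.Println(m == lt, m != 0 && ((lt|eq|gt)^need)&m == m) // true true
}
```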
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Register allocation.
+//
+// We use a version of a linear scan register allocator. We treat the
+// whole function as a single long basic block and run through
+// it using a greedy register allocator. Then all merge edges
+// (those targeting a block with len(Preds)>1) are processed to
+// shuffle data into the place that the target of the edge expects.
+//
+// The greedy allocator moves values into registers just before they
+// are used, spills registers only when necessary, and spills the
+// value whose next use is farthest in the future.
+//
+// The register allocator requires that a block is not scheduled until
+// at least one of its predecessors has been scheduled. The most recent
+// such predecessor provides the starting register state for a block.
+//
+// It also requires that there are no critical edges (critical =
+// comes from a block with >1 successor and goes to a block with >1
+// predecessor). This makes it easy to add fixup code on merge edges -
+// the source of a merge edge has only one successor, so we can add
+// fixup code to the end of that block.
+
+// Spilling
+//
+// For every value, we generate a spill immediately after the value itself.
+// x = Op y z : AX
+// x2 = StoreReg x
+// While AX still holds x, any uses of x will use that value. When AX is needed
+// for another value, we simply reuse AX. Spill code has already been generated
+// so there is no code generated at "spill" time. When x is referenced
+// subsequently, we issue a load to restore x to a register using x2 as
+// its argument:
+// x3 = Restore x2 : CX
+// x3 can then be used wherever x is referenced again.
+// If the spill (x2) is never used, it will be removed at the end of regalloc.
+//
+// Phi values are special, as always. We define two kinds of phis, those
+// where the merge happens in a register (a "register" phi) and those where
+// the merge happens in a stack location (a "stack" phi).
+//
+// A register phi must have the phi and all of its inputs allocated to the
+// same register. Register phis are spilled similarly to regular ops:
+// b1: y = ... : AX b2: z = ... : AX
+// goto b3 goto b3
+// b3: x = phi(y, z) : AX
+// x2 = StoreReg x
+//
+// A stack phi must have the phi and all of its inputs allocated to the same
+// stack location. Stack phis start out life already spilled - each phi
+// input must be a store (using StoreReg) at the end of the corresponding
+// predecessor block.
+// b1: y = ... : AX b2: z = ... : BX
+// y2 = StoreReg y z2 = StoreReg z
+// goto b3 goto b3
+// b3: x = phi(y2, z2)
+// The stack allocator knows that StoreReg args of stack-allocated phis
+// must be allocated to the same stack slot as the phi that uses them.
+// x is now a spilled value and a restore must appear before its first use.
+
+// TODO
+
+// Use an affinity graph to mark two values which should use the
+// same register. This affinity graph will be used to prefer certain
+// registers for allocation. This affinity helps eliminate moves that
+// are required for phi implementations and helps generate allocations
+// for 2-register architectures.
+
+// Note: regalloc generates a not-quite-SSA output. If we have:
+//
+// b1: x = ... : AX
+// x2 = StoreReg x
+// ... AX gets reused for something else ...
+// if ... goto b3 else b4
+//
+// b3: x3 = LoadReg x2 : BX b4: x4 = LoadReg x2 : CX
+// ... use x3 ... ... use x4 ...
+//
+// b2: ... use x3 ...
+//
+// If b3 is the primary predecessor of b2, then we use x3 in b2 and
+// add a x4:CX->BX copy at the end of b4.
+// But the definition of x3 doesn't dominate b2. We should really
+// insert a dummy phi at the start of b2 (x5=phi(x3,x4):BX) to keep
+// SSA form. For now, we ignore this problem as remaining in strict
+// SSA form isn't needed after regalloc. We'll just leave the use
+// of x3 not dominated by the definition of x3, and the CX->BX copy
+// will have no use (so don't run deadcode after regalloc!).
+// TODO: maybe we should introduce these extra phis?
+
+package ssa
+
+import (
+ "cmd/internal/obj"
+ "fmt"
+ "unsafe"
+)
+
+const regDebug = false // TODO: compiler flag
+const logSpills = false
+
+// regalloc performs register allocation on f. It sets f.RegAlloc
+// to the resulting allocation.
+func regalloc(f *Func) {
+ var s regAllocState
+ s.init(f)
+ s.regalloc(f)
+}
+
+type register uint8
+
+const noRegister register = 255
+
+type regMask uint64
+
+func (m regMask) String() string {
+ s := ""
+ for r := register(0); r < numRegs; r++ {
+ if m>>r&1 == 0 {
+ continue
+ }
+ if s != "" {
+ s += " "
+ }
+ s += fmt.Sprintf("r%d", r)
+ }
+ return s
+}
+
+// TODO: make arch-dependent
+var numRegs register = 64
+
+var registers = [...]Register{
+ Register{0, "AX"},
+ Register{1, "CX"},
+ Register{2, "DX"},
+ Register{3, "BX"},
+ Register{4, "SP"},
+ Register{5, "BP"},
+ Register{6, "SI"},
+ Register{7, "DI"},
+ Register{8, "R8"},
+ Register{9, "R9"},
+ Register{10, "R10"},
+ Register{11, "R11"},
+ Register{12, "R12"},
+ Register{13, "R13"},
+ Register{14, "R14"},
+ Register{15, "R15"},
+ Register{16, "X0"},
+ Register{17, "X1"},
+ Register{18, "X2"},
+ Register{19, "X3"},
+ Register{20, "X4"},
+ Register{21, "X5"},
+ Register{22, "X6"},
+ Register{23, "X7"},
+ Register{24, "X8"},
+ Register{25, "X9"},
+ Register{26, "X10"},
+ Register{27, "X11"},
+ Register{28, "X12"},
+ Register{29, "X13"},
+ Register{30, "X14"},
+ Register{31, "X15"},
+ Register{32, "SB"}, // pseudo-register for global base pointer (aka %rip)
+
+ // TODO: make arch-dependent
+}
+
+// countRegs returns the number of set bits in the register mask.
+func countRegs(r regMask) int {
+ n := 0
+ for r != 0 {
+ n += int(r & 1)
+ r >>= 1
+ }
+ return n
+}
+
+// pickReg picks an arbitrary register from the register mask.
+func pickReg(r regMask) register {
+ // pick the lowest one
+ if r == 0 {
+ panic("can't pick a register from an empty set")
+ }
+ for i := register(0); ; i++ {
+ if r&1 != 0 {
+ return i
+ }
+ r >>= 1
+ }
+}
+
+type use struct {
+ dist int32 // distance from start of the block to a use of a value
+ next *use // linked list of uses of a value in nondecreasing dist order
+}
+
+type valState struct {
+ regs regMask // the set of registers holding a Value (usually just one)
+ uses *use // list of uses in this block
+ spill *Value // spilled copy of the Value
+ spillUsed bool
+	needReg bool // cached value of !v.Type.IsMemory() && !v.Type.IsVoid() && !v.Type.IsFlags()
+ rematerializeable bool // cached value of v.rematerializeable()
+ desired register // register we want value to be in, if any
+ avoid regMask // registers to avoid if we can
+}
+
+type regState struct {
+ v *Value // Original (preregalloc) Value stored in this register.
+ c *Value // A Value equal to v which is currently in a register. Might be v or a copy of it.
+ // If a register is unused, v==c==nil
+}
+
+type regAllocState struct {
+ f *Func
+
+ // for each block, its primary predecessor.
+ // A predecessor of b is primary if it is the closest
+ // predecessor that appears before b in the layout order.
+ // We record the index in the Preds list where the primary predecessor sits.
+ primary []int32
+
+ // live values at the end of each block. live[b.ID] is a list of value IDs
+ // which are live at the end of b, together with a count of how many instructions
+ // forward to the next use.
+ live [][]liveInfo
+
+ // current state of each (preregalloc) Value
+ values []valState
+
+ // For each Value, map from its value ID back to the
+ // preregalloc Value it was derived from.
+ orig []*Value
+
+ // current state of each register
+ regs []regState
+
+ // registers that contain values which can't be kicked out
+ nospill regMask
+
+ // mask of registers currently in use
+ used regMask
+
+ // current block we're working on
+ curBlock *Block
+
+ // cache of use records
+ freeUseRecords *use
+
+ // endRegs[blockid] is the register state at the end of each block.
+ // encoded as a set of endReg records.
+ endRegs [][]endReg
+
+ // startRegs[blockid] is the register state at the start of merge blocks.
+ // saved state does not include the state of phi ops in the block.
+ startRegs [][]startReg
+
+ // spillLive[blockid] is the set of live spills at the end of each block
+ spillLive [][]ID
+}
+
+type endReg struct {
+ r register
+ v *Value // pre-regalloc value held in this register (TODO: can we use ID here?)
+ c *Value // cached version of the value
+}
+
+type startReg struct {
+ r register
+ vid ID // pre-regalloc value needed in this register
+}
+
+// freeReg frees up register r. Any current user of r is kicked out.
+func (s *regAllocState) freeReg(r register) {
+ v := s.regs[r].v
+ if v == nil {
+ s.f.Fatalf("tried to free an already free register %d\n", r)
+ }
+
+ // Mark r as unused.
+ if regDebug {
+ fmt.Printf("freeReg %s (dump %s/%s)\n", registers[r].Name(), v, s.regs[r].c)
+ }
+ s.regs[r] = regState{}
+ s.values[v.ID].regs &^= regMask(1) << r
+ s.used &^= regMask(1) << r
+}
+
+// freeRegs frees up all registers listed in m.
+func (s *regAllocState) freeRegs(m regMask) {
+ for m&s.used != 0 {
+ s.freeReg(pickReg(m & s.used))
+ }
+}
+
+// setOrig records that c's original value is the same as
+// v's original value.
+func (s *regAllocState) setOrig(c *Value, v *Value) {
+ for int(c.ID) >= len(s.orig) {
+ s.orig = append(s.orig, nil)
+ }
+ if s.orig[c.ID] != nil {
+ s.f.Fatalf("orig value set twice %s %s", c, v)
+ }
+ s.orig[c.ID] = s.orig[v.ID]
+}
+
+// assignReg assigns register r to hold c, a copy of v.
+// r must be unused.
+func (s *regAllocState) assignReg(r register, v *Value, c *Value) {
+ if regDebug {
+ fmt.Printf("assignReg %s %s/%s\n", registers[r].Name(), v, c)
+ }
+ if s.regs[r].v != nil {
+ s.f.Fatalf("tried to assign register %d to %s/%s but it is already used by %s", r, v, c, s.regs[r].v)
+ }
+
+ // Update state.
+ s.regs[r] = regState{v, c}
+ s.values[v.ID].regs |= regMask(1) << r
+ s.used |= regMask(1) << r
+	s.f.setHome(c, &registers[r])
+}
+
+// allocReg chooses a register for v from the set of registers in mask.
+// If there is no unused register, a Value will be kicked out of
+// a register to make room.
+func (s *regAllocState) allocReg(v *Value, mask regMask) register {
+ mask &^= s.nospill
+ if mask == 0 {
+ s.f.Fatalf("no register available")
+ }
+
+ // Pick an unused register if one is available.
+ if mask&^s.used != 0 {
+ mask &^= s.used
+
+ // Use desired register if we can.
+ d := s.values[v.ID].desired
+ if d != noRegister && mask>>d&1 != 0 {
+ mask = regMask(1) << d
+ }
+
+ // Avoid avoidable registers if we can.
+ if mask&^s.values[v.ID].avoid != 0 {
+ mask &^= s.values[v.ID].avoid
+ }
+
+ return pickReg(mask)
+ }
+
+ // Pick a value to spill. Spill the value with the
+ // farthest-in-the-future use.
+ // TODO: Prefer registers with already spilled Values?
+ // TODO: Modify preference using affinity graph.
+ // TODO: if a single value is in multiple registers, spill one of them
+ // before spilling a value in just a single register.
+
+ // SP and SB are allocated specially. No regular value should
+ // be allocated to them.
+ mask &^= 1<<4 | 1<<32
+
+ // Find a register to spill. We spill the register containing the value
+ // whose next use is as far in the future as possible.
+ // https://en.wikipedia.org/wiki/Page_replacement_algorithm#The_theoretically_optimal_page_replacement_algorithm
+ var r register
+ maxuse := int32(-1)
+ for t := register(0); t < numRegs; t++ {
+ if mask>>t&1 == 0 {
+ continue
+ }
+ v := s.regs[t].v
+ if n := s.values[v.ID].uses.dist; n > maxuse {
+ // v's next use is farther in the future than any value
+ // we've seen so far. A new best spill candidate.
+ r = t
+ maxuse = n
+ }
+ }
+ if maxuse == -1 {
+ s.f.Unimplementedf("couldn't find register to spill")
+ }
+ s.freeReg(r)
+ return r
+}
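The spill-selection loop at the end of allocReg is Belady's farthest-in-the-future policy, linked from the Wikipedia page above. A minimal sketch of just that choice, with `spillChoice` and its slice-based inputs invented for the example:

```go
package main

import "fmt"

// spillChoice picks the register whose value's next use is farthest in
// the future, the heuristic allocReg uses. nextUse[r] is the distance to
// the next use of the value held in register r; candidates is the set of
// registers allowed to be spilled. Returns -1 if candidates is empty.
func spillChoice(nextUse []int32, candidates []int) int {
	r, maxuse := -1, int32(-1)
	for _, t := range candidates {
		if n := nextUse[t]; n > maxuse {
			r, maxuse = t, n
		}
	}
	return r
}

func main() {
	// Values in r0..r3 are next used 3, 12, 7 and 1 instructions away:
	// spill r1, since its value is needed last.
	fmt.Println(spillChoice([]int32{3, 12, 7, 1}, []int{0, 1, 2, 3})) // 1
}
```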
+
+// allocValToReg allocates v to a register selected from regMask and
+// returns the register copy of v. Any previous user is kicked out and spilled
+// (if necessary). Load code is added at the current pc. If nospill is set the
+// allocated register is marked nospill so the assignment cannot be
+// undone until the caller allows it by clearing nospill. Returns a
+// *Value which is either v or a copy of v allocated to the chosen register.
+func (s *regAllocState) allocValToReg(v *Value, mask regMask, nospill bool, line int32) *Value {
+ vi := &s.values[v.ID]
+
+ // Check if v is already in a requested register.
+ if mask&vi.regs != 0 {
+ r := pickReg(mask & vi.regs)
+ if s.regs[r].v != v || s.regs[r].c == nil {
+ panic("bad register state")
+ }
+ if nospill {
+ s.nospill |= regMask(1) << r
+ }
+ return s.regs[r].c
+ }
+
+ if v.Op != OpSP {
+		mask &^= 1 << 4 // don't spill SP
+ }
+ if v.Op != OpSB {
+ mask &^= 1 << 32 // don't spill SB
+ }
+ mask &^= s.reserved()
+
+ // Allocate a register.
+ r := s.allocReg(v, mask)
+
+ // Allocate v to the new register.
+ var c *Value
+ if vi.regs != 0 {
+ // Copy from a register that v is already in.
+ r2 := pickReg(vi.regs)
+ if s.regs[r2].v != v {
+ panic("bad register state")
+ }
+ c = s.curBlock.NewValue1(line, OpCopy, v.Type, s.regs[r2].c)
+ } else if v.rematerializeable() {
+ // Rematerialize instead of loading from the spill location.
+ c = v.copyInto(s.curBlock)
+ } else {
+ switch {
+ // Load v from its spill location.
+ case vi.spill != nil:
+ if logSpills {
+ fmt.Println("regalloc: load spill")
+ }
+ c = s.curBlock.NewValue1(line, OpLoadReg, v.Type, vi.spill)
+ vi.spillUsed = true
+ default:
+ s.f.Fatalf("attempt to load unspilled value %v", v.LongString())
+ }
+ }
+ s.setOrig(c, v)
+ s.assignReg(r, v, c)
+ if nospill {
+ s.nospill |= regMask(1) << r
+ }
+ return c
+}
+
+func (s *regAllocState) init(f *Func) {
+ if numRegs > noRegister || numRegs > register(unsafe.Sizeof(regMask(0))*8) {
+ panic("too many registers")
+ }
+
+ s.f = f
+ s.regs = make([]regState, numRegs)
+ s.values = make([]valState, f.NumValues())
+ s.orig = make([]*Value, f.NumValues())
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ if !v.Type.IsMemory() && !v.Type.IsVoid() && !v.Type.IsFlags() {
+ s.values[v.ID].needReg = true
+ s.values[v.ID].rematerializeable = v.rematerializeable()
+ s.values[v.ID].desired = noRegister
+ s.orig[v.ID] = v
+ }
+ }
+ }
+ s.computeLive()
+
+ // Compute block order. This array allows us to distinguish forward edges
+ // from backward edges and compute how far they go.
+ blockOrder := make([]int32, f.NumBlocks())
+ for i, b := range f.Blocks {
+ blockOrder[b.ID] = int32(i)
+ }
+
+ // Compute primary predecessors.
+ s.primary = make([]int32, f.NumBlocks())
+ for _, b := range f.Blocks {
+ best := -1
+ for i, p := range b.Preds {
+ if blockOrder[p.ID] >= blockOrder[b.ID] {
+ continue // backward edge
+ }
+ if best == -1 || blockOrder[p.ID] > blockOrder[b.Preds[best].ID] {
+ best = i
+ }
+ }
+ s.primary[b.ID] = int32(best)
+ }
+
+ s.endRegs = make([][]endReg, f.NumBlocks())
+ s.startRegs = make([][]startReg, f.NumBlocks())
+ s.spillLive = make([][]ID, f.NumBlocks())
+}
+
+// addUse adds a use record for id at distance dist from the start of the block.
+// All calls to addUse must happen with nonincreasing dist.
+func (s *regAllocState) addUse(id ID, dist int32) {
+ r := s.freeUseRecords
+ if r != nil {
+ s.freeUseRecords = r.next
+ } else {
+ r = &use{}
+ }
+ r.dist = dist
+ r.next = s.values[id].uses
+ s.values[id].uses = r
+ if r.next != nil && dist > r.next.dist {
+ s.f.Fatalf("uses added in wrong order")
+ }
+}
+
+// advanceUses advances the uses of v's args from the state before v to the state after v.
+// Any values which have no more uses are deallocated from registers.
+func (s *regAllocState) advanceUses(v *Value) {
+ for _, a := range v.Args {
+ if !s.values[a.ID].needReg {
+ continue
+ }
+ ai := &s.values[a.ID]
+ r := ai.uses
+ ai.uses = r.next
+ if r.next == nil {
+ // Value is dead, free all registers that hold it.
+ s.freeRegs(ai.regs)
+ }
+ r.next = s.freeUseRecords
+ s.freeUseRecords = r
+ }
+}
+
+// setState sets the state of the registers to that encoded in regs.
+func (s *regAllocState) setState(regs []endReg) {
+ s.freeRegs(s.used)
+ for _, x := range regs {
+ s.assignReg(x.r, x.v, x.c)
+ }
+}
+
+// compatRegs returns the set of registers which can store a type t.
+func (s *regAllocState) compatRegs(t Type) regMask {
+ var m regMask
+ if t.IsFloat() {
+ m = 0xffff << 16 // X0-X15
+ } else {
+ m = 0xffef << 0 // AX-R15, except SP
+ }
+ return m &^ s.reserved()
+}
+
+func (s *regAllocState) regalloc(f *Func) {
+ liveSet := f.newSparseSet(f.NumValues())
+ defer f.retSparseSet(liveSet)
+ var oldSched []*Value
+ var phis []*Value
+ var phiRegs []register
+ var args []*Value
+
+ if f.Entry != f.Blocks[0] {
+ f.Fatalf("entry block must be first")
+ }
+
+ for _, b := range f.Blocks {
+ s.curBlock = b
+
+ // Initialize liveSet and uses fields for this block.
+ // Walk backwards through the block doing liveness analysis.
+ liveSet.clear()
+ for _, e := range s.live[b.ID] {
+ s.addUse(e.ID, int32(len(b.Values))+e.dist) // pseudo-uses from beyond end of block
+ liveSet.add(e.ID)
+ }
+ if v := b.Control; v != nil && s.values[v.ID].needReg {
+ s.addUse(v.ID, int32(len(b.Values))) // pseudo-use by control value
+ liveSet.add(v.ID)
+ }
+ for i := len(b.Values) - 1; i >= 0; i-- {
+ v := b.Values[i]
+ liveSet.remove(v.ID)
+ if v.Op == OpPhi {
+ // Remove v from the live set, but don't add
+ // any inputs. This is the state the len(b.Preds)>1
+ // case below desires; it wants to process phis specially.
+ continue
+ }
+ for _, a := range v.Args {
+ if !s.values[a.ID].needReg {
+ continue
+ }
+ s.addUse(a.ID, int32(i))
+ liveSet.add(a.ID)
+ }
+ }
+ if regDebug {
+ fmt.Printf("uses for %s:%s\n", s.f.Name, b)
+ for i := range s.values {
+ vi := &s.values[i]
+ u := vi.uses
+ if u == nil {
+ continue
+ }
+ fmt.Printf(" v%d:", i)
+ for u != nil {
+ fmt.Printf(" %d", u.dist)
+ u = u.next
+ }
+ fmt.Println()
+ }
+ }
+
+ // Make a copy of the block schedule so we can generate a new one in place.
+ // We make a separate copy for phis and regular values.
+ nphi := 0
+ for _, v := range b.Values {
+ if v.Op != OpPhi {
+ break
+ }
+ nphi++
+ }
+ phis = append(phis[:0], b.Values[:nphi]...)
+ oldSched = append(oldSched[:0], b.Values[nphi:]...)
+ b.Values = b.Values[:0]
+
+ // Initialize start state of block.
+ if b == f.Entry {
+ // Regalloc state is empty to start.
+ if nphi > 0 {
+ f.Fatalf("phis in entry block")
+ }
+ } else if len(b.Preds) == 1 {
+ // Start regalloc state with the end state of the previous block.
+ s.setState(s.endRegs[b.Preds[0].ID])
+ if nphi > 0 {
+ f.Fatalf("phis in single-predecessor block")
+ }
+ // Drop any values which are no longer live.
+ // This may happen because at the end of p, a value may be
+ // live but only used by some other successor of p.
+ for r := register(0); r < numRegs; r++ {
+ v := s.regs[r].v
+ if v != nil && !liveSet.contains(v.ID) {
+ s.freeReg(r)
+ }
+ }
+ } else {
+ // This is the complicated case. We have more than one predecessor,
+ // which means we may have Phi ops.
+
+ // Copy phi ops into new schedule.
+ b.Values = append(b.Values, phis...)
+
+ // Start with the final register state of the primary predecessor
+ idx := s.primary[b.ID]
+ if idx < 0 {
+ f.Fatalf("block with no primary predecessor %s", b)
+ }
+ p := b.Preds[idx]
+ s.setState(s.endRegs[p.ID])
+
+ if regDebug {
+ fmt.Printf("starting merge block %s with end state of %s:\n", b, p)
+ for _, x := range s.endRegs[p.ID] {
+ fmt.Printf(" %s: orig:%s cache:%s\n", registers[x.r].Name(), x.v, x.c)
+ }
+ }
+
+ // Decide on registers for phi ops. Use the registers determined
+ // by the primary predecessor if we can.
+ // TODO: pick best of (already processed) predecessors?
+ // Majority vote? Deepest nesting level?
+ phiRegs = phiRegs[:0]
+ var phiUsed regMask
+ for _, v := range phis {
+ if !s.values[v.ID].needReg {
+ phiRegs = append(phiRegs, noRegister)
+ continue
+ }
+ a := v.Args[idx]
+ m := s.values[a.ID].regs &^ phiUsed
+ var r register
+ if m != 0 {
+ r = pickReg(m)
+ s.freeReg(r)
+ phiUsed |= regMask(1) << r
+ phiRegs = append(phiRegs, r)
+ } else {
+ phiRegs = append(phiRegs, noRegister)
+ }
+ }
+
+ // Second pass - deallocate any phi inputs which are now dead.
+ for _, v := range phis {
+ if !s.values[v.ID].needReg {
+ continue
+ }
+ a := v.Args[idx]
+ if !liveSet.contains(a.ID) {
+ // Input is dead beyond the phi, deallocate
+ // anywhere else it might live.
+ s.freeRegs(s.values[a.ID].regs)
+ }
+ }
+
+ // Third pass - pick registers for phis whose inputs
+ // were not in a register.
+ for i, v := range phis {
+ if !s.values[v.ID].needReg {
+ continue
+ }
+ if phiRegs[i] != noRegister {
+ continue
+ }
+ m := s.compatRegs(v.Type) &^ phiUsed &^ s.used
+ if m != 0 {
+ r := pickReg(m)
+ phiRegs[i] = r
+ phiUsed |= regMask(1) << r
+ }
+ }
+
+ // Set registers for phis. Add phi spill code.
+ for i, v := range phis {
+ if !s.values[v.ID].needReg {
+ continue
+ }
+ r := phiRegs[i]
+ if r == noRegister {
+ // stack-based phi
+ // Spills will be inserted in all the predecessors below.
+ s.values[v.ID].spill = v // v starts life spilled
+ s.values[v.ID].spillUsed = true // use is guaranteed
+ continue
+ }
+ // register-based phi
+ s.assignReg(r, v, v)
+ // Spill the phi in case we need to restore it later.
+ spill := b.NewValue1(v.Line, OpStoreReg, v.Type, v)
+ s.setOrig(spill, v)
+ s.values[v.ID].spill = spill
+ s.values[v.ID].spillUsed = false
+ }
+
+ // Save the starting state for use by merge edges.
+ var regList []startReg
+ for r := register(0); r < numRegs; r++ {
+ v := s.regs[r].v
+ if v == nil {
+ continue
+ }
+ if phiUsed>>r&1 != 0 {
+ // Skip registers that phis used, we'll handle those
+ // specially during merge edge processing.
+ continue
+ }
+ regList = append(regList, startReg{r, v.ID})
+ }
+ s.startRegs[b.ID] = regList
+
+ if regDebug {
+ fmt.Printf("after phis\n")
+ for _, x := range s.startRegs[b.ID] {
+ fmt.Printf(" %s: v%d\n", registers[x.r].Name(), x.vid)
+ }
+ }
+ }
+
+ // Compute preferred registers for each value using a backwards pass.
+ // Note that we do this phase after startRegs is set above, so that
+ // we get the right behavior for a block which branches to itself.
+ for _, succ := range b.Succs {
+ // TODO: prioritize likely successor.
+ for _, x := range s.startRegs[succ.ID] {
+ v := s.orig[x.vid]
+ s.values[v.ID].desired = x.r
+ }
+ // Process phi ops in succ
+ i := -1
+ for j, p := range succ.Preds {
+ if p == b {
+ i = j
+ break
+ }
+ }
+ if i == -1 {
+ s.f.Fatalf("can't find predecessor %s of %s\n", b, succ)
+ }
+ for _, v := range succ.Values {
+ if v.Op != OpPhi {
+ break
+ }
+ if !s.values[v.ID].needReg {
+ continue
+ }
+ r, ok := s.f.getHome(v.ID).(*Register)
+ if !ok {
+ continue
+ }
+ a := s.orig[v.Args[i].ID]
+ s.values[a.ID].desired = register(r.Num)
+ }
+ }
+
+ // Set avoid fields to help desired register availability.
+ liveSet.clear()
+ for _, e := range s.live[b.ID] {
+ liveSet.add(e.ID)
+ }
+ if v := b.Control; v != nil && s.values[v.ID].needReg {
+ liveSet.add(v.ID)
+ }
+ for i := len(oldSched) - 1; i >= 0; i-- {
+ v := oldSched[i]
+ liveSet.remove(v.ID)
+
+ r := s.values[v.ID].desired
+ if r != noRegister {
+ m := regMask(1) << r
+ // All live values should avoid this register so
+ // it will be available at this point.
+ for _, w := range liveSet.contents() {
+ s.values[w].avoid |= m
+ }
+ }
+
+ for _, a := range v.Args {
+ if !s.values[a.ID].needReg {
+ continue
+ }
+ liveSet.add(a.ID)
+ }
+ }
+
+ // Process all the non-phi values.
+ for _, v := range oldSched {
+ if regDebug {
+ fmt.Printf(" processing %s\n", v.LongString())
+ }
+ if v.Op == OpPhi {
+ f.Fatalf("phi %s not at start of block", v)
+ }
+ if v.Op == OpSP {
+ s.assignReg(4, v, v) // TODO: arch-dependent
+ b.Values = append(b.Values, v)
+ s.advanceUses(v)
+ continue
+ }
+ if v.Op == OpSB {
+ s.assignReg(32, v, v) // TODO: arch-dependent
+ b.Values = append(b.Values, v)
+ s.advanceUses(v)
+ continue
+ }
+ if v.Op == OpArg {
+ // Args are "pre-spilled" values. We don't allocate
+ // any register here. We just set up the spill pointer to
+ // point at itself and any later user will restore it to use it.
+ s.values[v.ID].spill = v
+ s.values[v.ID].spillUsed = true // use is guaranteed
+ b.Values = append(b.Values, v)
+ s.advanceUses(v)
+ continue
+ }
+ regspec := opcodeTable[v.Op].reg
+ if len(regspec.inputs) == 0 && len(regspec.outputs) == 0 {
+ // No register allocation required (or none specified yet)
+ s.freeRegs(regspec.clobbers)
+ b.Values = append(b.Values, v)
+ continue
+ }
+
+ if s.values[v.ID].rematerializeable {
+ // Value is rematerializeable, don't issue it here.
+ // It will get issued just before each use (see
+ // allocValueToReg).
+ s.advanceUses(v)
+ continue
+ }
+
+ // Move arguments to registers. Process in an ordering defined
+ // by the register specification (most constrained first).
+ args = append(args[:0], v.Args...)
+ for _, i := range regspec.inputs {
+ if i.regs == flagRegMask {
+ // TODO: remove flag input from regspec.inputs.
+ continue
+ }
+ args[i.idx] = s.allocValToReg(v.Args[i.idx], i.regs, true, v.Line)
+ }
+
+ // Now that all args are in regs, we're ready to issue the value itself.
+ // Before we pick a register for the output value, allow input registers
+ // to be deallocated. We do this here so that the output can use the
+ // same register as a dying input.
+ s.nospill = 0
+ s.advanceUses(v) // frees any registers holding args that are no longer live
+
+ // Dump any registers which will be clobbered
+ s.freeRegs(regspec.clobbers)
+
+ // Pick register for output.
+ var mask regMask
+ if s.values[v.ID].needReg {
+ mask = regspec.outputs[0] &^ s.reserved()
+ if mask>>33&1 != 0 {
+ s.f.Fatalf("bad mask %s\n", v.LongString())
+ }
+ }
+ if mask != 0 {
+ r := s.allocReg(v, mask)
+ s.assignReg(r, v, v)
+ }
+
+ // Issue the Value itself.
+ for i, a := range args {
+ v.Args[i] = a // use register version of arguments
+ }
+ b.Values = append(b.Values, v)
+
+ // Issue a spill for this value. We issue spills unconditionally,
+ // then at the end of regalloc delete the ones we never use.
+ // TODO: schedule the spill at a point that dominates all restores.
+ // The restore may be off in an unlikely branch somewhere and it
+ // would be better to have the spill in that unlikely branch as well.
+ // v := ...
+ // if unlikely {
+ // f()
+ // }
+ // It would be good to have both spill and restore inside the IF.
+ if s.values[v.ID].needReg {
+ spill := b.NewValue1(v.Line, OpStoreReg, v.Type, v)
+ s.setOrig(spill, v)
+ s.values[v.ID].spill = spill
+ s.values[v.ID].spillUsed = false
+ }
+ }
+
+ if v := b.Control; v != nil && s.values[v.ID].needReg {
+ if regDebug {
+ fmt.Printf(" processing control %s\n", v.LongString())
+ }
+ // Load control value into reg.
+ // TODO: regspec for block control values, instead of using
+ // register set from the control op's output.
+ s.allocValToReg(v, opcodeTable[v.Op].reg.outputs[0], false, b.Line)
+ // Remove this use from the uses list.
+ vi := &s.values[v.ID]
+ u := vi.uses
+ vi.uses = u.next
+ if u.next == nil {
+ s.freeRegs(vi.regs) // value is dead
+ }
+ u.next = s.freeUseRecords
+ s.freeUseRecords = u
+ }
+
+ // Save end-of-block register state.
+ // First count how many; this cuts allocations in half.
+ k := 0
+ for r := register(0); r < numRegs; r++ {
+ v := s.regs[r].v
+ if v == nil {
+ continue
+ }
+ k++
+ }
+ regList := make([]endReg, 0, k)
+ for r := register(0); r < numRegs; r++ {
+ v := s.regs[r].v
+ if v == nil {
+ continue
+ }
+ regList = append(regList, endReg{r, v, s.regs[r].c})
+ }
+ s.endRegs[b.ID] = regList
+
+ // Check. TODO: remove
+ {
+ liveSet.clear()
+ for _, x := range s.live[b.ID] {
+ liveSet.add(x.ID)
+ }
+ for r := register(0); r < numRegs; r++ {
+ v := s.regs[r].v
+ if v == nil {
+ continue
+ }
+ if !liveSet.contains(v.ID) {
+ s.f.Fatalf("val %s is in reg but not live at end of %s", v, b)
+ }
+ }
+ }
+
+ // If a value is live at the end of the block and
+ // isn't in a register, remember that its spill location
+ // is live. We need to remember this information so that
+ // the liveness analysis in stackalloc is correct.
+ for _, e := range s.live[b.ID] {
+ if s.values[e.ID].regs != 0 {
+ // in a register, we'll use that source for the merge.
+ continue
+ }
+ spill := s.values[e.ID].spill
+ if spill == nil {
+ // rematerializeable values will have spill==nil.
+ continue
+ }
+ s.spillLive[b.ID] = append(s.spillLive[b.ID], spill.ID)
+ s.values[e.ID].spillUsed = true
+ }
+
+ // Clear any final uses.
+ // All that is left should be the pseudo-uses added for values which
+ // are live at the end of b.
+ for _, e := range s.live[b.ID] {
+ u := s.values[e.ID].uses
+ if u == nil {
+ f.Fatalf("live at end, no uses v%d", e.ID)
+ }
+ if u.next != nil {
+ f.Fatalf("live at end, too many uses v%d", e.ID)
+ }
+ s.values[e.ID].uses = nil
+ u.next = s.freeUseRecords
+ s.freeUseRecords = u
+ }
+ }
+
+ // Erase any spills we never used
+ for i := range s.values {
+ vi := s.values[i]
+ if vi.spillUsed {
+ if logSpills {
+ fmt.Println("regalloc: spilled value")
+ }
+ continue
+ }
+ spill := vi.spill
+ if spill == nil {
+ // Constants, SP, SB, ...
+ continue
+ }
+ f.freeValue(spill)
+ }
+ for _, b := range f.Blocks {
+ i := 0
+ for _, v := range b.Values {
+ if v.Op == OpInvalid {
+ continue
+ }
+ b.Values[i] = v
+ i++
+ }
+ b.Values = b.Values[:i]
+ // TODO: zero b.Values[i:], recycle Values
+ // Not important now because this is the last phase that manipulates Values
+ }
+
+ // Anything that didn't get a register gets a stack location here.
+ // (StoreReg, stack-based phis, inputs, ...)
+ stacklive := stackalloc(s.f, s.spillLive)
+
+ // Fix up all merge edges.
+ s.shuffle(stacklive)
+}
+
+// shuffle fixes up all the merge edges (those going into blocks of indegree > 1).
+func (s *regAllocState) shuffle(stacklive [][]ID) {
+ var e edgeState
+ e.s = s
+ e.cache = map[ID][]*Value{}
+ e.contents = map[Location]contentRecord{}
+ if regDebug {
+ fmt.Printf("shuffle %s\n", s.f.Name)
+ fmt.Println(s.f.String())
+ }
+
+ for _, b := range s.f.Blocks {
+ if len(b.Preds) <= 1 {
+ continue
+ }
+ e.b = b
+ for i, p := range b.Preds {
+ e.p = p
+ e.setup(i, s.endRegs[p.ID], s.startRegs[b.ID], stacklive[p.ID])
+ e.process()
+ }
+ }
+}
+
+type edgeState struct {
+ s *regAllocState
+ p, b *Block // edge goes from p->b.
+
+ // for each pre-regalloc value, a list of equivalent cached values
+ cache map[ID][]*Value
+
+ // map from location to the value it contains
+ contents map[Location]contentRecord
+
+ // desired destination locations
+ destinations []dstRecord
+ extra []dstRecord
+
+ usedRegs regMask // registers currently holding something
+ uniqueRegs regMask // registers holding the only copy of a value
+ finalRegs regMask // registers holding final target
+}
+
+type contentRecord struct {
+ vid ID // pre-regalloc value
+ c *Value // cached value
+ final bool // this is a satisfied destination
+}
+
+type dstRecord struct {
+ loc Location // register or stack slot
+ vid ID // pre-regalloc value it should contain
+ splice **Value // place to store reference to the generating instruction
+}
+
+// setup initializes the edge state for shuffling.
+func (e *edgeState) setup(idx int, srcReg []endReg, dstReg []startReg, stacklive []ID) {
+ if regDebug {
+ fmt.Printf("edge %s->%s\n", e.p, e.b)
+ }
+
+ // Clear state.
+ for k := range e.cache {
+ delete(e.cache, k)
+ }
+ for k := range e.contents {
+ delete(e.contents, k)
+ }
+ e.usedRegs = 0
+ e.uniqueRegs = 0
+ e.finalRegs = 0
+
+ // Live registers can be sources.
+ for _, x := range srcReg {
+ e.set(®isters[x.r], x.v.ID, x.c, false)
+ }
+ // So can all of the spill locations.
+ for _, spillID := range stacklive {
+ v := e.s.orig[spillID]
+ spill := e.s.values[v.ID].spill
+ e.set(e.s.f.getHome(spillID), v.ID, spill, false)
+ }
+
+ // Figure out all the destinations we need.
+ dsts := e.destinations[:0]
+ for _, x := range dstReg {
+ dsts = append(dsts, dstRecord{®isters[x.r], x.vid, nil})
+ }
+ // Phis need their args to end up in a specific location.
+ for _, v := range e.b.Values {
+ if v.Op != OpPhi {
+ break
+ }
+ loc := e.s.f.getHome(v.ID)
+ if loc == nil {
+ continue
+ }
+ dsts = append(dsts, dstRecord{loc, v.Args[idx].ID, &v.Args[idx]})
+ }
+ e.destinations = dsts
+
+ if regDebug {
+ for vid, a := range e.cache {
+ for _, c := range a {
+ fmt.Printf("src %s: v%d cache=%s\n", e.s.f.getHome(c.ID).Name(), vid, c)
+ }
+ }
+ for _, d := range e.destinations {
+ fmt.Printf("dst %s: v%d\n", d.loc.Name(), d.vid)
+ }
+ }
+}
+
+// process generates code to move all the values to the right destination locations.
+func (e *edgeState) process() {
+ dsts := e.destinations
+
+ // Process the destinations until they are all satisfied.
+ for len(dsts) > 0 {
+ i := 0
+ for _, d := range dsts {
+ if !e.processDest(d.loc, d.vid, d.splice) {
+ // Failed - save for next iteration.
+ dsts[i] = d
+ i++
+ }
+ }
+ if i < len(dsts) {
+ // Made some progress. Go around again.
+ dsts = dsts[:i]
+
+ // Append any extra destinations we generated.
+ dsts = append(dsts, e.extra...)
+ e.extra = e.extra[:0]
+ continue
+ }
+
+ // We made no progress. That means that any
+ // remaining unsatisfied moves are in simple cycles.
+ // For example, A -> B -> C -> D -> A.
+ // A ----> B
+ // ^ |
+ // | |
+ // | v
+ // D <---- C
+
+ // To break the cycle, we pick an unused register, say R,
+ // and put a copy of B there.
+ // A ----> B
+ // ^ |
+ // | |
+ // | v
+ // D <---- C <---- R=copyofB
+ // When we resume the outer loop, the A->B move can now proceed,
+ // and eventually the whole cycle completes.
+
+ // Copy any cycle location to a temp register. This duplicates
+ // one of the cycle entries, allowing the just duplicated value
+ // to be overwritten and the cycle to proceed.
+ loc := dsts[0].loc
+ vid := e.contents[loc].vid
+ c := e.contents[loc].c
+ r := e.findRegFor(c.Type)
+ if regDebug {
+ fmt.Printf("breaking cycle with v%d in %s:%s\n", vid, loc.Name(), c)
+ }
+ if _, isReg := loc.(*Register); isReg {
+ c = e.p.NewValue1(c.Line, OpCopy, c.Type, c)
+ } else {
+ c = e.p.NewValue1(c.Line, OpLoadReg, c.Type, c)
+ }
+ e.set(r, vid, c, false)
+ }
+}
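The cycle-breaking strategy in process can be illustrated with plain integers standing in for locations: repeatedly satisfy any move whose destination is no longer read by a pending move, and when only cycles remain, copy one cycle entry to a temporary. This is a deliberate simplification of the real pass (one temp, no register classes, and it mutates the caller's src slice):

```go
package main

import "fmt"

// parallelMove realizes the parallel assignment loc[dst[i]] = loc[src[i]]
// for all i using sequential copies, breaking cycles via location tmp.
func parallelMove(loc []int, dst, src []int, tmp int) {
	pending := make([]int, 0, len(dst))
	for i := range dst {
		pending = append(pending, i)
	}
	for len(pending) > 0 {
		progress := false
		var rest []int
		for _, i := range pending {
			// dst[i] is safe to write if no pending move still reads it.
			blocked := false
			for _, j := range pending {
				if j != i && src[j] == dst[i] {
					blocked = true
					break
				}
			}
			if blocked {
				rest = append(rest, i)
				continue
			}
			loc[dst[i]] = loc[src[i]] // emit one copy
			progress = true
		}
		pending = rest
		if !progress && len(pending) > 0 {
			// Only cycles remain. Copy one cycle entry to tmp and
			// redirect its readers there; the overwritten location
			// then becomes writable and the cycle unwinds.
			i := pending[0]
			loc[tmp] = loc[src[i]]
			freed := src[i]
			for j := range src {
				if src[j] == freed {
					src[j] = tmp
				}
			}
		}
	}
}

func main() {
	// Rotate three locations: 0 <- 1, 1 <- 2, 2 <- 0 (a pure cycle).
	loc := []int{10, 20, 30, 0} // index 3 is the temp
	parallelMove(loc, []int{0, 1, 2}, []int{1, 2, 0}, 3)
	fmt.Println(loc[0], loc[1], loc[2]) // 20 30 10
}
```

Every round either satisfies at least one move or breaks a cycle, so the loop terminates, matching the progress argument in the comment above.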
+
+// processDest generates code to put value vid into location loc. Returns true
+// if progress was made.
+func (e *edgeState) processDest(loc Location, vid ID, splice **Value) bool {
+ occupant := e.contents[loc]
+ if occupant.vid == vid {
+ // Value is already in the correct place.
+ e.contents[loc] = contentRecord{vid, occupant.c, true}
+ if splice != nil {
+ *splice = occupant.c
+ }
+ // Note: if splice==nil then c will appear dead. This is
+ // non-SSA-form code, so be careful after this pass not to run
+ // dead code elimination.
+ return true
+ }
+
+ // Check if we're allowed to clobber the destination location.
+ if len(e.cache[occupant.vid]) == 1 && !e.s.values[occupant.vid].rematerializeable {
+ // We can't overwrite the last copy
+ // of a value that needs to survive.
+ return false
+ }
+
+ // Copy from a source of v, register preferred.
+ v := e.s.orig[vid]
+ var c *Value
+ var src Location
+ if regDebug {
+ fmt.Printf("moving v%d to %s\n", vid, loc.Name())
+ fmt.Printf("sources of v%d:", vid)
+ }
+ for _, w := range e.cache[vid] {
+ h := e.s.f.getHome(w.ID)
+ if regDebug {
+ fmt.Printf(" %s:%s", h.Name(), w)
+ }
+ _, isreg := h.(*Register)
+ if src == nil || isreg {
+ c = w
+ src = h
+ }
+ }
+ if regDebug {
+ if src != nil {
+ fmt.Printf(" [use %s]\n", src.Name())
+ } else {
+ fmt.Printf(" [no source]\n")
+ }
+ }
+ _, dstReg := loc.(*Register)
+ var x *Value
+ if c == nil {
+ if !e.s.values[vid].rematerializeable {
+ e.s.f.Fatalf("can't find source for %s->%s: v%d\n", e.p, e.b, vid)
+ }
+ if dstReg {
+ x = v.copyInto(e.p)
+ } else {
+ // Rematerialize into stack slot. Need a free
+ // register to accomplish this.
+ e.erase(loc) // see pre-clobber comment below
+ r := e.findRegFor(v.Type)
+ x = v.copyInto(e.p)
+ e.set(r, vid, x, false)
+ // Make sure we spill with the size of the slot, not the
+ // size of x (which might be wider due to our dropping
+ // of narrowing conversions).
+ x = e.p.NewValue1(x.Line, OpStoreReg, loc.(LocalSlot).Type, x)
+ }
+ } else {
+ // Emit move from src to dst.
+ _, srcReg := src.(*Register)
+ if srcReg {
+ if dstReg {
+ x = e.p.NewValue1(c.Line, OpCopy, c.Type, c)
+ } else {
+ x = e.p.NewValue1(c.Line, OpStoreReg, loc.(LocalSlot).Type, c)
+ }
+ } else {
+ if dstReg {
+ x = e.p.NewValue1(c.Line, OpLoadReg, c.Type, c)
+ } else {
+ // mem->mem. Use temp register.
+
+ // Pre-clobber destination. This avoids the
+ // following situation:
+ // - v is currently held in R0 and stacktmp0.
+ // - We want to copy stacktmp1 to stacktmp0.
+ // - We choose R0 as the temporary register.
+ // During the copy, both R0 and stacktmp0 are
+ // clobbered, losing both copies of v. Oops!
+ // Erasing the destination early means R0 will not
+ // be chosen as the temp register, as it will then
+ // be the last copy of v.
+ e.erase(loc)
+
+ r := e.findRegFor(c.Type)
+ t := e.p.NewValue1(c.Line, OpLoadReg, c.Type, c)
+ e.set(r, vid, t, false)
+ x = e.p.NewValue1(c.Line, OpStoreReg, loc.(LocalSlot).Type, t)
+ }
+ }
+ }
+ e.set(loc, vid, x, true)
+ if splice != nil {
+ *splice = x
+ }
+ return true
+}
+
+// set changes the contents of location loc to hold the given value and its cached representative.
+func (e *edgeState) set(loc Location, vid ID, c *Value, final bool) {
+ e.s.f.setHome(c, loc)
+ e.erase(loc)
+ e.contents[loc] = contentRecord{vid, c, final}
+ a := e.cache[vid]
+ a = append(a, c)
+ e.cache[vid] = a
+ if r, ok := loc.(*Register); ok {
+ e.usedRegs |= regMask(1) << uint(r.Num)
+ if final {
+ e.finalRegs |= regMask(1) << uint(r.Num)
+ }
+ if len(a) == 1 {
+ e.uniqueRegs |= regMask(1) << uint(r.Num)
+ }
+ if len(a) == 2 {
+ if t, ok := e.s.f.getHome(a[0].ID).(*Register); ok {
+ e.uniqueRegs &^= regMask(1) << uint(t.Num)
+ }
+ }
+ }
+ if regDebug {
+ fmt.Printf("%s\n", c.LongString())
+ fmt.Printf("v%d now available in %s:%s\n", vid, loc.Name(), c)
+ }
+}
+
+// erase removes any value currently recorded as being stored in loc.
+func (e *edgeState) erase(loc Location) {
+ cr := e.contents[loc]
+ if cr.c == nil {
+ return
+ }
+ vid := cr.vid
+
+ if cr.final {
+ // Add a destination to move this value back into place.
+ // Make sure it gets added to the tail of the destination queue
+ // so we make progress on other moves first.
+ e.extra = append(e.extra, dstRecord{loc, cr.vid, nil})
+ }
+
+ // Remove c from the list of cached values.
+ a := e.cache[vid]
+ for i, c := range a {
+ if e.s.f.getHome(c.ID) == loc {
+ if regDebug {
+ fmt.Printf("v%d no longer available in %s:%s\n", vid, loc.Name(), c)
+ }
+ a[i], a = a[len(a)-1], a[:len(a)-1]
+ break
+ }
+ }
+ e.cache[vid] = a
+
+ // Update register masks.
+ if r, ok := loc.(*Register); ok {
+ e.usedRegs &^= regMask(1) << uint(r.Num)
+ if cr.final {
+ e.finalRegs &^= regMask(1) << uint(r.Num)
+ }
+ }
+ if len(a) == 1 {
+ if r, ok := e.s.f.getHome(a[0].ID).(*Register); ok {
+ e.uniqueRegs |= regMask(1) << uint(r.Num)
+ }
+ }
+}
+
+// findRegFor finds a register we can use to make a temp copy of type typ.
+func (e *edgeState) findRegFor(typ Type) Location {
+ // Which registers are possibilities.
+ var m regMask
+ if typ.IsFloat() {
+ m = e.s.compatRegs(e.s.f.Config.fe.TypeFloat64())
+ } else {
+ m = e.s.compatRegs(e.s.f.Config.fe.TypeInt64())
+ }
+
+ // Pick a register. In priority order:
+ // 1) an unused register
+ // 2) a non-unique register not holding a final value
+ // 3) a non-unique register
+ x := m &^ e.usedRegs
+ if x != 0 {
+ return ®isters[pickReg(x)]
+ }
+ x = m &^ e.uniqueRegs &^ e.finalRegs
+ if x != 0 {
+ return ®isters[pickReg(x)]
+ }
+ x = m &^ e.uniqueRegs
+ if x != 0 {
+ return ®isters[pickReg(x)]
+ }
+
+ // No register is available. Allocate a temp location to spill a register to.
+ // The type of the slot is immaterial - it will not be live across
+ // any safepoint. Just use a type big enough to hold any register.
+ typ = e.s.f.Config.fe.TypeInt64()
+ t := LocalSlot{e.s.f.Config.fe.Auto(typ), typ, 0}
+ // TODO: reuse these slots.
+
+ // Pick a register to spill.
+ for vid, a := range e.cache {
+ for _, c := range a {
+ if r, ok := e.s.f.getHome(c.ID).(*Register); ok && m>>uint(r.Num)&1 != 0 {
+ x := e.p.NewValue1(c.Line, OpStoreReg, c.Type, c)
+ e.set(t, vid, x, false)
+ if regDebug {
+ fmt.Printf(" SPILL %s->%s %s\n", r.Name(), t.Name(), x.LongString())
+ }
+ // r will now be overwritten by the caller. At some point
+ // later, the newly saved value will be moved back to its
+ // final destination in processDest.
+ return r
+ }
+ }
+ }
+
+ fmt.Printf("m:%d unique:%d final:%d\n", m, e.uniqueRegs, e.finalRegs)
+ for vid, a := range e.cache {
+ for _, c := range a {
+ fmt.Printf("v%d: %s %s\n", vid, c, e.s.f.getHome(c.ID).Name())
+ }
+ }
+ e.s.f.Fatalf("can't find empty register on edge %s->%s", e.p, e.b)
+ return nil
+}
+
+// rematerializeable reports whether v can be recomputed at each use
+// instead of being spilled and restored.
+func (v *Value) rematerializeable() bool {
+ if !opcodeTable[v.Op].rematerializeable {
+ return false
+ }
+ for _, a := range v.Args {
+ // SP and SB (generated by OpSP and OpSB) are always available.
+ if a.Op != OpSP && a.Op != OpSB {
+ return false
+ }
+ }
+ return true
+}
+
+type liveInfo struct {
+ ID ID // ID of variable
+ dist int32 // # of instructions before next use
+}
+
+// computeLive computes a map from block ID to a list of value IDs live at the end
+// of that block. Together with the value ID is a count of how many instructions
+// to the next use of that value. The resulting map is stored at s.live.
+// TODO: this could be quadratic if lots of variables are live across lots of
+// basic blocks. Figure out a way to make this function (or, more precisely, the user
+// of this function) require only linear size & time.
+func (s *regAllocState) computeLive() {
+ f := s.f
+ s.live = make([][]liveInfo, f.NumBlocks())
+ var phis []*Value
+
+ live := newSparseMap(f.NumValues())
+ t := newSparseMap(f.NumValues())
+
+ // Instead of iterating over f.Blocks, iterate over their postordering.
+ // Liveness information flows backward, so starting at the end
+ // increases the probability that we will stabilize quickly.
+ // TODO: Do a better job yet. Here's one possibility:
+ // Calculate the dominator tree and locate all strongly connected components.
+ // If a value is live in one block of an SCC, it is live in all.
+// Walk the dominator tree from end to beginning, just once, treating SCC
+// components as single blocks, duplicating the calculated liveness information
+// out to all of them.
+ po := postorder(f)
+ for {
+ changed := false
+
+ for _, b := range po {
+ // Start with known live values at the end of the block.
+ // Add len(b.Values) to adjust from end-of-block distance
+ // to beginning-of-block distance.
+ live.clear()
+ for _, e := range s.live[b.ID] {
+ live.set(e.ID, e.dist+int32(len(b.Values)))
+ }
+
+ // Mark control value as live
+ if b.Control != nil && s.values[b.Control.ID].needReg {
+ live.set(b.Control.ID, int32(len(b.Values)))
+ }
+
+ // Propagate backwards to the start of the block
+ // Assumes Values have been scheduled.
+ phis := phis[:0]
+ for i := len(b.Values) - 1; i >= 0; i-- {
+ v := b.Values[i]
+ live.remove(v.ID)
+ if v.Op == OpPhi {
+ // save phi ops for later
+ phis = append(phis, v)
+ continue
+ }
+ for _, a := range v.Args {
+ if s.values[a.ID].needReg {
+ live.set(a.ID, int32(i))
+ }
+ }
+ }
+
+ // For each predecessor of b, expand its list of live-at-end values.
+ // invariant: live contains the values live at the start of b (excluding phi inputs)
+ for i, p := range b.Preds {
+ // Compute additional distance for the edge.
+ const normalEdge = 10
+ const likelyEdge = 1
+ const unlikelyEdge = 100
+ // Note: delta must be at least 1 to distinguish the control
+ // value use from the first user in a successor block.
+ delta := int32(normalEdge)
+ if len(p.Succs) == 2 {
+ if p.Succs[0] == b && p.Likely == BranchLikely ||
+ p.Succs[1] == b && p.Likely == BranchUnlikely {
+ delta = likelyEdge
+ }
+ if p.Succs[0] == b && p.Likely == BranchUnlikely ||
+ p.Succs[1] == b && p.Likely == BranchLikely {
+ delta = unlikelyEdge
+ }
+ }
+
+ // Start t off with the previously known live values at the end of p.
+ t.clear()
+ for _, e := range s.live[p.ID] {
+ t.set(e.ID, e.dist)
+ }
+ update := false
+
+ // Add new live values from scanning this block.
+ for _, e := range live.contents() {
+ d := e.val + delta
+ if !t.contains(e.key) || d < t.get(e.key) {
+ update = true
+ t.set(e.key, d)
+ }
+ }
+ // Also add the correct arg from the saved phi values.
+ // All phis are at distance delta (we consider them
+ // simultaneously happening at the start of the block).
+ for _, v := range phis {
+ id := v.Args[i].ID
+ if s.values[id].needReg && (!t.contains(id) || delta < t.get(id)) {
+ update = true
+ t.set(id, delta)
+ }
+ }
+
+ if !update {
+ continue
+ }
+ // The live set has changed, update it.
+ l := s.live[p.ID][:0]
+ if cap(l) < t.size() {
+ l = make([]liveInfo, 0, t.size())
+ }
+ for _, e := range t.contents() {
+ l = append(l, liveInfo{e.key, e.val})
+ }
+ s.live[p.ID] = l
+ changed = true
+ }
+ }
+
+ if !changed {
+ break
+ }
+ }
+ if regDebug {
+ fmt.Println("live values at end of each block")
+ for _, b := range f.Blocks {
+ fmt.Printf(" %s:", b)
+ for _, x := range s.live[b.ID] {
+ fmt.Printf(" v%d", x.ID)
+ }
+ fmt.Println()
+ }
+ }
+}
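computeLive is an iterative backward dataflow fixpoint, enriched with use distances and phi handling. The core iteration can be sketched on a toy CFG with plain live sets (no distances, no phis); the block type and liveOut function here are written for the example only:

```go
package main

import "fmt"

// block is a toy basic block: the values it defines, the values it
// uses, and its successors.
type block struct {
	def, use []int
	succs    []int
}

// liveOut iterates live-out[b] = union over successors s of
// (use[s] ∪ (live-out[s] − def[s])) until nothing changes,
// visiting blocks in reverse order to converge faster.
func liveOut(blocks []block) []map[int]bool {
	live := make([]map[int]bool, len(blocks))
	for i := range live {
		live[i] = map[int]bool{}
	}
	for changed := true; changed; {
		changed = false
		for i := len(blocks) - 1; i >= 0; i-- {
			for _, s := range blocks[i].succs {
				// live-in of s = use[s] ∪ (live-out of s − def[s])
				in := map[int]bool{}
				for v := range live[s] {
					in[v] = true
				}
				for _, v := range blocks[s].def {
					delete(in, v)
				}
				for _, v := range blocks[s].use {
					in[v] = true
				}
				for v := range in {
					if !live[i][v] {
						live[i][v] = true
						changed = true
					}
				}
			}
		}
	}
	return live
}

func main() {
	// b0 defines v1; b1 is a loop using v1; b2 is the exit using v1.
	blocks := []block{
		{def: []int{1}, succs: []int{1}},
		{use: []int{1}, succs: []int{1, 2}},
		{use: []int{1}},
	}
	live := liveOut(blocks)
	fmt.Println(live[0][1], live[1][1], live[2][1]) // true true false
}
```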
+
+// reserved returns a mask of reserved registers.
+func (s *regAllocState) reserved() regMask {
+ var m regMask
+ if obj.Framepointer_enabled != 0 {
+ m |= 1 << 5 // BP
+ }
+ if s.f.Config.ctxt.Flag_dynlink {
+ m |= 1 << 15 // R15
+ }
+ return m
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import "testing"
+
+func TestLiveControlOps(t *testing.T) {
+ c := testConfig(t)
+ f := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("x", OpAMD64MOVBconst, TypeInt8, 1, nil),
+ Valu("y", OpAMD64MOVBconst, TypeInt8, 2, nil),
+ Valu("a", OpAMD64TESTB, TypeFlags, 0, nil, "x", "y"),
+ Valu("b", OpAMD64TESTB, TypeFlags, 0, nil, "y", "x"),
+ Eq("a", "if", "exit"),
+ ),
+ Bloc("if",
+ Eq("b", "plain", "exit"),
+ ),
+ Bloc("plain",
+ Goto("exit"),
+ ),
+ Bloc("exit",
+ Exit("mem"),
+ ),
+ )
+ flagalloc(f.f)
+ regalloc(f.f)
+ checkFunc(f.f)
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "fmt"
+ "math"
+)
+
+// applyRewrite repeatedly applies the block rewrite function rb and the
+// value rewrite function rv to f until neither reports any change.
+func applyRewrite(f *Func, rb func(*Block) bool, rv func(*Value, *Config) bool) {
+ // repeat rewrites until we find no more rewrites
+ var curb *Block
+ var curv *Value
+ defer func() {
+ if curb != nil {
+ curb.Fatalf("panic during rewrite of block %s\n", curb.LongString())
+ }
+ if curv != nil {
+ curv.Fatalf("panic during rewrite of value %s\n", curv.LongString())
+ // TODO(khr): print source location also
+ }
+ }()
+ config := f.Config
+ for {
+ change := false
+ for _, b := range f.Blocks {
+ if b.Kind == BlockDead {
+ continue
+ }
+ if b.Control != nil && b.Control.Op == OpCopy {
+ for b.Control.Op == OpCopy {
+ b.Control = b.Control.Args[0]
+ }
+ }
+ curb = b
+ if rb(b) {
+ change = true
+ }
+ curb = nil
+ for _, v := range b.Values {
+ copyelimValue(v)
+ change = phielimValue(v) || change
+
+ // apply rewrite function
+ curv = v
+ if rv(v, config) {
+ change = true
+ }
+ curv = nil
+ }
+ }
+ if !change {
+ return
+ }
+ }
+}
+
+// Common functions called from rewriting rules
+
+func is64BitFloat(t Type) bool {
+ return t.Size() == 8 && t.IsFloat()
+}
+
+func is32BitFloat(t Type) bool {
+ return t.Size() == 4 && t.IsFloat()
+}
+
+func is64BitInt(t Type) bool {
+ return t.Size() == 8 && t.IsInteger()
+}
+
+func is32BitInt(t Type) bool {
+ return t.Size() == 4 && t.IsInteger()
+}
+
+func is16BitInt(t Type) bool {
+ return t.Size() == 2 && t.IsInteger()
+}
+
+func is8BitInt(t Type) bool {
+ return t.Size() == 1 && t.IsInteger()
+}
+
+func isPtr(t Type) bool {
+ return t.IsPtr()
+}
+
+func isSigned(t Type) bool {
+ return t.IsSigned()
+}
+
+func typeSize(t Type) int64 {
+ return t.Size()
+}
+
+// addOff adds two int64 offsets. Panics if wraparound happens.
+func addOff(x, y int64) int64 {
+ z := x + y
+ // x and y have same sign and z has a different sign => overflow
+ if x^y >= 0 && x^z < 0 {
+ panic(fmt.Sprintf("offset overflow %d %d", x, y))
+ }
+ return z
+}
+
+// mergeSym merges two symbolic offsets. There is no real merging of
+// offsets, we just pick the non-nil one.
+func mergeSym(x, y interface{}) interface{} {
+ if x == nil {
+ return y
+ }
+ if y == nil {
+ return x
+ }
+ panic(fmt.Sprintf("mergeSym with two non-nil syms %s %s", x, y))
+}
+func canMergeSym(x, y interface{}) bool {
+ return x == nil || y == nil
+}
+
+func inBounds8(idx, len int64) bool { return int8(idx) >= 0 && int8(idx) < int8(len) }
+func inBounds16(idx, len int64) bool { return int16(idx) >= 0 && int16(idx) < int16(len) }
+func inBounds32(idx, len int64) bool { return int32(idx) >= 0 && int32(idx) < int32(len) }
+func inBounds64(idx, len int64) bool { return idx >= 0 && idx < len }
+func sliceInBounds32(idx, len int64) bool { return int32(idx) >= 0 && int32(idx) <= int32(len) }
+func sliceInBounds64(idx, len int64) bool { return idx >= 0 && idx <= len }
+
+// nlz returns the number of leading zeros.
+func nlz(x int64) int64 {
+	// log2(0) == -1, so nlz(0) == 64
+ return 63 - log2(x)
+}
+
+// ntz returns the number of trailing zeros.
+func ntz(x int64) int64 {
+ return 64 - nlz(^x&(x-1))
+}
+
+// nlo returns the number of leading ones.
+func nlo(x int64) int64 {
+ return nlz(^x)
+}
+
+// nto returns the number of trailing ones.
+func nto(x int64) int64 {
+ return ntz(^x)
+}
+
+// log2 returns logarithm in base 2 of uint64(n), with log2(0) = -1.
+func log2(n int64) (l int64) {
+ l = -1
+ x := uint64(n)
+ for ; x >= 0x8000; x >>= 16 {
+ l += 16
+ }
+ if x >= 0x80 {
+ x >>= 8
+ l += 8
+ }
+ if x >= 0x8 {
+ x >>= 4
+ l += 4
+ }
+ if x >= 0x2 {
+ x >>= 2
+ l += 2
+ }
+ if x >= 0x1 {
+ l++
+ }
+ return
+}
+
+// isPowerOfTwo reports whether n is a power of 2.
+func isPowerOfTwo(n int64) bool {
+ return n > 0 && n&(n-1) == 0
+}
+
+// is32Bit reports whether n can be represented as a signed 32-bit integer.
+func is32Bit(n int64) bool {
+ return n == int64(int32(n))
+}
+
+// b2i translates a boolean value to 0 or 1 for assigning to AuxInt.
+func b2i(b bool) int64 {
+ if b {
+ return 1
+ }
+ return 0
+}
+
+// f2i is used in the rules for storing a float in AuxInt.
+func f2i(f float64) int64 {
+ return int64(math.Float64bits(f))
+}
+
+// uaddOvf reports whether unsigned a+b would overflow.
+func uaddOvf(a, b int64) bool {
+ return uint64(a)+uint64(b) < uint64(a)
+}
+
+// isSamePtr reports whether p1 and p2 point to the same address.
+func isSamePtr(p1, p2 *Value) bool {
+ if p1 == p2 {
+ return true
+ }
+	// Aux isn't used in OffPtr, and AuxInt isn't currently used in
+	// Addr, but this still works as the values will be nil/0.
+ return (p1.Op == OpOffPtr || p1.Op == OpAddr) && p1.Op == p2.Op &&
+ p1.Aux == p2.Aux && p1.AuxInt == p2.AuxInt &&
+ p1.Args[0] == p2.Args[0]
+}
+
+// DUFFZERO consists of repeated blocks of 4 MOVUPSs + ADD.
+// See runtime/mkduff.go.
+const (
+ dzBlocks = 16 // number of MOV/ADD blocks
+ dzBlockLen = 4 // number of clears per block
+ dzBlockSize = 19 // size of instructions in a single block
+ dzMovSize = 4 // size of single MOV instruction w/ offset
+ dzAddSize = 4 // size of single ADD instruction
+ dzClearStep = 16 // number of bytes cleared by each MOV instruction
+
+ dzTailLen = 4 // number of final STOSQ instructions
+ dzTailSize = 2 // size of single STOSQ instruction
+
+ dzClearLen = dzClearStep * dzBlockLen // bytes cleared by one block
+ dzSize = dzBlocks * dzBlockSize
+)
+
+func duffStart(size int64) int64 {
+ x, _ := duff(size)
+ return x
+}
+func duffAdj(size int64) int64 {
+ _, x := duff(size)
+ return x
+}
+
+// duff returns the offset (from duffzero, in bytes) and pointer adjust (in bytes)
+// required to use the duffzero mechanism for a block of the given size.
+func duff(size int64) (int64, int64) {
+ if size < 32 || size > 1024 || size%dzClearStep != 0 {
+ panic("bad duffzero size")
+ }
+ // TODO: arch-dependent
+ steps := size / dzClearStep
+ blocks := steps / dzBlockLen
+ steps %= dzBlockLen
+ off := dzBlockSize * (dzBlocks - blocks)
+ var adj int64
+ if steps != 0 {
+ off -= dzAddSize
+ off -= dzMovSize * steps
+ adj -= dzClearStep * (dzBlockLen - steps)
+ }
+ return off, adj
+}
--- /dev/null
+// autogenerated from gen/AMD64.rules: do not edit!
+// generated with: cd gen; go run *.go
+
+package ssa
+
+import "math"
+
+var _ = math.MinInt8 // in case not otherwise used
+func rewriteValueAMD64(v *Value, config *Config) bool {
+ switch v.Op {
+ case OpAMD64ADDB:
+ return rewriteValueAMD64_OpAMD64ADDB(v, config)
+ case OpAMD64ADDBconst:
+ return rewriteValueAMD64_OpAMD64ADDBconst(v, config)
+ case OpAMD64ADDL:
+ return rewriteValueAMD64_OpAMD64ADDL(v, config)
+ case OpAMD64ADDLconst:
+ return rewriteValueAMD64_OpAMD64ADDLconst(v, config)
+ case OpAMD64ADDQ:
+ return rewriteValueAMD64_OpAMD64ADDQ(v, config)
+ case OpAMD64ADDQconst:
+ return rewriteValueAMD64_OpAMD64ADDQconst(v, config)
+ case OpAMD64ADDW:
+ return rewriteValueAMD64_OpAMD64ADDW(v, config)
+ case OpAMD64ADDWconst:
+ return rewriteValueAMD64_OpAMD64ADDWconst(v, config)
+ case OpAMD64ANDB:
+ return rewriteValueAMD64_OpAMD64ANDB(v, config)
+ case OpAMD64ANDBconst:
+ return rewriteValueAMD64_OpAMD64ANDBconst(v, config)
+ case OpAMD64ANDL:
+ return rewriteValueAMD64_OpAMD64ANDL(v, config)
+ case OpAMD64ANDLconst:
+ return rewriteValueAMD64_OpAMD64ANDLconst(v, config)
+ case OpAMD64ANDQ:
+ return rewriteValueAMD64_OpAMD64ANDQ(v, config)
+ case OpAMD64ANDQconst:
+ return rewriteValueAMD64_OpAMD64ANDQconst(v, config)
+ case OpAMD64ANDW:
+ return rewriteValueAMD64_OpAMD64ANDW(v, config)
+ case OpAMD64ANDWconst:
+ return rewriteValueAMD64_OpAMD64ANDWconst(v, config)
+ case OpAdd16:
+ return rewriteValueAMD64_OpAdd16(v, config)
+ case OpAdd32:
+ return rewriteValueAMD64_OpAdd32(v, config)
+ case OpAdd32F:
+ return rewriteValueAMD64_OpAdd32F(v, config)
+ case OpAdd64:
+ return rewriteValueAMD64_OpAdd64(v, config)
+ case OpAdd64F:
+ return rewriteValueAMD64_OpAdd64F(v, config)
+ case OpAdd8:
+ return rewriteValueAMD64_OpAdd8(v, config)
+ case OpAddPtr:
+ return rewriteValueAMD64_OpAddPtr(v, config)
+ case OpAddr:
+ return rewriteValueAMD64_OpAddr(v, config)
+ case OpAnd16:
+ return rewriteValueAMD64_OpAnd16(v, config)
+ case OpAnd32:
+ return rewriteValueAMD64_OpAnd32(v, config)
+ case OpAnd64:
+ return rewriteValueAMD64_OpAnd64(v, config)
+ case OpAnd8:
+ return rewriteValueAMD64_OpAnd8(v, config)
+ case OpAvg64u:
+ return rewriteValueAMD64_OpAvg64u(v, config)
+ case OpAMD64CMPB:
+ return rewriteValueAMD64_OpAMD64CMPB(v, config)
+ case OpAMD64CMPBconst:
+ return rewriteValueAMD64_OpAMD64CMPBconst(v, config)
+ case OpAMD64CMPL:
+ return rewriteValueAMD64_OpAMD64CMPL(v, config)
+ case OpAMD64CMPLconst:
+ return rewriteValueAMD64_OpAMD64CMPLconst(v, config)
+ case OpAMD64CMPQ:
+ return rewriteValueAMD64_OpAMD64CMPQ(v, config)
+ case OpAMD64CMPQconst:
+ return rewriteValueAMD64_OpAMD64CMPQconst(v, config)
+ case OpAMD64CMPW:
+ return rewriteValueAMD64_OpAMD64CMPW(v, config)
+ case OpAMD64CMPWconst:
+ return rewriteValueAMD64_OpAMD64CMPWconst(v, config)
+ case OpClosureCall:
+ return rewriteValueAMD64_OpClosureCall(v, config)
+ case OpCom16:
+ return rewriteValueAMD64_OpCom16(v, config)
+ case OpCom32:
+ return rewriteValueAMD64_OpCom32(v, config)
+ case OpCom64:
+ return rewriteValueAMD64_OpCom64(v, config)
+ case OpCom8:
+ return rewriteValueAMD64_OpCom8(v, config)
+ case OpConst16:
+ return rewriteValueAMD64_OpConst16(v, config)
+ case OpConst32:
+ return rewriteValueAMD64_OpConst32(v, config)
+ case OpConst32F:
+ return rewriteValueAMD64_OpConst32F(v, config)
+ case OpConst64:
+ return rewriteValueAMD64_OpConst64(v, config)
+ case OpConst64F:
+ return rewriteValueAMD64_OpConst64F(v, config)
+ case OpConst8:
+ return rewriteValueAMD64_OpConst8(v, config)
+ case OpConstBool:
+ return rewriteValueAMD64_OpConstBool(v, config)
+ case OpConstNil:
+ return rewriteValueAMD64_OpConstNil(v, config)
+ case OpConvert:
+ return rewriteValueAMD64_OpConvert(v, config)
+ case OpCvt32Fto32:
+ return rewriteValueAMD64_OpCvt32Fto32(v, config)
+ case OpCvt32Fto64:
+ return rewriteValueAMD64_OpCvt32Fto64(v, config)
+ case OpCvt32Fto64F:
+ return rewriteValueAMD64_OpCvt32Fto64F(v, config)
+ case OpCvt32to32F:
+ return rewriteValueAMD64_OpCvt32to32F(v, config)
+ case OpCvt32to64F:
+ return rewriteValueAMD64_OpCvt32to64F(v, config)
+ case OpCvt64Fto32:
+ return rewriteValueAMD64_OpCvt64Fto32(v, config)
+ case OpCvt64Fto32F:
+ return rewriteValueAMD64_OpCvt64Fto32F(v, config)
+ case OpCvt64Fto64:
+ return rewriteValueAMD64_OpCvt64Fto64(v, config)
+ case OpCvt64to32F:
+ return rewriteValueAMD64_OpCvt64to32F(v, config)
+ case OpCvt64to64F:
+ return rewriteValueAMD64_OpCvt64to64F(v, config)
+ case OpDeferCall:
+ return rewriteValueAMD64_OpDeferCall(v, config)
+ case OpDiv16:
+ return rewriteValueAMD64_OpDiv16(v, config)
+ case OpDiv16u:
+ return rewriteValueAMD64_OpDiv16u(v, config)
+ case OpDiv32:
+ return rewriteValueAMD64_OpDiv32(v, config)
+ case OpDiv32F:
+ return rewriteValueAMD64_OpDiv32F(v, config)
+ case OpDiv32u:
+ return rewriteValueAMD64_OpDiv32u(v, config)
+ case OpDiv64:
+ return rewriteValueAMD64_OpDiv64(v, config)
+ case OpDiv64F:
+ return rewriteValueAMD64_OpDiv64F(v, config)
+ case OpDiv64u:
+ return rewriteValueAMD64_OpDiv64u(v, config)
+ case OpDiv8:
+ return rewriteValueAMD64_OpDiv8(v, config)
+ case OpDiv8u:
+ return rewriteValueAMD64_OpDiv8u(v, config)
+ case OpEq16:
+ return rewriteValueAMD64_OpEq16(v, config)
+ case OpEq32:
+ return rewriteValueAMD64_OpEq32(v, config)
+ case OpEq32F:
+ return rewriteValueAMD64_OpEq32F(v, config)
+ case OpEq64:
+ return rewriteValueAMD64_OpEq64(v, config)
+ case OpEq64F:
+ return rewriteValueAMD64_OpEq64F(v, config)
+ case OpEq8:
+ return rewriteValueAMD64_OpEq8(v, config)
+ case OpEqPtr:
+ return rewriteValueAMD64_OpEqPtr(v, config)
+ case OpGeq16:
+ return rewriteValueAMD64_OpGeq16(v, config)
+ case OpGeq16U:
+ return rewriteValueAMD64_OpGeq16U(v, config)
+ case OpGeq32:
+ return rewriteValueAMD64_OpGeq32(v, config)
+ case OpGeq32F:
+ return rewriteValueAMD64_OpGeq32F(v, config)
+ case OpGeq32U:
+ return rewriteValueAMD64_OpGeq32U(v, config)
+ case OpGeq64:
+ return rewriteValueAMD64_OpGeq64(v, config)
+ case OpGeq64F:
+ return rewriteValueAMD64_OpGeq64F(v, config)
+ case OpGeq64U:
+ return rewriteValueAMD64_OpGeq64U(v, config)
+ case OpGeq8:
+ return rewriteValueAMD64_OpGeq8(v, config)
+ case OpGeq8U:
+ return rewriteValueAMD64_OpGeq8U(v, config)
+ case OpGetClosurePtr:
+ return rewriteValueAMD64_OpGetClosurePtr(v, config)
+ case OpGetG:
+ return rewriteValueAMD64_OpGetG(v, config)
+ case OpGoCall:
+ return rewriteValueAMD64_OpGoCall(v, config)
+ case OpGreater16:
+ return rewriteValueAMD64_OpGreater16(v, config)
+ case OpGreater16U:
+ return rewriteValueAMD64_OpGreater16U(v, config)
+ case OpGreater32:
+ return rewriteValueAMD64_OpGreater32(v, config)
+ case OpGreater32F:
+ return rewriteValueAMD64_OpGreater32F(v, config)
+ case OpGreater32U:
+ return rewriteValueAMD64_OpGreater32U(v, config)
+ case OpGreater64:
+ return rewriteValueAMD64_OpGreater64(v, config)
+ case OpGreater64F:
+ return rewriteValueAMD64_OpGreater64F(v, config)
+ case OpGreater64U:
+ return rewriteValueAMD64_OpGreater64U(v, config)
+ case OpGreater8:
+ return rewriteValueAMD64_OpGreater8(v, config)
+ case OpGreater8U:
+ return rewriteValueAMD64_OpGreater8U(v, config)
+ case OpHmul16:
+ return rewriteValueAMD64_OpHmul16(v, config)
+ case OpHmul16u:
+ return rewriteValueAMD64_OpHmul16u(v, config)
+ case OpHmul32:
+ return rewriteValueAMD64_OpHmul32(v, config)
+ case OpHmul32u:
+ return rewriteValueAMD64_OpHmul32u(v, config)
+ case OpHmul64:
+ return rewriteValueAMD64_OpHmul64(v, config)
+ case OpHmul64u:
+ return rewriteValueAMD64_OpHmul64u(v, config)
+ case OpHmul8:
+ return rewriteValueAMD64_OpHmul8(v, config)
+ case OpHmul8u:
+ return rewriteValueAMD64_OpHmul8u(v, config)
+ case OpITab:
+ return rewriteValueAMD64_OpITab(v, config)
+ case OpInterCall:
+ return rewriteValueAMD64_OpInterCall(v, config)
+ case OpIsInBounds:
+ return rewriteValueAMD64_OpIsInBounds(v, config)
+ case OpIsNonNil:
+ return rewriteValueAMD64_OpIsNonNil(v, config)
+ case OpIsSliceInBounds:
+ return rewriteValueAMD64_OpIsSliceInBounds(v, config)
+ case OpAMD64LEAQ:
+ return rewriteValueAMD64_OpAMD64LEAQ(v, config)
+ case OpAMD64LEAQ1:
+ return rewriteValueAMD64_OpAMD64LEAQ1(v, config)
+ case OpAMD64LEAQ2:
+ return rewriteValueAMD64_OpAMD64LEAQ2(v, config)
+ case OpAMD64LEAQ4:
+ return rewriteValueAMD64_OpAMD64LEAQ4(v, config)
+ case OpAMD64LEAQ8:
+ return rewriteValueAMD64_OpAMD64LEAQ8(v, config)
+ case OpLeq16:
+ return rewriteValueAMD64_OpLeq16(v, config)
+ case OpLeq16U:
+ return rewriteValueAMD64_OpLeq16U(v, config)
+ case OpLeq32:
+ return rewriteValueAMD64_OpLeq32(v, config)
+ case OpLeq32F:
+ return rewriteValueAMD64_OpLeq32F(v, config)
+ case OpLeq32U:
+ return rewriteValueAMD64_OpLeq32U(v, config)
+ case OpLeq64:
+ return rewriteValueAMD64_OpLeq64(v, config)
+ case OpLeq64F:
+ return rewriteValueAMD64_OpLeq64F(v, config)
+ case OpLeq64U:
+ return rewriteValueAMD64_OpLeq64U(v, config)
+ case OpLeq8:
+ return rewriteValueAMD64_OpLeq8(v, config)
+ case OpLeq8U:
+ return rewriteValueAMD64_OpLeq8U(v, config)
+ case OpLess16:
+ return rewriteValueAMD64_OpLess16(v, config)
+ case OpLess16U:
+ return rewriteValueAMD64_OpLess16U(v, config)
+ case OpLess32:
+ return rewriteValueAMD64_OpLess32(v, config)
+ case OpLess32F:
+ return rewriteValueAMD64_OpLess32F(v, config)
+ case OpLess32U:
+ return rewriteValueAMD64_OpLess32U(v, config)
+ case OpLess64:
+ return rewriteValueAMD64_OpLess64(v, config)
+ case OpLess64F:
+ return rewriteValueAMD64_OpLess64F(v, config)
+ case OpLess64U:
+ return rewriteValueAMD64_OpLess64U(v, config)
+ case OpLess8:
+ return rewriteValueAMD64_OpLess8(v, config)
+ case OpLess8U:
+ return rewriteValueAMD64_OpLess8U(v, config)
+ case OpLoad:
+ return rewriteValueAMD64_OpLoad(v, config)
+ case OpLrot16:
+ return rewriteValueAMD64_OpLrot16(v, config)
+ case OpLrot32:
+ return rewriteValueAMD64_OpLrot32(v, config)
+ case OpLrot64:
+ return rewriteValueAMD64_OpLrot64(v, config)
+ case OpLrot8:
+ return rewriteValueAMD64_OpLrot8(v, config)
+ case OpLsh16x16:
+ return rewriteValueAMD64_OpLsh16x16(v, config)
+ case OpLsh16x32:
+ return rewriteValueAMD64_OpLsh16x32(v, config)
+ case OpLsh16x64:
+ return rewriteValueAMD64_OpLsh16x64(v, config)
+ case OpLsh16x8:
+ return rewriteValueAMD64_OpLsh16x8(v, config)
+ case OpLsh32x16:
+ return rewriteValueAMD64_OpLsh32x16(v, config)
+ case OpLsh32x32:
+ return rewriteValueAMD64_OpLsh32x32(v, config)
+ case OpLsh32x64:
+ return rewriteValueAMD64_OpLsh32x64(v, config)
+ case OpLsh32x8:
+ return rewriteValueAMD64_OpLsh32x8(v, config)
+ case OpLsh64x16:
+ return rewriteValueAMD64_OpLsh64x16(v, config)
+ case OpLsh64x32:
+ return rewriteValueAMD64_OpLsh64x32(v, config)
+ case OpLsh64x64:
+ return rewriteValueAMD64_OpLsh64x64(v, config)
+ case OpLsh64x8:
+ return rewriteValueAMD64_OpLsh64x8(v, config)
+ case OpLsh8x16:
+ return rewriteValueAMD64_OpLsh8x16(v, config)
+ case OpLsh8x32:
+ return rewriteValueAMD64_OpLsh8x32(v, config)
+ case OpLsh8x64:
+ return rewriteValueAMD64_OpLsh8x64(v, config)
+ case OpLsh8x8:
+ return rewriteValueAMD64_OpLsh8x8(v, config)
+ case OpAMD64MOVBQSX:
+ return rewriteValueAMD64_OpAMD64MOVBQSX(v, config)
+ case OpAMD64MOVBQZX:
+ return rewriteValueAMD64_OpAMD64MOVBQZX(v, config)
+ case OpAMD64MOVBload:
+ return rewriteValueAMD64_OpAMD64MOVBload(v, config)
+ case OpAMD64MOVBloadidx1:
+ return rewriteValueAMD64_OpAMD64MOVBloadidx1(v, config)
+ case OpAMD64MOVBstore:
+ return rewriteValueAMD64_OpAMD64MOVBstore(v, config)
+ case OpAMD64MOVBstoreconst:
+ return rewriteValueAMD64_OpAMD64MOVBstoreconst(v, config)
+ case OpAMD64MOVBstoreconstidx1:
+ return rewriteValueAMD64_OpAMD64MOVBstoreconstidx1(v, config)
+ case OpAMD64MOVBstoreidx1:
+ return rewriteValueAMD64_OpAMD64MOVBstoreidx1(v, config)
+ case OpAMD64MOVLQSX:
+ return rewriteValueAMD64_OpAMD64MOVLQSX(v, config)
+ case OpAMD64MOVLQZX:
+ return rewriteValueAMD64_OpAMD64MOVLQZX(v, config)
+ case OpAMD64MOVLload:
+ return rewriteValueAMD64_OpAMD64MOVLload(v, config)
+ case OpAMD64MOVLloadidx4:
+ return rewriteValueAMD64_OpAMD64MOVLloadidx4(v, config)
+ case OpAMD64MOVLstore:
+ return rewriteValueAMD64_OpAMD64MOVLstore(v, config)
+ case OpAMD64MOVLstoreconst:
+ return rewriteValueAMD64_OpAMD64MOVLstoreconst(v, config)
+ case OpAMD64MOVLstoreconstidx4:
+ return rewriteValueAMD64_OpAMD64MOVLstoreconstidx4(v, config)
+ case OpAMD64MOVLstoreidx4:
+ return rewriteValueAMD64_OpAMD64MOVLstoreidx4(v, config)
+ case OpAMD64MOVOload:
+ return rewriteValueAMD64_OpAMD64MOVOload(v, config)
+ case OpAMD64MOVOstore:
+ return rewriteValueAMD64_OpAMD64MOVOstore(v, config)
+ case OpAMD64MOVQload:
+ return rewriteValueAMD64_OpAMD64MOVQload(v, config)
+ case OpAMD64MOVQloadidx8:
+ return rewriteValueAMD64_OpAMD64MOVQloadidx8(v, config)
+ case OpAMD64MOVQstore:
+ return rewriteValueAMD64_OpAMD64MOVQstore(v, config)
+ case OpAMD64MOVQstoreconst:
+ return rewriteValueAMD64_OpAMD64MOVQstoreconst(v, config)
+ case OpAMD64MOVQstoreconstidx8:
+ return rewriteValueAMD64_OpAMD64MOVQstoreconstidx8(v, config)
+ case OpAMD64MOVQstoreidx8:
+ return rewriteValueAMD64_OpAMD64MOVQstoreidx8(v, config)
+ case OpAMD64MOVSDload:
+ return rewriteValueAMD64_OpAMD64MOVSDload(v, config)
+ case OpAMD64MOVSDloadidx8:
+ return rewriteValueAMD64_OpAMD64MOVSDloadidx8(v, config)
+ case OpAMD64MOVSDstore:
+ return rewriteValueAMD64_OpAMD64MOVSDstore(v, config)
+ case OpAMD64MOVSDstoreidx8:
+ return rewriteValueAMD64_OpAMD64MOVSDstoreidx8(v, config)
+ case OpAMD64MOVSSload:
+ return rewriteValueAMD64_OpAMD64MOVSSload(v, config)
+ case OpAMD64MOVSSloadidx4:
+ return rewriteValueAMD64_OpAMD64MOVSSloadidx4(v, config)
+ case OpAMD64MOVSSstore:
+ return rewriteValueAMD64_OpAMD64MOVSSstore(v, config)
+ case OpAMD64MOVSSstoreidx4:
+ return rewriteValueAMD64_OpAMD64MOVSSstoreidx4(v, config)
+ case OpAMD64MOVWQSX:
+ return rewriteValueAMD64_OpAMD64MOVWQSX(v, config)
+ case OpAMD64MOVWQZX:
+ return rewriteValueAMD64_OpAMD64MOVWQZX(v, config)
+ case OpAMD64MOVWload:
+ return rewriteValueAMD64_OpAMD64MOVWload(v, config)
+ case OpAMD64MOVWloadidx2:
+ return rewriteValueAMD64_OpAMD64MOVWloadidx2(v, config)
+ case OpAMD64MOVWstore:
+ return rewriteValueAMD64_OpAMD64MOVWstore(v, config)
+ case OpAMD64MOVWstoreconst:
+ return rewriteValueAMD64_OpAMD64MOVWstoreconst(v, config)
+ case OpAMD64MOVWstoreconstidx2:
+ return rewriteValueAMD64_OpAMD64MOVWstoreconstidx2(v, config)
+ case OpAMD64MOVWstoreidx2:
+ return rewriteValueAMD64_OpAMD64MOVWstoreidx2(v, config)
+ case OpAMD64MULB:
+ return rewriteValueAMD64_OpAMD64MULB(v, config)
+ case OpAMD64MULBconst:
+ return rewriteValueAMD64_OpAMD64MULBconst(v, config)
+ case OpAMD64MULL:
+ return rewriteValueAMD64_OpAMD64MULL(v, config)
+ case OpAMD64MULLconst:
+ return rewriteValueAMD64_OpAMD64MULLconst(v, config)
+ case OpAMD64MULQ:
+ return rewriteValueAMD64_OpAMD64MULQ(v, config)
+ case OpAMD64MULQconst:
+ return rewriteValueAMD64_OpAMD64MULQconst(v, config)
+ case OpAMD64MULW:
+ return rewriteValueAMD64_OpAMD64MULW(v, config)
+ case OpAMD64MULWconst:
+ return rewriteValueAMD64_OpAMD64MULWconst(v, config)
+ case OpMod16:
+ return rewriteValueAMD64_OpMod16(v, config)
+ case OpMod16u:
+ return rewriteValueAMD64_OpMod16u(v, config)
+ case OpMod32:
+ return rewriteValueAMD64_OpMod32(v, config)
+ case OpMod32u:
+ return rewriteValueAMD64_OpMod32u(v, config)
+ case OpMod64:
+ return rewriteValueAMD64_OpMod64(v, config)
+ case OpMod64u:
+ return rewriteValueAMD64_OpMod64u(v, config)
+ case OpMod8:
+ return rewriteValueAMD64_OpMod8(v, config)
+ case OpMod8u:
+ return rewriteValueAMD64_OpMod8u(v, config)
+ case OpMove:
+ return rewriteValueAMD64_OpMove(v, config)
+ case OpMul16:
+ return rewriteValueAMD64_OpMul16(v, config)
+ case OpMul32:
+ return rewriteValueAMD64_OpMul32(v, config)
+ case OpMul32F:
+ return rewriteValueAMD64_OpMul32F(v, config)
+ case OpMul64:
+ return rewriteValueAMD64_OpMul64(v, config)
+ case OpMul64F:
+ return rewriteValueAMD64_OpMul64F(v, config)
+ case OpMul8:
+ return rewriteValueAMD64_OpMul8(v, config)
+ case OpAMD64NEGB:
+ return rewriteValueAMD64_OpAMD64NEGB(v, config)
+ case OpAMD64NEGL:
+ return rewriteValueAMD64_OpAMD64NEGL(v, config)
+ case OpAMD64NEGQ:
+ return rewriteValueAMD64_OpAMD64NEGQ(v, config)
+ case OpAMD64NEGW:
+ return rewriteValueAMD64_OpAMD64NEGW(v, config)
+ case OpAMD64NOTB:
+ return rewriteValueAMD64_OpAMD64NOTB(v, config)
+ case OpAMD64NOTL:
+ return rewriteValueAMD64_OpAMD64NOTL(v, config)
+ case OpAMD64NOTQ:
+ return rewriteValueAMD64_OpAMD64NOTQ(v, config)
+ case OpAMD64NOTW:
+ return rewriteValueAMD64_OpAMD64NOTW(v, config)
+ case OpNeg16:
+ return rewriteValueAMD64_OpNeg16(v, config)
+ case OpNeg32:
+ return rewriteValueAMD64_OpNeg32(v, config)
+ case OpNeg32F:
+ return rewriteValueAMD64_OpNeg32F(v, config)
+ case OpNeg64:
+ return rewriteValueAMD64_OpNeg64(v, config)
+ case OpNeg64F:
+ return rewriteValueAMD64_OpNeg64F(v, config)
+ case OpNeg8:
+ return rewriteValueAMD64_OpNeg8(v, config)
+ case OpNeq16:
+ return rewriteValueAMD64_OpNeq16(v, config)
+ case OpNeq32:
+ return rewriteValueAMD64_OpNeq32(v, config)
+ case OpNeq32F:
+ return rewriteValueAMD64_OpNeq32F(v, config)
+ case OpNeq64:
+ return rewriteValueAMD64_OpNeq64(v, config)
+ case OpNeq64F:
+ return rewriteValueAMD64_OpNeq64F(v, config)
+ case OpNeq8:
+ return rewriteValueAMD64_OpNeq8(v, config)
+ case OpNeqPtr:
+ return rewriteValueAMD64_OpNeqPtr(v, config)
+ case OpNilCheck:
+ return rewriteValueAMD64_OpNilCheck(v, config)
+ case OpNot:
+ return rewriteValueAMD64_OpNot(v, config)
+ case OpAMD64ORB:
+ return rewriteValueAMD64_OpAMD64ORB(v, config)
+ case OpAMD64ORBconst:
+ return rewriteValueAMD64_OpAMD64ORBconst(v, config)
+ case OpAMD64ORL:
+ return rewriteValueAMD64_OpAMD64ORL(v, config)
+ case OpAMD64ORLconst:
+ return rewriteValueAMD64_OpAMD64ORLconst(v, config)
+ case OpAMD64ORQ:
+ return rewriteValueAMD64_OpAMD64ORQ(v, config)
+ case OpAMD64ORQconst:
+ return rewriteValueAMD64_OpAMD64ORQconst(v, config)
+ case OpAMD64ORW:
+ return rewriteValueAMD64_OpAMD64ORW(v, config)
+ case OpAMD64ORWconst:
+ return rewriteValueAMD64_OpAMD64ORWconst(v, config)
+ case OpOffPtr:
+ return rewriteValueAMD64_OpOffPtr(v, config)
+ case OpOr16:
+ return rewriteValueAMD64_OpOr16(v, config)
+ case OpOr32:
+ return rewriteValueAMD64_OpOr32(v, config)
+ case OpOr64:
+ return rewriteValueAMD64_OpOr64(v, config)
+ case OpOr8:
+ return rewriteValueAMD64_OpOr8(v, config)
+ case OpRsh16Ux16:
+ return rewriteValueAMD64_OpRsh16Ux16(v, config)
+ case OpRsh16Ux32:
+ return rewriteValueAMD64_OpRsh16Ux32(v, config)
+ case OpRsh16Ux64:
+ return rewriteValueAMD64_OpRsh16Ux64(v, config)
+ case OpRsh16Ux8:
+ return rewriteValueAMD64_OpRsh16Ux8(v, config)
+ case OpRsh16x16:
+ return rewriteValueAMD64_OpRsh16x16(v, config)
+ case OpRsh16x32:
+ return rewriteValueAMD64_OpRsh16x32(v, config)
+ case OpRsh16x64:
+ return rewriteValueAMD64_OpRsh16x64(v, config)
+ case OpRsh16x8:
+ return rewriteValueAMD64_OpRsh16x8(v, config)
+ case OpRsh32Ux16:
+ return rewriteValueAMD64_OpRsh32Ux16(v, config)
+ case OpRsh32Ux32:
+ return rewriteValueAMD64_OpRsh32Ux32(v, config)
+ case OpRsh32Ux64:
+ return rewriteValueAMD64_OpRsh32Ux64(v, config)
+ case OpRsh32Ux8:
+ return rewriteValueAMD64_OpRsh32Ux8(v, config)
+ case OpRsh32x16:
+ return rewriteValueAMD64_OpRsh32x16(v, config)
+ case OpRsh32x32:
+ return rewriteValueAMD64_OpRsh32x32(v, config)
+ case OpRsh32x64:
+ return rewriteValueAMD64_OpRsh32x64(v, config)
+ case OpRsh32x8:
+ return rewriteValueAMD64_OpRsh32x8(v, config)
+ case OpRsh64Ux16:
+ return rewriteValueAMD64_OpRsh64Ux16(v, config)
+ case OpRsh64Ux32:
+ return rewriteValueAMD64_OpRsh64Ux32(v, config)
+ case OpRsh64Ux64:
+ return rewriteValueAMD64_OpRsh64Ux64(v, config)
+ case OpRsh64Ux8:
+ return rewriteValueAMD64_OpRsh64Ux8(v, config)
+ case OpRsh64x16:
+ return rewriteValueAMD64_OpRsh64x16(v, config)
+ case OpRsh64x32:
+ return rewriteValueAMD64_OpRsh64x32(v, config)
+ case OpRsh64x64:
+ return rewriteValueAMD64_OpRsh64x64(v, config)
+ case OpRsh64x8:
+ return rewriteValueAMD64_OpRsh64x8(v, config)
+ case OpRsh8Ux16:
+ return rewriteValueAMD64_OpRsh8Ux16(v, config)
+ case OpRsh8Ux32:
+ return rewriteValueAMD64_OpRsh8Ux32(v, config)
+ case OpRsh8Ux64:
+ return rewriteValueAMD64_OpRsh8Ux64(v, config)
+ case OpRsh8Ux8:
+ return rewriteValueAMD64_OpRsh8Ux8(v, config)
+ case OpRsh8x16:
+ return rewriteValueAMD64_OpRsh8x16(v, config)
+ case OpRsh8x32:
+ return rewriteValueAMD64_OpRsh8x32(v, config)
+ case OpRsh8x64:
+ return rewriteValueAMD64_OpRsh8x64(v, config)
+ case OpRsh8x8:
+ return rewriteValueAMD64_OpRsh8x8(v, config)
+ case OpAMD64SARB:
+ return rewriteValueAMD64_OpAMD64SARB(v, config)
+ case OpAMD64SARBconst:
+ return rewriteValueAMD64_OpAMD64SARBconst(v, config)
+ case OpAMD64SARL:
+ return rewriteValueAMD64_OpAMD64SARL(v, config)
+ case OpAMD64SARLconst:
+ return rewriteValueAMD64_OpAMD64SARLconst(v, config)
+ case OpAMD64SARQ:
+ return rewriteValueAMD64_OpAMD64SARQ(v, config)
+ case OpAMD64SARQconst:
+ return rewriteValueAMD64_OpAMD64SARQconst(v, config)
+ case OpAMD64SARW:
+ return rewriteValueAMD64_OpAMD64SARW(v, config)
+ case OpAMD64SARWconst:
+ return rewriteValueAMD64_OpAMD64SARWconst(v, config)
+ case OpAMD64SBBLcarrymask:
+ return rewriteValueAMD64_OpAMD64SBBLcarrymask(v, config)
+ case OpAMD64SBBQcarrymask:
+ return rewriteValueAMD64_OpAMD64SBBQcarrymask(v, config)
+ case OpAMD64SETA:
+ return rewriteValueAMD64_OpAMD64SETA(v, config)
+ case OpAMD64SETAE:
+ return rewriteValueAMD64_OpAMD64SETAE(v, config)
+ case OpAMD64SETB:
+ return rewriteValueAMD64_OpAMD64SETB(v, config)
+ case OpAMD64SETBE:
+ return rewriteValueAMD64_OpAMD64SETBE(v, config)
+ case OpAMD64SETEQ:
+ return rewriteValueAMD64_OpAMD64SETEQ(v, config)
+ case OpAMD64SETG:
+ return rewriteValueAMD64_OpAMD64SETG(v, config)
+ case OpAMD64SETGE:
+ return rewriteValueAMD64_OpAMD64SETGE(v, config)
+ case OpAMD64SETL:
+ return rewriteValueAMD64_OpAMD64SETL(v, config)
+ case OpAMD64SETLE:
+ return rewriteValueAMD64_OpAMD64SETLE(v, config)
+ case OpAMD64SETNE:
+ return rewriteValueAMD64_OpAMD64SETNE(v, config)
+ case OpAMD64SHLB:
+ return rewriteValueAMD64_OpAMD64SHLB(v, config)
+ case OpAMD64SHLL:
+ return rewriteValueAMD64_OpAMD64SHLL(v, config)
+ case OpAMD64SHLQ:
+ return rewriteValueAMD64_OpAMD64SHLQ(v, config)
+ case OpAMD64SHLW:
+ return rewriteValueAMD64_OpAMD64SHLW(v, config)
+ case OpAMD64SHRB:
+ return rewriteValueAMD64_OpAMD64SHRB(v, config)
+ case OpAMD64SHRL:
+ return rewriteValueAMD64_OpAMD64SHRL(v, config)
+ case OpAMD64SHRQ:
+ return rewriteValueAMD64_OpAMD64SHRQ(v, config)
+ case OpAMD64SHRW:
+ return rewriteValueAMD64_OpAMD64SHRW(v, config)
+ case OpAMD64SUBB:
+ return rewriteValueAMD64_OpAMD64SUBB(v, config)
+ case OpAMD64SUBBconst:
+ return rewriteValueAMD64_OpAMD64SUBBconst(v, config)
+ case OpAMD64SUBL:
+ return rewriteValueAMD64_OpAMD64SUBL(v, config)
+ case OpAMD64SUBLconst:
+ return rewriteValueAMD64_OpAMD64SUBLconst(v, config)
+ case OpAMD64SUBQ:
+ return rewriteValueAMD64_OpAMD64SUBQ(v, config)
+ case OpAMD64SUBQconst:
+ return rewriteValueAMD64_OpAMD64SUBQconst(v, config)
+ case OpAMD64SUBW:
+ return rewriteValueAMD64_OpAMD64SUBW(v, config)
+ case OpAMD64SUBWconst:
+ return rewriteValueAMD64_OpAMD64SUBWconst(v, config)
+ case OpSignExt16to32:
+ return rewriteValueAMD64_OpSignExt16to32(v, config)
+ case OpSignExt16to64:
+ return rewriteValueAMD64_OpSignExt16to64(v, config)
+ case OpSignExt32to64:
+ return rewriteValueAMD64_OpSignExt32to64(v, config)
+ case OpSignExt8to16:
+ return rewriteValueAMD64_OpSignExt8to16(v, config)
+ case OpSignExt8to32:
+ return rewriteValueAMD64_OpSignExt8to32(v, config)
+ case OpSignExt8to64:
+ return rewriteValueAMD64_OpSignExt8to64(v, config)
+ case OpSqrt:
+ return rewriteValueAMD64_OpSqrt(v, config)
+ case OpStaticCall:
+ return rewriteValueAMD64_OpStaticCall(v, config)
+ case OpStore:
+ return rewriteValueAMD64_OpStore(v, config)
+ case OpSub16:
+ return rewriteValueAMD64_OpSub16(v, config)
+ case OpSub32:
+ return rewriteValueAMD64_OpSub32(v, config)
+ case OpSub32F:
+ return rewriteValueAMD64_OpSub32F(v, config)
+ case OpSub64:
+ return rewriteValueAMD64_OpSub64(v, config)
+ case OpSub64F:
+ return rewriteValueAMD64_OpSub64F(v, config)
+ case OpSub8:
+ return rewriteValueAMD64_OpSub8(v, config)
+ case OpSubPtr:
+ return rewriteValueAMD64_OpSubPtr(v, config)
+ case OpTrunc16to8:
+ return rewriteValueAMD64_OpTrunc16to8(v, config)
+ case OpTrunc32to16:
+ return rewriteValueAMD64_OpTrunc32to16(v, config)
+ case OpTrunc32to8:
+ return rewriteValueAMD64_OpTrunc32to8(v, config)
+ case OpTrunc64to16:
+ return rewriteValueAMD64_OpTrunc64to16(v, config)
+ case OpTrunc64to32:
+ return rewriteValueAMD64_OpTrunc64to32(v, config)
+ case OpTrunc64to8:
+ return rewriteValueAMD64_OpTrunc64to8(v, config)
+ case OpAMD64XORB:
+ return rewriteValueAMD64_OpAMD64XORB(v, config)
+ case OpAMD64XORBconst:
+ return rewriteValueAMD64_OpAMD64XORBconst(v, config)
+ case OpAMD64XORL:
+ return rewriteValueAMD64_OpAMD64XORL(v, config)
+ case OpAMD64XORLconst:
+ return rewriteValueAMD64_OpAMD64XORLconst(v, config)
+ case OpAMD64XORQ:
+ return rewriteValueAMD64_OpAMD64XORQ(v, config)
+ case OpAMD64XORQconst:
+ return rewriteValueAMD64_OpAMD64XORQconst(v, config)
+ case OpAMD64XORW:
+ return rewriteValueAMD64_OpAMD64XORW(v, config)
+ case OpAMD64XORWconst:
+ return rewriteValueAMD64_OpAMD64XORWconst(v, config)
+ case OpXor16:
+ return rewriteValueAMD64_OpXor16(v, config)
+ case OpXor32:
+ return rewriteValueAMD64_OpXor32(v, config)
+ case OpXor64:
+ return rewriteValueAMD64_OpXor64(v, config)
+ case OpXor8:
+ return rewriteValueAMD64_OpXor8(v, config)
+ case OpZero:
+ return rewriteValueAMD64_OpZero(v, config)
+ case OpZeroExt16to32:
+ return rewriteValueAMD64_OpZeroExt16to32(v, config)
+ case OpZeroExt16to64:
+ return rewriteValueAMD64_OpZeroExt16to64(v, config)
+ case OpZeroExt32to64:
+ return rewriteValueAMD64_OpZeroExt32to64(v, config)
+ case OpZeroExt8to16:
+ return rewriteValueAMD64_OpZeroExt8to16(v, config)
+ case OpZeroExt8to32:
+ return rewriteValueAMD64_OpZeroExt8to32(v, config)
+ case OpZeroExt8to64:
+ return rewriteValueAMD64_OpZeroExt8to64(v, config)
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ADDB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ADDB x (MOVBconst [c]))
+ // cond:
+ // result: (ADDBconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64ADDBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDB (MOVBconst [c]) x)
+ // cond:
+ // result: (ADDBconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64ADDBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDB x (NEGB y))
+ // cond:
+ // result: (SUBB x y)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64NEGB {
+ break
+ }
+ y := v.Args[1].Args[0]
+ v.reset(OpAMD64SUBB)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ADDBconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ADDBconst [c] x)
+ // cond: int8(c)==0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int8(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDBconst [c] (MOVBconst [d]))
+ // cond:
+ // result: (MOVBconst [c+d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = c + d
+ return true
+ }
+ // match: (ADDBconst [c] (ADDBconst [d] x))
+ // cond:
+ // result: (ADDBconst [c+d] x)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64ADDBconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64ADDBconst)
+ v.AuxInt = c + d
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ADDL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ADDL x (MOVLconst [c]))
+ // cond:
+ // result: (ADDLconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64ADDLconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDL (MOVLconst [c]) x)
+ // cond:
+ // result: (ADDLconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64ADDLconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDL x (NEGL y))
+ // cond:
+ // result: (SUBL x y)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64NEGL {
+ break
+ }
+ y := v.Args[1].Args[0]
+ v.reset(OpAMD64SUBL)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ADDLconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ADDLconst [c] x)
+ // cond: int32(c)==0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int32(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDLconst [c] (MOVLconst [d]))
+ // cond:
+ // result: (MOVLconst [c+d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = c + d
+ return true
+ }
+ // match: (ADDLconst [c] (ADDLconst [d] x))
+ // cond:
+ // result: (ADDLconst [c+d] x)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64ADDLconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64ADDLconst)
+ v.AuxInt = c + d
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ADDQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ADDQ x (MOVQconst [c]))
+ // cond: is32Bit(c)
+ // result: (ADDQconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64ADDQconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDQ (MOVQconst [c]) x)
+ // cond: is32Bit(c)
+ // result: (ADDQconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64ADDQconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDQ x (SHLQconst [3] y))
+ // cond:
+ // result: (LEAQ8 x y)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64SHLQconst {
+ break
+ }
+ if v.Args[1].AuxInt != 3 {
+ break
+ }
+ y := v.Args[1].Args[0]
+ v.reset(OpAMD64LEAQ8)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQ x (SHLQconst [2] y))
+ // cond:
+ // result: (LEAQ4 x y)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64SHLQconst {
+ break
+ }
+ if v.Args[1].AuxInt != 2 {
+ break
+ }
+ y := v.Args[1].Args[0]
+ v.reset(OpAMD64LEAQ4)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQ x (SHLQconst [1] y))
+ // cond:
+ // result: (LEAQ2 x y)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64SHLQconst {
+ break
+ }
+ if v.Args[1].AuxInt != 1 {
+ break
+ }
+ y := v.Args[1].Args[0]
+ v.reset(OpAMD64LEAQ2)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQ x (ADDQ y y))
+ // cond:
+ // result: (LEAQ2 x y)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQ {
+ break
+ }
+ y := v.Args[1].Args[0]
+ if v.Args[1].Args[1] != y {
+ break
+ }
+ v.reset(OpAMD64LEAQ2)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQ x (ADDQ x y))
+ // cond:
+ // result: (LEAQ2 y x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQ {
+ break
+ }
+ if v.Args[1].Args[0] != x {
+ break
+ }
+ y := v.Args[1].Args[1]
+ v.reset(OpAMD64LEAQ2)
+ v.AddArg(y)
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDQ x (ADDQ y x))
+ // cond:
+ // result: (LEAQ2 y x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQ {
+ break
+ }
+ y := v.Args[1].Args[0]
+ if v.Args[1].Args[1] != x {
+ break
+ }
+ v.reset(OpAMD64LEAQ2)
+ v.AddArg(y)
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDQ (ADDQconst [c] x) y)
+ // cond:
+ // result: (LEAQ1 [c] x y)
+ for {
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = c
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQ x (ADDQconst [c] y))
+ // cond:
+ // result: (LEAQ1 [c] x y)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ y := v.Args[1].Args[0]
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = c
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQ x (LEAQ [c] {s} y))
+ // cond: x.Op != OpSB && y.Op != OpSB
+ // result: (LEAQ1 [c] {s} x y)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64LEAQ {
+ break
+ }
+ c := v.Args[1].AuxInt
+ s := v.Args[1].Aux
+ y := v.Args[1].Args[0]
+ if !(x.Op != OpSB && y.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = c
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQ (LEAQ [c] {s} x) y)
+ // cond: x.Op != OpSB && y.Op != OpSB
+ // result: (LEAQ1 [c] {s} x y)
+ for {
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ c := v.Args[0].AuxInt
+ s := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[1]
+ if !(x.Op != OpSB && y.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = c
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQ x (NEGQ y))
+ // cond:
+ // result: (SUBQ x y)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64NEGQ {
+ break
+ }
+ y := v.Args[1].Args[0]
+ v.reset(OpAMD64SUBQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ADDQconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ADDQconst [c] (ADDQ x y))
+ // cond:
+ // result: (LEAQ1 [c] x y)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64ADDQ {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = c
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQconst [c] (LEAQ [d] {s} x))
+ // cond:
+ // result: (LEAQ [c+d] {s} x)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ d := v.Args[0].AuxInt
+ s := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64LEAQ)
+ v.AuxInt = c + d
+ v.Aux = s
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDQconst [c] (LEAQ1 [d] {s} x y))
+ // cond:
+ // result: (LEAQ1 [c+d] {s} x y)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64LEAQ1 {
+ break
+ }
+ d := v.Args[0].AuxInt
+ s := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = c + d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQconst [c] (LEAQ2 [d] {s} x y))
+ // cond:
+ // result: (LEAQ2 [c+d] {s} x y)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64LEAQ2 {
+ break
+ }
+ d := v.Args[0].AuxInt
+ s := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ v.reset(OpAMD64LEAQ2)
+ v.AuxInt = c + d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQconst [c] (LEAQ4 [d] {s} x y))
+ // cond:
+ // result: (LEAQ4 [c+d] {s} x y)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64LEAQ4 {
+ break
+ }
+ d := v.Args[0].AuxInt
+ s := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ v.reset(OpAMD64LEAQ4)
+ v.AuxInt = c + d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQconst [c] (LEAQ8 [d] {s} x y))
+ // cond:
+ // result: (LEAQ8 [c+d] {s} x y)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64LEAQ8 {
+ break
+ }
+ d := v.Args[0].AuxInt
+ s := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ v.reset(OpAMD64LEAQ8)
+ v.AuxInt = c + d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (ADDQconst [0] x)
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 0 {
+ break
+ }
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDQconst [c] (MOVQconst [d]))
+ // cond:
+ // result: (MOVQconst [c+d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = c + d
+ return true
+ }
+ // match: (ADDQconst [c] (ADDQconst [d] x))
+ // cond:
+ // result: (ADDQconst [c+d] x)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64ADDQconst)
+ v.AuxInt = c + d
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ADDW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ADDW x (MOVWconst [c]))
+ // cond:
+ // result: (ADDWconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64ADDWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDW (MOVWconst [c]) x)
+ // cond:
+ // result: (ADDWconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64ADDWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDW x (NEGW y))
+ // cond:
+ // result: (SUBW x y)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64NEGW {
+ break
+ }
+ y := v.Args[1].Args[0]
+ v.reset(OpAMD64SUBW)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ADDWconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ADDWconst [c] x)
+ // cond: int16(c)==0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int16(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ADDWconst [c] (MOVWconst [d]))
+ // cond:
+ // result: (MOVWconst [c+d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = c + d
+ return true
+ }
+ // match: (ADDWconst [c] (ADDWconst [d] x))
+ // cond:
+ // result: (ADDWconst [c+d] x)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64ADDWconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64ADDWconst)
+ v.AuxInt = c + d
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ANDB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ANDB x (MOVLconst [c]))
+ // cond:
+ // result: (ANDBconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64ANDBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDB (MOVLconst [c]) x)
+ // cond:
+ // result: (ANDBconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64ANDBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDB x (MOVBconst [c]))
+ // cond:
+ // result: (ANDBconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64ANDBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDB (MOVBconst [c]) x)
+ // cond:
+ // result: (ANDBconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64ANDBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDB x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ANDBconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ANDBconst [c] _)
+ // cond: int8(c)==0
+ // result: (MOVBconst [0])
+ for {
+ c := v.AuxInt
+ if !(int8(c) == 0) {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (ANDBconst [c] x)
+ // cond: int8(c)==-1
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int8(c) == -1) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDBconst [c] (MOVBconst [d]))
+ // cond:
+ // result: (MOVBconst [c&d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = c & d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ANDL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ANDL x (MOVLconst [c]))
+ // cond:
+ // result: (ANDLconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64ANDLconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDL (MOVLconst [c]) x)
+ // cond:
+ // result: (ANDLconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64ANDLconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDL x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ANDLconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ANDLconst [c] _)
+ // cond: int32(c)==0
+ // result: (MOVLconst [0])
+ for {
+ c := v.AuxInt
+ if !(int32(c) == 0) {
+ break
+ }
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (ANDLconst [c] x)
+ // cond: int32(c)==-1
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int32(c) == -1) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDLconst [c] (MOVLconst [d]))
+ // cond:
+ // result: (MOVLconst [c&d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = c & d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ANDQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ANDQ x (MOVQconst [c]))
+ // cond: is32Bit(c)
+ // result: (ANDQconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64ANDQconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDQ (MOVQconst [c]) x)
+ // cond: is32Bit(c)
+ // result: (ANDQconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64ANDQconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDQ x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ANDQconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ANDQconst [0] _)
+ // cond:
+ // result: (MOVQconst [0])
+ for {
+ if v.AuxInt != 0 {
+ break
+ }
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (ANDQconst [-1] x)
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != -1 {
+ break
+ }
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDQconst [c] (MOVQconst [d]))
+ // cond:
+ // result: (MOVQconst [c&d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = c & d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ANDW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ANDW x (MOVLconst [c]))
+ // cond:
+ // result: (ANDWconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64ANDWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDW (MOVLconst [c]) x)
+ // cond:
+ // result: (ANDWconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64ANDWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDW x (MOVWconst [c]))
+ // cond:
+ // result: (ANDWconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64ANDWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDW (MOVWconst [c]) x)
+ // cond:
+ // result: (ANDWconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64ANDWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDW x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ANDWconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ANDWconst [c] _)
+ // cond: int16(c)==0
+ // result: (MOVWconst [0])
+ for {
+ c := v.AuxInt
+ if !(int16(c) == 0) {
+ break
+ }
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (ANDWconst [c] x)
+ // cond: int16(c)==-1
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int16(c) == -1) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ANDWconst [c] (MOVWconst [d]))
+ // cond:
+ // result: (MOVWconst [c&d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = c & d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAdd16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Add16 x y)
+ // cond:
+ // result: (ADDW x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ADDW)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAdd32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Add32 x y)
+ // cond:
+ // result: (ADDL x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ADDL)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAdd32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Add32F x y)
+ // cond:
+ // result: (ADDSS x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ADDSS)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAdd64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Add64 x y)
+ // cond:
+ // result: (ADDQ x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ADDQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAdd64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Add64F x y)
+ // cond:
+ // result: (ADDSD x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ADDSD)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAdd8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Add8 x y)
+ // cond:
+ // result: (ADDB x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ADDB)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAddPtr(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (AddPtr x y)
+ // cond:
+ // result: (ADDQ x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ADDQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAddr(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Addr {sym} base)
+ // cond:
+ // result: (LEAQ {sym} base)
+ for {
+ sym := v.Aux
+ base := v.Args[0]
+ v.reset(OpAMD64LEAQ)
+ v.Aux = sym
+ v.AddArg(base)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAnd16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (And16 x y)
+ // cond:
+ // result: (ANDW x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDW)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAnd32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (And32 x y)
+ // cond:
+ // result: (ANDL x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDL)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAnd64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (And64 x y)
+ // cond:
+ // result: (ANDQ x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAnd8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (And8 x y)
+ // cond:
+ // result: (ANDB x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDB)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAvg64u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Avg64u x y)
+ // cond:
+ // result: (AVGQU x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64AVGQU)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64CMPB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (CMPB x (MOVBconst [c]))
+ // cond:
+ // result: (CMPBconst x [c])
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64CMPBconst)
+ v.AddArg(x)
+ v.AuxInt = c
+ return true
+ }
+ // match: (CMPB (MOVBconst [c]) x)
+ // cond:
+ // result: (InvertFlags (CMPBconst x [c]))
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64InvertFlags)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v0.AddArg(x)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64CMPBconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (CMPBconst (MOVBconst [x]) [y])
+ // cond: int8(x)==int8(y)
+ // result: (FlagEQ)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int8(x) == int8(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagEQ)
+ return true
+ }
+ // match: (CMPBconst (MOVBconst [x]) [y])
+ // cond: int8(x)<int8(y) && uint8(x)<uint8(y)
+ // result: (FlagLT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int8(x) < int8(y) && uint8(x) < uint8(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_ULT)
+ return true
+ }
+ // match: (CMPBconst (MOVBconst [x]) [y])
+ // cond: int8(x)<int8(y) && uint8(x)>uint8(y)
+ // result: (FlagLT_UGT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int8(x) < int8(y) && uint8(x) > uint8(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_UGT)
+ return true
+ }
+ // match: (CMPBconst (MOVBconst [x]) [y])
+ // cond: int8(x)>int8(y) && uint8(x)<uint8(y)
+ // result: (FlagGT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int8(x) > int8(y) && uint8(x) < uint8(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagGT_ULT)
+ return true
+ }
+ // match: (CMPBconst (MOVBconst [x]) [y])
+ // cond: int8(x)>int8(y) && uint8(x)>uint8(y)
+ // result: (FlagGT_UGT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int8(x) > int8(y) && uint8(x) > uint8(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagGT_UGT)
+ return true
+ }
+ // match: (CMPBconst (ANDBconst _ [m]) [n])
+ // cond: int8(m)+1==int8(n) && isPowerOfTwo(int64(int8(n)))
+ // result: (FlagLT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64ANDBconst {
+ break
+ }
+ m := v.Args[0].AuxInt
+ n := v.AuxInt
+ if !(int8(m)+1 == int8(n) && isPowerOfTwo(int64(int8(n)))) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_ULT)
+ return true
+ }
+ // match: (CMPBconst (ANDB x y) [0])
+ // cond:
+ // result: (TESTB x y)
+ for {
+ if v.Args[0].Op != OpAMD64ANDB {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.AuxInt != 0 {
+ break
+ }
+ v.reset(OpAMD64TESTB)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (CMPBconst (ANDBconst [c] x) [0])
+ // cond:
+ // result: (TESTBconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64ANDBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ if v.AuxInt != 0 {
+ break
+ }
+ v.reset(OpAMD64TESTBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64CMPL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (CMPL x (MOVLconst [c]))
+ // cond:
+ // result: (CMPLconst x [c])
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64CMPLconst)
+ v.AddArg(x)
+ v.AuxInt = c
+ return true
+ }
+ // match: (CMPL (MOVLconst [c]) x)
+ // cond:
+ // result: (InvertFlags (CMPLconst x [c]))
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64InvertFlags)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v0.AddArg(x)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64CMPLconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (CMPLconst (MOVLconst [x]) [y])
+ // cond: int32(x)==int32(y)
+ // result: (FlagEQ)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int32(x) == int32(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagEQ)
+ return true
+ }
+ // match: (CMPLconst (MOVLconst [x]) [y])
+ // cond: int32(x)<int32(y) && uint32(x)<uint32(y)
+ // result: (FlagLT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int32(x) < int32(y) && uint32(x) < uint32(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_ULT)
+ return true
+ }
+ // match: (CMPLconst (MOVLconst [x]) [y])
+ // cond: int32(x)<int32(y) && uint32(x)>uint32(y)
+ // result: (FlagLT_UGT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int32(x) < int32(y) && uint32(x) > uint32(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_UGT)
+ return true
+ }
+ // match: (CMPLconst (MOVLconst [x]) [y])
+ // cond: int32(x)>int32(y) && uint32(x)<uint32(y)
+ // result: (FlagGT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int32(x) > int32(y) && uint32(x) < uint32(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagGT_ULT)
+ return true
+ }
+ // match: (CMPLconst (MOVLconst [x]) [y])
+ // cond: int32(x)>int32(y) && uint32(x)>uint32(y)
+ // result: (FlagGT_UGT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int32(x) > int32(y) && uint32(x) > uint32(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagGT_UGT)
+ return true
+ }
+ // match: (CMPLconst (ANDLconst _ [m]) [n])
+ // cond: int32(m)+1==int32(n) && isPowerOfTwo(int64(int32(n)))
+ // result: (FlagLT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64ANDLconst {
+ break
+ }
+ m := v.Args[0].AuxInt
+ n := v.AuxInt
+ if !(int32(m)+1 == int32(n) && isPowerOfTwo(int64(int32(n)))) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_ULT)
+ return true
+ }
+ // match: (CMPLconst (ANDL x y) [0])
+ // cond:
+ // result: (TESTL x y)
+ for {
+ if v.Args[0].Op != OpAMD64ANDL {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.AuxInt != 0 {
+ break
+ }
+ v.reset(OpAMD64TESTL)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (CMPLconst (ANDLconst [c] x) [0])
+ // cond:
+ // result: (TESTLconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64ANDLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ if v.AuxInt != 0 {
+ break
+ }
+ v.reset(OpAMD64TESTLconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64CMPQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (CMPQ x (MOVQconst [c]))
+ // cond: is32Bit(c)
+ // result: (CMPQconst x [c])
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64CMPQconst)
+ v.AddArg(x)
+ v.AuxInt = c
+ return true
+ }
+ // match: (CMPQ (MOVQconst [c]) x)
+ // cond: is32Bit(c)
+ // result: (InvertFlags (CMPQconst x [c]))
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64InvertFlags)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v0.AddArg(x)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64CMPQconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (CMPQconst (MOVQconst [x]) [y])
+ // cond: x==y
+ // result: (FlagEQ)
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(x == y) {
+ break
+ }
+ v.reset(OpAMD64FlagEQ)
+ return true
+ }
+ // match: (CMPQconst (MOVQconst [x]) [y])
+ // cond: x<y && uint64(x)<uint64(y)
+ // result: (FlagLT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(x < y && uint64(x) < uint64(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_ULT)
+ return true
+ }
+ // match: (CMPQconst (MOVQconst [x]) [y])
+ // cond: x<y && uint64(x)>uint64(y)
+ // result: (FlagLT_UGT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(x < y && uint64(x) > uint64(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_UGT)
+ return true
+ }
+ // match: (CMPQconst (MOVQconst [x]) [y])
+ // cond: x>y && uint64(x)<uint64(y)
+ // result: (FlagGT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(x > y && uint64(x) < uint64(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagGT_ULT)
+ return true
+ }
+ // match: (CMPQconst (MOVQconst [x]) [y])
+ // cond: x>y && uint64(x)>uint64(y)
+ // result: (FlagGT_UGT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(x > y && uint64(x) > uint64(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagGT_UGT)
+ return true
+ }
+ // match: (CMPQconst (ANDQconst _ [m]) [n])
+ // cond: m+1==n && isPowerOfTwo(n)
+ // result: (FlagLT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64ANDQconst {
+ break
+ }
+ m := v.Args[0].AuxInt
+ n := v.AuxInt
+ if !(m+1 == n && isPowerOfTwo(n)) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_ULT)
+ return true
+ }
+ // match: (CMPQconst (ANDQ x y) [0])
+ // cond:
+ // result: (TESTQ x y)
+ for {
+ if v.Args[0].Op != OpAMD64ANDQ {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.AuxInt != 0 {
+ break
+ }
+ v.reset(OpAMD64TESTQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (CMPQconst (ANDQconst [c] x) [0])
+ // cond:
+ // result: (TESTQconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64ANDQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ if v.AuxInt != 0 {
+ break
+ }
+ v.reset(OpAMD64TESTQconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64CMPW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (CMPW x (MOVWconst [c]))
+ // cond:
+ // result: (CMPWconst x [c])
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64CMPWconst)
+ v.AddArg(x)
+ v.AuxInt = c
+ return true
+ }
+ // match: (CMPW (MOVWconst [c]) x)
+ // cond:
+ // result: (InvertFlags (CMPWconst x [c]))
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64InvertFlags)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v0.AddArg(x)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64CMPWconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (CMPWconst (MOVWconst [x]) [y])
+ // cond: int16(x)==int16(y)
+ // result: (FlagEQ)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int16(x) == int16(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagEQ)
+ return true
+ }
+ // match: (CMPWconst (MOVWconst [x]) [y])
+ // cond: int16(x)<int16(y) && uint16(x)<uint16(y)
+ // result: (FlagLT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int16(x) < int16(y) && uint16(x) < uint16(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_ULT)
+ return true
+ }
+ // match: (CMPWconst (MOVWconst [x]) [y])
+ // cond: int16(x)<int16(y) && uint16(x)>uint16(y)
+ // result: (FlagLT_UGT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int16(x) < int16(y) && uint16(x) > uint16(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_UGT)
+ return true
+ }
+ // match: (CMPWconst (MOVWconst [x]) [y])
+ // cond: int16(x)>int16(y) && uint16(x)<uint16(y)
+ // result: (FlagGT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int16(x) > int16(y) && uint16(x) < uint16(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagGT_ULT)
+ return true
+ }
+ // match: (CMPWconst (MOVWconst [x]) [y])
+ // cond: int16(x)>int16(y) && uint16(x)>uint16(y)
+ // result: (FlagGT_UGT)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ x := v.Args[0].AuxInt
+ y := v.AuxInt
+ if !(int16(x) > int16(y) && uint16(x) > uint16(y)) {
+ break
+ }
+ v.reset(OpAMD64FlagGT_UGT)
+ return true
+ }
+ // match: (CMPWconst (ANDWconst _ [m]) [n])
+ // cond: int16(m)+1==int16(n) && isPowerOfTwo(int64(int16(n)))
+ // result: (FlagLT_ULT)
+ for {
+ if v.Args[0].Op != OpAMD64ANDWconst {
+ break
+ }
+ m := v.Args[0].AuxInt
+ n := v.AuxInt
+ if !(int16(m)+1 == int16(n) && isPowerOfTwo(int64(int16(n)))) {
+ break
+ }
+ v.reset(OpAMD64FlagLT_ULT)
+ return true
+ }
+ // match: (CMPWconst (ANDW x y) [0])
+ // cond:
+ // result: (TESTW x y)
+ for {
+ if v.Args[0].Op != OpAMD64ANDW {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.AuxInt != 0 {
+ break
+ }
+ v.reset(OpAMD64TESTW)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (CMPWconst (ANDWconst [c] x) [0])
+ // cond:
+ // result: (TESTWconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64ANDWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ if v.AuxInt != 0 {
+ break
+ }
+ v.reset(OpAMD64TESTWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpClosureCall(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ClosureCall [argwid] entry closure mem)
+ // cond:
+ // result: (CALLclosure [argwid] entry closure mem)
+ for {
+ argwid := v.AuxInt
+ entry := v.Args[0]
+ closure := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64CALLclosure)
+ v.AuxInt = argwid
+ v.AddArg(entry)
+ v.AddArg(closure)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCom16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Com16 x)
+ // cond:
+ // result: (NOTW x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64NOTW)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCom32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Com32 x)
+ // cond:
+ // result: (NOTL x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64NOTL)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCom64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Com64 x)
+ // cond:
+ // result: (NOTQ x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64NOTQ)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCom8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Com8 x)
+ // cond:
+ // result: (NOTB x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64NOTB)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpConst16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Const16 [val])
+ // cond:
+ // result: (MOVWconst [val])
+ for {
+ val := v.AuxInt
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = val
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpConst32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Const32 [val])
+ // cond:
+ // result: (MOVLconst [val])
+ for {
+ val := v.AuxInt
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = val
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpConst32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Const32F [val])
+ // cond:
+ // result: (MOVSSconst [val])
+ for {
+ val := v.AuxInt
+ v.reset(OpAMD64MOVSSconst)
+ v.AuxInt = val
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpConst64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Const64 [val])
+ // cond:
+ // result: (MOVQconst [val])
+ for {
+ val := v.AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = val
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpConst64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Const64F [val])
+ // cond:
+ // result: (MOVSDconst [val])
+ for {
+ val := v.AuxInt
+ v.reset(OpAMD64MOVSDconst)
+ v.AuxInt = val
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpConst8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Const8 [val])
+ // cond:
+ // result: (MOVBconst [val])
+ for {
+ val := v.AuxInt
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = val
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpConstBool(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ConstBool [b])
+ // cond:
+ // result: (MOVBconst [b])
+ for {
+ b := v.AuxInt
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = b
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpConstNil(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ConstNil)
+ // cond:
+ // result: (MOVQconst [0])
+ for {
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpConvert(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Convert <t> x mem)
+ // cond:
+ // result: (MOVQconvert <t> x mem)
+ for {
+ t := v.Type
+ x := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVQconvert)
+ v.Type = t
+ v.AddArg(x)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCvt32Fto32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Cvt32Fto32 x)
+ // cond:
+ // result: (CVTTSS2SL x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64CVTTSS2SL)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCvt32Fto64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Cvt32Fto64 x)
+ // cond:
+ // result: (CVTTSS2SQ x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64CVTTSS2SQ)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCvt32Fto64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Cvt32Fto64F x)
+ // cond:
+ // result: (CVTSS2SD x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64CVTSS2SD)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCvt32to32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Cvt32to32F x)
+ // cond:
+ // result: (CVTSL2SS x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64CVTSL2SS)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCvt32to64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Cvt32to64F x)
+ // cond:
+ // result: (CVTSL2SD x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64CVTSL2SD)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCvt64Fto32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Cvt64Fto32 x)
+ // cond:
+ // result: (CVTTSD2SL x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64CVTTSD2SL)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCvt64Fto32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Cvt64Fto32F x)
+ // cond:
+ // result: (CVTSD2SS x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64CVTSD2SS)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCvt64Fto64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Cvt64Fto64 x)
+ // cond:
+ // result: (CVTTSD2SQ x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64CVTTSD2SQ)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCvt64to32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Cvt64to32F x)
+ // cond:
+ // result: (CVTSQ2SS x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64CVTSQ2SS)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpCvt64to64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Cvt64to64F x)
+ // cond:
+ // result: (CVTSQ2SD x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64CVTSQ2SD)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpDeferCall(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (DeferCall [argwid] mem)
+ // cond:
+ // result: (CALLdefer [argwid] mem)
+ for {
+ argwid := v.AuxInt
+ mem := v.Args[0]
+ v.reset(OpAMD64CALLdefer)
+ v.AuxInt = argwid
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpDiv16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div16 x y)
+ // cond:
+ // result: (DIVW x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64DIVW)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpDiv16u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div16u x y)
+ // cond:
+ // result: (DIVWU x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64DIVWU)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpDiv32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div32 x y)
+ // cond:
+ // result: (DIVL x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64DIVL)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpDiv32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div32F x y)
+ // cond:
+ // result: (DIVSS x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64DIVSS)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpDiv32u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div32u x y)
+ // cond:
+ // result: (DIVLU x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64DIVLU)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpDiv64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div64 x y)
+ // cond:
+ // result: (DIVQ x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64DIVQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpDiv64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div64F x y)
+ // cond:
+ // result: (DIVSD x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64DIVSD)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpDiv64u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div64u x y)
+ // cond:
+ // result: (DIVQU x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64DIVQU)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpDiv8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div8 x y)
+ // cond:
+ // result: (DIVW (SignExt8to16 x) (SignExt8to16 y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64DIVW)
+ v0 := b.NewValue0(v.Line, OpSignExt8to16, config.fe.TypeInt16())
+ v0.AddArg(x)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpSignExt8to16, config.fe.TypeInt16())
+ v1.AddArg(y)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpDiv8u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div8u x y)
+ // cond:
+ // result: (DIVWU (ZeroExt8to16 x) (ZeroExt8to16 y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64DIVWU)
+ v0 := b.NewValue0(v.Line, OpZeroExt8to16, config.fe.TypeUInt16())
+ v0.AddArg(x)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpZeroExt8to16, config.fe.TypeUInt16())
+ v1.AddArg(y)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpEq16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Eq16 x y)
+ // cond:
+ // result: (SETEQ (CMPW x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETEQ)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPW, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpEq32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Eq32 x y)
+ // cond:
+ // result: (SETEQ (CMPL x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETEQ)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPL, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpEq32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Eq32F x y)
+ // cond:
+ // result: (SETEQF (UCOMISS x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETEQF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISS, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpEq64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Eq64 x y)
+ // cond:
+ // result: (SETEQ (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETEQ)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpEq64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Eq64F x y)
+ // cond:
+ // result: (SETEQF (UCOMISD x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETEQF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISD, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpEq8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Eq8 x y)
+ // cond:
+ // result: (SETEQ (CMPB x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETEQ)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPB, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpEqPtr(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (EqPtr x y)
+ // cond:
+ // result: (SETEQ (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETEQ)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGeq16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq16 x y)
+ // cond:
+ // result: (SETGE (CMPW x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPW, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGeq16U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq16U x y)
+ // cond:
+ // result: (SETAE (CMPW x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETAE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPW, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGeq32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq32 x y)
+ // cond:
+ // result: (SETGE (CMPL x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPL, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGeq32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq32F x y)
+ // cond:
+ // result: (SETGEF (UCOMISS x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGEF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISS, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGeq32U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq32U x y)
+ // cond:
+ // result: (SETAE (CMPL x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETAE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPL, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGeq64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq64 x y)
+ // cond:
+ // result: (SETGE (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGeq64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq64F x y)
+ // cond:
+ // result: (SETGEF (UCOMISD x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGEF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISD, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGeq64U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq64U x y)
+ // cond:
+ // result: (SETAE (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETAE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGeq8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq8 x y)
+ // cond:
+ // result: (SETGE (CMPB x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPB, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGeq8U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq8U x y)
+ // cond:
+ // result: (SETAE (CMPB x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETAE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPB, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGetClosurePtr(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (GetClosurePtr)
+ // cond:
+ // result: (LoweredGetClosurePtr)
+ for {
+ v.reset(OpAMD64LoweredGetClosurePtr)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGetG(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (GetG mem)
+ // cond:
+ // result: (LoweredGetG mem)
+ for {
+ mem := v.Args[0]
+ v.reset(OpAMD64LoweredGetG)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGoCall(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (GoCall [argwid] mem)
+ // cond:
+ // result: (CALLgo [argwid] mem)
+ for {
+ argwid := v.AuxInt
+ mem := v.Args[0]
+ v.reset(OpAMD64CALLgo)
+ v.AuxInt = argwid
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGreater16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater16 x y)
+ // cond:
+ // result: (SETG (CMPW x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETG)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPW, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGreater16U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater16U x y)
+ // cond:
+ // result: (SETA (CMPW x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETA)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPW, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGreater32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater32 x y)
+ // cond:
+ // result: (SETG (CMPL x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETG)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPL, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGreater32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater32F x y)
+ // cond:
+ // result: (SETGF (UCOMISS x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISS, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGreater32U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater32U x y)
+ // cond:
+ // result: (SETA (CMPL x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETA)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPL, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGreater64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater64 x y)
+ // cond:
+ // result: (SETG (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETG)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGreater64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater64F x y)
+ // cond:
+ // result: (SETGF (UCOMISD x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISD, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGreater64U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater64U x y)
+ // cond:
+ // result: (SETA (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETA)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGreater8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater8 x y)
+ // cond:
+ // result: (SETG (CMPB x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETG)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPB, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpGreater8U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater8U x y)
+ // cond:
+ // result: (SETA (CMPB x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETA)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPB, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpHmul16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Hmul16 x y)
+ // cond:
+ // result: (HMULW x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64HMULW)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpHmul16u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Hmul16u x y)
+ // cond:
+ // result: (HMULWU x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64HMULWU)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpHmul32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Hmul32 x y)
+ // cond:
+ // result: (HMULL x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64HMULL)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpHmul32u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Hmul32u x y)
+ // cond:
+ // result: (HMULLU x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64HMULLU)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpHmul64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Hmul64 x y)
+ // cond:
+ // result: (HMULQ x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64HMULQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpHmul64u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Hmul64u x y)
+ // cond:
+ // result: (HMULQU x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64HMULQU)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpHmul8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Hmul8 x y)
+ // cond:
+ // result: (HMULB x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64HMULB)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpHmul8u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Hmul8u x y)
+ // cond:
+ // result: (HMULBU x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64HMULBU)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpITab(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ITab (Load ptr mem))
+ // cond:
+ // result: (MOVQload ptr mem)
+ for {
+ if v.Args[0].Op != OpLoad {
+ break
+ }
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[0].Args[1]
+ v.reset(OpAMD64MOVQload)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpInterCall(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (InterCall [argwid] entry mem)
+ // cond:
+ // result: (CALLinter [argwid] entry mem)
+ for {
+ argwid := v.AuxInt
+ entry := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64CALLinter)
+ v.AuxInt = argwid
+ v.AddArg(entry)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpIsInBounds(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (IsInBounds idx len)
+ // cond:
+ // result: (SETB (CMPQ idx len))
+ for {
+ idx := v.Args[0]
+ len := v.Args[1]
+ v.reset(OpAMD64SETB)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(idx)
+ v0.AddArg(len)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpIsNonNil(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (IsNonNil p)
+ // cond:
+ // result: (SETNE (TESTQ p p))
+ for {
+ p := v.Args[0]
+ v.reset(OpAMD64SETNE)
+ v0 := b.NewValue0(v.Line, OpAMD64TESTQ, TypeFlags)
+ v0.AddArg(p)
+ v0.AddArg(p)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpIsSliceInBounds(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (IsSliceInBounds idx len)
+ // cond:
+ // result: (SETBE (CMPQ idx len))
+ for {
+ idx := v.Args[0]
+ len := v.Args[1]
+ v.reset(OpAMD64SETBE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(idx)
+ v0.AddArg(len)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64LEAQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (LEAQ [c] {s} (ADDQconst [d] x))
+ // cond:
+ // result: (LEAQ [c+d] {s} x)
+ for {
+ c := v.AuxInt
+ s := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64LEAQ)
+ v.AuxInt = c + d
+ v.Aux = s
+ v.AddArg(x)
+ return true
+ }
+ // match: (LEAQ [c] {s} (ADDQ x y))
+ // cond: x.Op != OpSB && y.Op != OpSB
+ // result: (LEAQ1 [c] {s} x y)
+ for {
+ c := v.AuxInt
+ s := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQ {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if !(x.Op != OpSB && y.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = c
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ [off1] {sym1} (LEAQ [off2] {sym2} x))
+ // cond: canMergeSym(sym1, sym2)
+ // result: (LEAQ [addOff(off1,off2)] {mergeSym(sym1,sym2)} x)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64LEAQ)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(x)
+ return true
+ }
+ // match: (LEAQ [off1] {sym1} (LEAQ1 [off2] {sym2} x y))
+ // cond: canMergeSym(sym1, sym2)
+ // result: (LEAQ1 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ1 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ [off1] {sym1} (LEAQ2 [off2] {sym2} x y))
+ // cond: canMergeSym(sym1, sym2)
+ // result: (LEAQ2 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ2 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64LEAQ2)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ [off1] {sym1} (LEAQ4 [off2] {sym2} x y))
+ // cond: canMergeSym(sym1, sym2)
+ // result: (LEAQ4 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ4 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64LEAQ4)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ [off1] {sym1} (LEAQ8 [off2] {sym2} x y))
+ // cond: canMergeSym(sym1, sym2)
+ // result: (LEAQ8 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ8 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64LEAQ8)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64LEAQ1(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (LEAQ1 [c] {s} (ADDQconst [d] x) y)
+ // cond: x.Op != OpSB
+ // result: (LEAQ1 [c+d] {s} x y)
+ for {
+ c := v.AuxInt
+ s := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ y := v.Args[1]
+ if !(x.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = c + d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ1 [c] {s} x (ADDQconst [d] y))
+ // cond: y.Op != OpSB
+ // result: (LEAQ1 [c+d] {s} x y)
+ for {
+ c := v.AuxInt
+ s := v.Aux
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ y := v.Args[1].Args[0]
+ if !(y.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = c + d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ1 [off1] {sym1} (LEAQ [off2] {sym2} x) y)
+ // cond: canMergeSym(sym1, sym2) && x.Op != OpSB
+ // result: (LEAQ1 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[1]
+ if !(canMergeSym(sym1, sym2) && x.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ1 [off1] {sym1} x (LEAQ [off2] {sym2} y))
+ // cond: canMergeSym(sym1, sym2) && y.Op != OpSB
+ // result: (LEAQ1 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[1].AuxInt
+ sym2 := v.Args[1].Aux
+ y := v.Args[1].Args[0]
+ if !(canMergeSym(sym1, sym2) && y.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ1)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64LEAQ2(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (LEAQ2 [c] {s} (ADDQconst [d] x) y)
+ // cond: x.Op != OpSB
+ // result: (LEAQ2 [c+d] {s} x y)
+ for {
+ c := v.AuxInt
+ s := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ y := v.Args[1]
+ if !(x.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ2)
+ v.AuxInt = c + d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ2 [c] {s} x (ADDQconst [d] y))
+ // cond: y.Op != OpSB
+ // result: (LEAQ2 [c+2*d] {s} x y)
+ for {
+ c := v.AuxInt
+ s := v.Aux
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ y := v.Args[1].Args[0]
+ if !(y.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ2)
+ v.AuxInt = c + 2*d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ2 [off1] {sym1} (LEAQ [off2] {sym2} x) y)
+ // cond: canMergeSym(sym1, sym2) && x.Op != OpSB
+ // result: (LEAQ2 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[1]
+ if !(canMergeSym(sym1, sym2) && x.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ2)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64LEAQ4(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (LEAQ4 [c] {s} (ADDQconst [d] x) y)
+ // cond: x.Op != OpSB
+ // result: (LEAQ4 [c+d] {s} x y)
+ for {
+ c := v.AuxInt
+ s := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ y := v.Args[1]
+ if !(x.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ4)
+ v.AuxInt = c + d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ4 [c] {s} x (ADDQconst [d] y))
+ // cond: y.Op != OpSB
+ // result: (LEAQ4 [c+4*d] {s} x y)
+ for {
+ c := v.AuxInt
+ s := v.Aux
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ y := v.Args[1].Args[0]
+ if !(y.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ4)
+ v.AuxInt = c + 4*d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ4 [off1] {sym1} (LEAQ [off2] {sym2} x) y)
+ // cond: canMergeSym(sym1, sym2) && x.Op != OpSB
+ // result: (LEAQ4 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[1]
+ if !(canMergeSym(sym1, sym2) && x.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ4)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64LEAQ8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (LEAQ8 [c] {s} (ADDQconst [d] x) y)
+ // cond: x.Op != OpSB
+ // result: (LEAQ8 [c+d] {s} x y)
+ for {
+ c := v.AuxInt
+ s := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ y := v.Args[1]
+ if !(x.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ8)
+ v.AuxInt = c + d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ8 [c] {s} x (ADDQconst [d] y))
+ // cond: y.Op != OpSB
+ // result: (LEAQ8 [c+8*d] {s} x y)
+ for {
+ c := v.AuxInt
+ s := v.Aux
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ y := v.Args[1].Args[0]
+ if !(y.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ8)
+ v.AuxInt = c + 8*d
+ v.Aux = s
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ // match: (LEAQ8 [off1] {sym1} (LEAQ [off2] {sym2} x) y)
+ // cond: canMergeSym(sym1, sym2) && x.Op != OpSB
+ // result: (LEAQ8 [addOff(off1,off2)] {mergeSym(sym1,sym2)} x y)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ x := v.Args[0].Args[0]
+ y := v.Args[1]
+ if !(canMergeSym(sym1, sym2) && x.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64LEAQ8)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLeq16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq16 x y)
+ // cond:
+ // result: (SETLE (CMPW x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETLE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPW, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLeq16U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq16U x y)
+ // cond:
+ // result: (SETBE (CMPW x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETBE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPW, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLeq32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq32 x y)
+ // cond:
+ // result: (SETLE (CMPL x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETLE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPL, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLeq32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq32F x y)
+ // cond:
+ // result: (SETGEF (UCOMISS y x))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGEF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISS, TypeFlags)
+ v0.AddArg(y)
+ v0.AddArg(x)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLeq32U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq32U x y)
+ // cond:
+ // result: (SETBE (CMPL x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETBE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPL, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLeq64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq64 x y)
+ // cond:
+ // result: (SETLE (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETLE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLeq64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq64F x y)
+ // cond:
+ // result: (SETGEF (UCOMISD y x))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGEF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISD, TypeFlags)
+ v0.AddArg(y)
+ v0.AddArg(x)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLeq64U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq64U x y)
+ // cond:
+ // result: (SETBE (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETBE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLeq8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq8 x y)
+ // cond:
+ // result: (SETLE (CMPB x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETLE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPB, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLeq8U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq8U x y)
+ // cond:
+ // result: (SETBE (CMPB x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETBE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPB, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLess16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less16 x y)
+ // cond:
+ // result: (SETL (CMPW x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETL)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPW, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLess16U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less16U x y)
+ // cond:
+ // result: (SETB (CMPW x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETB)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPW, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLess32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less32 x y)
+ // cond:
+ // result: (SETL (CMPL x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETL)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPL, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLess32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less32F x y)
+ // cond:
+ // result: (SETGF (UCOMISS y x))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISS, TypeFlags)
+ v0.AddArg(y)
+ v0.AddArg(x)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLess32U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less32U x y)
+ // cond:
+ // result: (SETB (CMPL x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETB)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPL, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLess64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less64 x y)
+ // cond:
+ // result: (SETL (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETL)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLess64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less64F x y)
+ // cond:
+ // result: (SETGF (UCOMISD y x))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETGF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISD, TypeFlags)
+ v0.AddArg(y)
+ v0.AddArg(x)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLess64U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less64U x y)
+ // cond:
+ // result: (SETB (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETB)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLess8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less8 x y)
+ // cond:
+ // result: (SETL (CMPB x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETL)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPB, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLess8U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less8U x y)
+ // cond:
+ // result: (SETB (CMPB x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETB)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPB, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLoad(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Load <t> ptr mem)
+ // cond: (is64BitInt(t) || isPtr(t))
+ // result: (MOVQload ptr mem)
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(is64BitInt(t) || isPtr(t)) {
+ break
+ }
+ v.reset(OpAMD64MOVQload)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: is32BitInt(t)
+ // result: (MOVLload ptr mem)
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(is32BitInt(t)) {
+ break
+ }
+ v.reset(OpAMD64MOVLload)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: is16BitInt(t)
+ // result: (MOVWload ptr mem)
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(is16BitInt(t)) {
+ break
+ }
+ v.reset(OpAMD64MOVWload)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: (t.IsBoolean() || is8BitInt(t))
+ // result: (MOVBload ptr mem)
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(t.IsBoolean() || is8BitInt(t)) {
+ break
+ }
+ v.reset(OpAMD64MOVBload)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: is32BitFloat(t)
+ // result: (MOVSSload ptr mem)
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(is32BitFloat(t)) {
+ break
+ }
+ v.reset(OpAMD64MOVSSload)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: is64BitFloat(t)
+ // result: (MOVSDload ptr mem)
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(is64BitFloat(t)) {
+ break
+ }
+ v.reset(OpAMD64MOVSDload)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLrot16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lrot16 <t> x [c])
+ // cond:
+ // result: (ROLWconst <t> [c&15] x)
+ for {
+ t := v.Type
+ x := v.Args[0]
+ c := v.AuxInt
+ v.reset(OpAMD64ROLWconst)
+ v.Type = t
+ v.AuxInt = c & 15
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLrot32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lrot32 <t> x [c])
+ // cond:
+ // result: (ROLLconst <t> [c&31] x)
+ for {
+ t := v.Type
+ x := v.Args[0]
+ c := v.AuxInt
+ v.reset(OpAMD64ROLLconst)
+ v.Type = t
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLrot64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lrot64 <t> x [c])
+ // cond:
+ // result: (ROLQconst <t> [c&63] x)
+ for {
+ t := v.Type
+ x := v.Args[0]
+ c := v.AuxInt
+ v.reset(OpAMD64ROLQconst)
+ v.Type = t
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLrot8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lrot8 <t> x [c])
+ // cond:
+ // result: (ROLBconst <t> [c&7] x)
+ for {
+ t := v.Type
+ x := v.Args[0]
+ c := v.AuxInt
+ v.reset(OpAMD64ROLBconst)
+ v.Type = t
+ v.AuxInt = c & 7
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh16x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh16x16 <t> x y)
+ // cond:
+ // result: (ANDW (SHLW <t> x y) (SBBLcarrymask <t> (CMPWconst y [16])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDW)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLW, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 16
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh16x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh16x32 <t> x y)
+ // cond:
+ // result: (ANDW (SHLW <t> x y) (SBBLcarrymask <t> (CMPLconst y [16])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDW)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLW, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 16
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh16x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh16x64 <t> x y)
+ // cond:
+ // result: (ANDW (SHLW <t> x y) (SBBLcarrymask <t> (CMPQconst y [16])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDW)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLW, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 16
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh16x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh16x8 <t> x y)
+ // cond:
+ // result: (ANDW (SHLW <t> x y) (SBBLcarrymask <t> (CMPBconst y [16])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDW)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLW, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 16
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh32x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh32x16 <t> x y)
+ // cond:
+ // result: (ANDL (SHLL <t> x y) (SBBLcarrymask <t> (CMPWconst y [32])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDL)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLL, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 32
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh32x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh32x32 <t> x y)
+ // cond:
+ // result: (ANDL (SHLL <t> x y) (SBBLcarrymask <t> (CMPLconst y [32])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDL)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLL, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 32
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh32x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh32x64 <t> x y)
+ // cond:
+ // result: (ANDL (SHLL <t> x y) (SBBLcarrymask <t> (CMPQconst y [32])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDL)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLL, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 32
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh32x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh32x8 <t> x y)
+ // cond:
+ // result: (ANDL (SHLL <t> x y) (SBBLcarrymask <t> (CMPBconst y [32])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDL)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLL, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 32
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh64x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh64x16 <t> x y)
+ // cond:
+ // result: (ANDQ (SHLQ <t> x y) (SBBQcarrymask <t> (CMPWconst y [64])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDQ)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLQ, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 64
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh64x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh64x32 <t> x y)
+ // cond:
+ // result: (ANDQ (SHLQ <t> x y) (SBBQcarrymask <t> (CMPLconst y [64])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDQ)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLQ, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 64
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh64x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh64x64 <t> x y)
+ // cond:
+ // result: (ANDQ (SHLQ <t> x y) (SBBQcarrymask <t> (CMPQconst y [64])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDQ)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLQ, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 64
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh64x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh64x8 <t> x y)
+ // cond:
+ // result: (ANDQ (SHLQ <t> x y) (SBBQcarrymask <t> (CMPBconst y [64])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDQ)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLQ, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 64
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh8x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh8x16 <t> x y)
+ // cond:
+ // result: (ANDB (SHLB <t> x y) (SBBLcarrymask <t> (CMPWconst y [8])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDB)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLB, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 8
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh8x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh8x32 <t> x y)
+ // cond:
+ // result: (ANDB (SHLB <t> x y) (SBBLcarrymask <t> (CMPLconst y [8])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDB)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLB, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 8
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh8x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh8x64 <t> x y)
+ // cond:
+ // result: (ANDB (SHLB <t> x y) (SBBLcarrymask <t> (CMPQconst y [8])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDB)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLB, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 8
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpLsh8x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh8x8 <t> x y)
+ // cond:
+ // result: (ANDB (SHLB <t> x y) (SBBLcarrymask <t> (CMPBconst y [8])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDB)
+ v0 := b.NewValue0(v.Line, OpAMD64SHLB, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 8
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVBQSX(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVBQSX (MOVBload [off] {sym} ptr mem))
+ // cond:
+ // result: @v.Args[0].Block (MOVBQSXload <v.Type> [off] {sym} ptr mem)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBload {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[0].Args[1]
+ b = v.Args[0].Block
+ v0 := b.NewValue0(v.Line, OpAMD64MOVBQSXload, v.Type)
+ v.reset(OpCopy)
+ v.AddArg(v0)
+ v0.AuxInt = off
+ v0.Aux = sym
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ return true
+ }
+ // match: (MOVBQSX (ANDBconst [c] x))
+ // cond: c & 0x80 == 0
+ // result: (ANDQconst [c & 0x7f] x)
+ for {
+ if v.Args[0].Op != OpAMD64ANDBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ if !(c&0x80 == 0) {
+ break
+ }
+ v.reset(OpAMD64ANDQconst)
+ v.AuxInt = c & 0x7f
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVBQZX(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVBQZX (MOVBload [off] {sym} ptr mem))
+ // cond:
+ // result: @v.Args[0].Block (MOVBQZXload <v.Type> [off] {sym} ptr mem)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBload {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[0].Args[1]
+ b = v.Args[0].Block
+ v0 := b.NewValue0(v.Line, OpAMD64MOVBQZXload, v.Type)
+ v.reset(OpCopy)
+ v.AddArg(v0)
+ v0.AuxInt = off
+ v0.Aux = sym
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ return true
+ }
+ // match: (MOVBQZX (ANDBconst [c] x))
+ // cond:
+ // result: (ANDQconst [c & 0xff] x)
+ for {
+ if v.Args[0].Op != OpAMD64ANDBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64ANDQconst)
+ v.AuxInt = c & 0xff
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVBload(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVBload [off] {sym} ptr (MOVBstore [off2] {sym2} ptr2 x _))
+ // cond: sym == sym2 && off == off2 && isSamePtr(ptr, ptr2)
+ // result: x
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBstore {
+ break
+ }
+ off2 := v.Args[1].AuxInt
+ sym2 := v.Args[1].Aux
+ ptr2 := v.Args[1].Args[0]
+ x := v.Args[1].Args[1]
+ if !(sym == sym2 && off == off2 && isSamePtr(ptr, ptr2)) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVBload [off1] {sym} (ADDQconst [off2] ptr) mem)
+ // cond:
+ // result: (MOVBload [addOff(off1, off2)] {sym} ptr mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVBload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBload [off1] {sym1} (LEAQ [off2] {sym2} base) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVBload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVBload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBload [off1] {sym1} (LEAQ1 [off2] {sym2} ptr idx) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVBloadidx1 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ1 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVBloadidx1)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBload [off] {sym} (ADDQ ptr idx) mem)
+ // cond: ptr.Op != OpSB
+ // result: (MOVBloadidx1 [off] {sym} ptr idx mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQ {
+ break
+ }
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ if !(ptr.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64MOVBloadidx1)
+ v.AuxInt = off
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVBloadidx1(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVBloadidx1 [c] {sym} (ADDQconst [d] ptr) idx mem)
+ // cond:
+ // result: (MOVBloadidx1 [c+d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVBloadidx1)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBloadidx1 [c] {sym} ptr (ADDQconst [d] idx) mem)
+ // cond:
+ // result: (MOVBloadidx1 [c+d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVBloadidx1)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVBstore(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVBstore [off] {sym} ptr (MOVBQSX x) mem)
+ // cond:
+ // result: (MOVBstore [off] {sym} ptr x mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBQSX {
+ break
+ }
+ x := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVBstore)
+ v.AuxInt = off
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(x)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBstore [off] {sym} ptr (MOVBQZX x) mem)
+ // cond:
+ // result: (MOVBstore [off] {sym} ptr x mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBQZX {
+ break
+ }
+ x := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVBstore)
+ v.AuxInt = off
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(x)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBstore [off1] {sym} (ADDQconst [off2] ptr) val mem)
+ // cond:
+ // result: (MOVBstore [addOff(off1, off2)] {sym} ptr val mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVBstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBstore [off] {sym} ptr (MOVBconst [c]) mem)
+ // cond: validOff(off)
+ // result: (MOVBstoreconst [makeValAndOff(int64(int8(c)),off)] {sym} ptr mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ mem := v.Args[2]
+ if !(validOff(off)) {
+ break
+ }
+ v.reset(OpAMD64MOVBstoreconst)
+ v.AuxInt = makeValAndOff(int64(int8(c)), off)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVBstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVBstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBstore [off1] {sym1} (LEAQ1 [off2] {sym2} ptr idx) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVBstoreidx1 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ1 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVBstoreidx1)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBstore [off] {sym} (ADDQ ptr idx) val mem)
+ // cond: ptr.Op != OpSB
+ // result: (MOVBstoreidx1 [off] {sym} ptr idx val mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQ {
+ break
+ }
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(ptr.Op != OpSB) {
+ break
+ }
+ v.reset(OpAMD64MOVBstoreidx1)
+ v.AuxInt = off
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVBstoreconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVBstoreconst [sc] {s} (ADDQconst [off] ptr) mem)
+ // cond: ValAndOff(sc).canAdd(off)
+ // result: (MOVBstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
+ for {
+ sc := v.AuxInt
+ s := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(ValAndOff(sc).canAdd(off)) {
+ break
+ }
+ v.reset(OpAMD64MOVBstoreconst)
+ v.AuxInt = ValAndOff(sc).add(off)
+ v.Aux = s
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBstoreconst [sc] {sym1} (LEAQ [off] {sym2} ptr) mem)
+ // cond: canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off)
+ // result: (MOVBstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
+ for {
+ sc := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off)) {
+ break
+ }
+ v.reset(OpAMD64MOVBstoreconst)
+ v.AuxInt = ValAndOff(sc).add(off)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBstoreconst [x] {sym1} (LEAQ1 [off] {sym2} ptr idx) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVBstoreconstidx1 [ValAndOff(x).add(off)] {mergeSym(sym1,sym2)} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ1 {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVBstoreconstidx1)
+ v.AuxInt = ValAndOff(x).add(off)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBstoreconst [x] {sym} (ADDQ ptr idx) mem)
+ // cond:
+ // result: (MOVBstoreconstidx1 [x] {sym} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQ {
+ break
+ }
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVBstoreconstidx1)
+ v.AuxInt = x
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVBstoreconstidx1(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVBstoreconstidx1 [x] {sym} (ADDQconst [c] ptr) idx mem)
+ // cond:
+ // result: (MOVBstoreconstidx1 [ValAndOff(x).add(c)] {sym} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVBstoreconstidx1)
+ v.AuxInt = ValAndOff(x).add(c)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBstoreconstidx1 [x] {sym} ptr (ADDQconst [c] idx) mem)
+ // cond:
+ // result: (MOVBstoreconstidx1 [ValAndOff(x).add(c)] {sym} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVBstoreconstidx1)
+ v.AuxInt = ValAndOff(x).add(c)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVBstoreidx1(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVBstoreidx1 [c] {sym} (ADDQconst [d] ptr) idx val mem)
+ // cond:
+ // result: (MOVBstoreidx1 [c+d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVBstoreidx1)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVBstoreidx1 [c] {sym} ptr (ADDQconst [d] idx) val mem)
+ // cond:
+ // result: (MOVBstoreidx1 [c+d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVBstoreidx1)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVLQSX(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVLQSX (MOVLload [off] {sym} ptr mem))
+ // cond:
+ // result: @v.Args[0].Block (MOVLQSXload <v.Type> [off] {sym} ptr mem)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLload {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[0].Args[1]
+ b = v.Args[0].Block
+ v0 := b.NewValue0(v.Line, OpAMD64MOVLQSXload, v.Type)
+ v.reset(OpCopy)
+ v.AddArg(v0)
+ v0.AuxInt = off
+ v0.Aux = sym
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ return true
+ }
+ // match: (MOVLQSX (ANDLconst [c] x))
+ // cond: c & 0x80000000 == 0
+ // result: (ANDQconst [c & 0x7fffffff] x)
+ for {
+ if v.Args[0].Op != OpAMD64ANDLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ if !(c&0x80000000 == 0) {
+ break
+ }
+ v.reset(OpAMD64ANDQconst)
+ v.AuxInt = c & 0x7fffffff
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVLQZX(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVLQZX (MOVLload [off] {sym} ptr mem))
+ // cond:
+ // result: @v.Args[0].Block (MOVLQZXload <v.Type> [off] {sym} ptr mem)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLload {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[0].Args[1]
+ b = v.Args[0].Block
+ v0 := b.NewValue0(v.Line, OpAMD64MOVLQZXload, v.Type)
+ v.reset(OpCopy)
+ v.AddArg(v0)
+ v0.AuxInt = off
+ v0.Aux = sym
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ return true
+ }
+ // match: (MOVLQZX (ANDLconst [c] x))
+ // cond:
+ // result: (ANDQconst [c & 0xffffffff] x)
+ for {
+ if v.Args[0].Op != OpAMD64ANDLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64ANDQconst)
+ v.AuxInt = c & 0xffffffff
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVLload(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVLload [off] {sym} ptr (MOVLstore [off2] {sym2} ptr2 x _))
+ // cond: sym == sym2 && off == off2 && isSamePtr(ptr, ptr2)
+ // result: x
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLstore {
+ break
+ }
+ off2 := v.Args[1].AuxInt
+ sym2 := v.Args[1].Aux
+ ptr2 := v.Args[1].Args[0]
+ x := v.Args[1].Args[1]
+ if !(sym == sym2 && off == off2 && isSamePtr(ptr, ptr2)) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVLload [off1] {sym} (ADDQconst [off2] ptr) mem)
+ // cond:
+ // result: (MOVLload [addOff(off1, off2)] {sym} ptr mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVLload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLload [off1] {sym1} (LEAQ [off2] {sym2} base) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVLload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVLload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLload [off1] {sym1} (LEAQ4 [off2] {sym2} ptr idx) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVLloadidx4 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ4 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVLloadidx4)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVLloadidx4(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVLloadidx4 [c] {sym} (ADDQconst [d] ptr) idx mem)
+ // cond:
+ // result: (MOVLloadidx4 [c+d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVLloadidx4)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLloadidx4 [c] {sym} ptr (ADDQconst [d] idx) mem)
+ // cond:
+ // result: (MOVLloadidx4 [c+4*d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVLloadidx4)
+ v.AuxInt = c + 4*d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVLstore(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVLstore [off] {sym} ptr (MOVLQSX x) mem)
+ // cond:
+ // result: (MOVLstore [off] {sym} ptr x mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLQSX {
+ break
+ }
+ x := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVLstore)
+ v.AuxInt = off
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(x)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLstore [off] {sym} ptr (MOVLQZX x) mem)
+ // cond:
+ // result: (MOVLstore [off] {sym} ptr x mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLQZX {
+ break
+ }
+ x := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVLstore)
+ v.AuxInt = off
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(x)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLstore [off1] {sym} (ADDQconst [off2] ptr) val mem)
+ // cond:
+ // result: (MOVLstore [addOff(off1, off2)] {sym} ptr val mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVLstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLstore [off] {sym} ptr (MOVLconst [c]) mem)
+ // cond: validOff(off)
+ // result: (MOVLstoreconst [makeValAndOff(int64(int32(c)),off)] {sym} ptr mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ mem := v.Args[2]
+ if !(validOff(off)) {
+ break
+ }
+ v.reset(OpAMD64MOVLstoreconst)
+ v.AuxInt = makeValAndOff(int64(int32(c)), off)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVLstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVLstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLstore [off1] {sym1} (LEAQ4 [off2] {sym2} ptr idx) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVLstoreidx4 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ4 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVLstoreidx4)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVLstoreconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVLstoreconst [sc] {s} (ADDQconst [off] ptr) mem)
+ // cond: ValAndOff(sc).canAdd(off)
+ // result: (MOVLstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
+ for {
+ sc := v.AuxInt
+ s := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(ValAndOff(sc).canAdd(off)) {
+ break
+ }
+ v.reset(OpAMD64MOVLstoreconst)
+ v.AuxInt = ValAndOff(sc).add(off)
+ v.Aux = s
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLstoreconst [sc] {sym1} (LEAQ [off] {sym2} ptr) mem)
+ // cond: canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off)
+ // result: (MOVLstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
+ for {
+ sc := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off)) {
+ break
+ }
+ v.reset(OpAMD64MOVLstoreconst)
+ v.AuxInt = ValAndOff(sc).add(off)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLstoreconst [x] {sym1} (LEAQ4 [off] {sym2} ptr idx) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVLstoreconstidx4 [ValAndOff(x).add(off)] {mergeSym(sym1,sym2)} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ4 {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVLstoreconstidx4)
+ v.AuxInt = ValAndOff(x).add(off)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVLstoreconstidx4(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVLstoreconstidx4 [x] {sym} (ADDQconst [c] ptr) idx mem)
+ // cond:
+ // result: (MOVLstoreconstidx4 [ValAndOff(x).add(c)] {sym} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVLstoreconstidx4)
+ v.AuxInt = ValAndOff(x).add(c)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLstoreconstidx4 [x] {sym} ptr (ADDQconst [c] idx) mem)
+ // cond:
+ // result: (MOVLstoreconstidx4 [ValAndOff(x).add(4*c)] {sym} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVLstoreconstidx4)
+ v.AuxInt = ValAndOff(x).add(4 * c)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVLstoreidx4(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVLstoreidx4 [c] {sym} (ADDQconst [d] ptr) idx val mem)
+ // cond:
+ // result: (MOVLstoreidx4 [c+d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVLstoreidx4)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVLstoreidx4 [c] {sym} ptr (ADDQconst [d] idx) val mem)
+ // cond:
+ // result: (MOVLstoreidx4 [c+4*d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVLstoreidx4)
+ v.AuxInt = c + 4*d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVOload(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVOload [off1] {sym} (ADDQconst [off2] ptr) mem)
+ // cond:
+ // result: (MOVOload [addOff(off1, off2)] {sym} ptr mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVOload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVOload [off1] {sym1} (LEAQ [off2] {sym2} base) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVOload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVOload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVOstore(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVOstore [off1] {sym} (ADDQconst [off2] ptr) val mem)
+ // cond:
+ // result: (MOVOstore [addOff(off1, off2)] {sym} ptr val mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVOstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVOstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVOstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVOstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVQload(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVQload [off] {sym} ptr (MOVQstore [off2] {sym2} ptr2 x _))
+ // cond: sym == sym2 && off == off2 && isSamePtr(ptr, ptr2)
+ // result: x
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQstore {
+ break
+ }
+ off2 := v.Args[1].AuxInt
+ sym2 := v.Args[1].Aux
+ ptr2 := v.Args[1].Args[0]
+ x := v.Args[1].Args[1]
+ if !(sym == sym2 && off == off2 && isSamePtr(ptr, ptr2)) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVQload [off1] {sym} (ADDQconst [off2] ptr) mem)
+ // cond:
+ // result: (MOVQload [addOff(off1, off2)] {sym} ptr mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVQload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVQload [off1] {sym1} (LEAQ [off2] {sym2} base) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVQload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVQload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVQload [off1] {sym1} (LEAQ8 [off2] {sym2} ptr idx) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVQloadidx8 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ8 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVQloadidx8)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVQloadidx8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVQloadidx8 [c] {sym} (ADDQconst [d] ptr) idx mem)
+ // cond:
+ // result: (MOVQloadidx8 [c+d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVQloadidx8)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVQloadidx8 [c] {sym} ptr (ADDQconst [d] idx) mem)
+ // cond:
+ // result: (MOVQloadidx8 [c+8*d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVQloadidx8)
+ v.AuxInt = c + 8*d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVQstore(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVQstore [off1] {sym} (ADDQconst [off2] ptr) val mem)
+ // cond:
+ // result: (MOVQstore [addOff(off1, off2)] {sym} ptr val mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVQstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVQstore [off] {sym} ptr (MOVQconst [c]) mem)
+ // cond: validValAndOff(c,off)
+ // result: (MOVQstoreconst [makeValAndOff(c,off)] {sym} ptr mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ mem := v.Args[2]
+ if !(validValAndOff(c, off)) {
+ break
+ }
+ v.reset(OpAMD64MOVQstoreconst)
+ v.AuxInt = makeValAndOff(c, off)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVQstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVQstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVQstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVQstore [off1] {sym1} (LEAQ8 [off2] {sym2} ptr idx) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVQstoreidx8 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ8 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVQstoreidx8)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVQstoreconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVQstoreconst [sc] {s} (ADDQconst [off] ptr) mem)
+ // cond: ValAndOff(sc).canAdd(off)
+ // result: (MOVQstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
+ for {
+ sc := v.AuxInt
+ s := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(ValAndOff(sc).canAdd(off)) {
+ break
+ }
+ v.reset(OpAMD64MOVQstoreconst)
+ v.AuxInt = ValAndOff(sc).add(off)
+ v.Aux = s
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVQstoreconst [sc] {sym1} (LEAQ [off] {sym2} ptr) mem)
+ // cond: canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off)
+ // result: (MOVQstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
+ for {
+ sc := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off)) {
+ break
+ }
+ v.reset(OpAMD64MOVQstoreconst)
+ v.AuxInt = ValAndOff(sc).add(off)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVQstoreconst [x] {sym1} (LEAQ8 [off] {sym2} ptr idx) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVQstoreconstidx8 [ValAndOff(x).add(off)] {mergeSym(sym1,sym2)} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ8 {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVQstoreconstidx8)
+ v.AuxInt = ValAndOff(x).add(off)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVQstoreconstidx8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVQstoreconstidx8 [x] {sym} (ADDQconst [c] ptr) idx mem)
+ // cond:
+ // result: (MOVQstoreconstidx8 [ValAndOff(x).add(c)] {sym} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVQstoreconstidx8)
+ v.AuxInt = ValAndOff(x).add(c)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVQstoreconstidx8 [x] {sym} ptr (ADDQconst [c] idx) mem)
+ // cond:
+ // result: (MOVQstoreconstidx8 [ValAndOff(x).add(8*c)] {sym} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVQstoreconstidx8)
+ v.AuxInt = ValAndOff(x).add(8 * c)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVQstoreidx8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVQstoreidx8 [c] {sym} (ADDQconst [d] ptr) idx val mem)
+ // cond:
+ // result: (MOVQstoreidx8 [c+d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVQstoreidx8)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVQstoreidx8 [c] {sym} ptr (ADDQconst [d] idx) val mem)
+ // cond:
+ // result: (MOVQstoreidx8 [c+8*d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVQstoreidx8)
+ v.AuxInt = c + 8*d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVSDload(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVSDload [off1] {sym} (ADDQconst [off2] ptr) mem)
+ // cond:
+ // result: (MOVSDload [addOff(off1, off2)] {sym} ptr mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVSDload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSDload [off1] {sym1} (LEAQ [off2] {sym2} base) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVSDload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVSDload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSDload [off1] {sym1} (LEAQ8 [off2] {sym2} ptr idx) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVSDloadidx8 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ8 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVSDloadidx8)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVSDloadidx8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVSDloadidx8 [c] {sym} (ADDQconst [d] ptr) idx mem)
+ // cond:
+ // result: (MOVSDloadidx8 [c+d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVSDloadidx8)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSDloadidx8 [c] {sym} ptr (ADDQconst [d] idx) mem)
+ // cond:
+ // result: (MOVSDloadidx8 [c+8*d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVSDloadidx8)
+ v.AuxInt = c + 8*d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVSDstore(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVSDstore [off1] {sym} (ADDQconst [off2] ptr) val mem)
+ // cond:
+ // result: (MOVSDstore [addOff(off1, off2)] {sym} ptr val mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVSDstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSDstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVSDstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVSDstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSDstore [off1] {sym1} (LEAQ8 [off2] {sym2} ptr idx) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVSDstoreidx8 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ8 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVSDstoreidx8)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVSDstoreidx8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVSDstoreidx8 [c] {sym} (ADDQconst [d] ptr) idx val mem)
+ // cond:
+ // result: (MOVSDstoreidx8 [c+d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVSDstoreidx8)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSDstoreidx8 [c] {sym} ptr (ADDQconst [d] idx) val mem)
+ // cond:
+ // result: (MOVSDstoreidx8 [c+8*d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVSDstoreidx8)
+ v.AuxInt = c + 8*d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVSSload(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVSSload [off1] {sym} (ADDQconst [off2] ptr) mem)
+ // cond:
+ // result: (MOVSSload [addOff(off1, off2)] {sym} ptr mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVSSload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSSload [off1] {sym1} (LEAQ [off2] {sym2} base) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVSSload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVSSload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSSload [off1] {sym1} (LEAQ4 [off2] {sym2} ptr idx) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVSSloadidx4 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ4 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVSSloadidx4)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVSSloadidx4(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVSSloadidx4 [c] {sym} (ADDQconst [d] ptr) idx mem)
+ // cond:
+ // result: (MOVSSloadidx4 [c+d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVSSloadidx4)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSSloadidx4 [c] {sym} ptr (ADDQconst [d] idx) mem)
+ // cond:
+ // result: (MOVSSloadidx4 [c+4*d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVSSloadidx4)
+ v.AuxInt = c + 4*d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVSSstore(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVSSstore [off1] {sym} (ADDQconst [off2] ptr) val mem)
+ // cond:
+ // result: (MOVSSstore [addOff(off1, off2)] {sym} ptr val mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVSSstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSSstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVSSstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVSSstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSSstore [off1] {sym1} (LEAQ4 [off2] {sym2} ptr idx) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVSSstoreidx4 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ4 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVSSstoreidx4)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVSSstoreidx4(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVSSstoreidx4 [c] {sym} (ADDQconst [d] ptr) idx val mem)
+ // cond:
+ // result: (MOVSSstoreidx4 [c+d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVSSstoreidx4)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVSSstoreidx4 [c] {sym} ptr (ADDQconst [d] idx) val mem)
+ // cond:
+ // result: (MOVSSstoreidx4 [c+4*d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVSSstoreidx4)
+ v.AuxInt = c + 4*d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVWQSX(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVWQSX (MOVWload [off] {sym} ptr mem))
+ // cond:
+ // result: @v.Args[0].Block (MOVWQSXload <v.Type> [off] {sym} ptr mem)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWload {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[0].Args[1]
+ b = v.Args[0].Block
+ v0 := b.NewValue0(v.Line, OpAMD64MOVWQSXload, v.Type)
+ v.reset(OpCopy)
+ v.AddArg(v0)
+ v0.AuxInt = off
+ v0.Aux = sym
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ return true
+ }
+ // match: (MOVWQSX (ANDWconst [c] x))
+ // cond: c & 0x8000 == 0
+ // result: (ANDQconst [c & 0x7fff] x)
+ for {
+ if v.Args[0].Op != OpAMD64ANDWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ if !(c&0x8000 == 0) {
+ break
+ }
+ v.reset(OpAMD64ANDQconst)
+ v.AuxInt = c & 0x7fff
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVWQZX(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVWQZX (MOVWload [off] {sym} ptr mem))
+ // cond:
+ // result: @v.Args[0].Block (MOVWQZXload <v.Type> [off] {sym} ptr mem)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWload {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[0].Args[1]
+ b = v.Args[0].Block
+ v0 := b.NewValue0(v.Line, OpAMD64MOVWQZXload, v.Type)
+ v.reset(OpCopy)
+ v.AddArg(v0)
+ v0.AuxInt = off
+ v0.Aux = sym
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ return true
+ }
+ // match: (MOVWQZX (ANDWconst [c] x))
+ // cond:
+ // result: (ANDQconst [c & 0xffff] x)
+ for {
+ if v.Args[0].Op != OpAMD64ANDWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64ANDQconst)
+ v.AuxInt = c & 0xffff
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVWload(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVWload [off] {sym} ptr (MOVWstore [off2] {sym2} ptr2 x _))
+ // cond: sym == sym2 && off == off2 && isSamePtr(ptr, ptr2)
+ // result: x
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWstore {
+ break
+ }
+ off2 := v.Args[1].AuxInt
+ sym2 := v.Args[1].Aux
+ ptr2 := v.Args[1].Args[0]
+ x := v.Args[1].Args[1]
+ if !(sym == sym2 && off == off2 && isSamePtr(ptr, ptr2)) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVWload [off1] {sym} (ADDQconst [off2] ptr) mem)
+ // cond:
+ // result: (MOVWload [addOff(off1, off2)] {sym} ptr mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVWload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWload [off1] {sym1} (LEAQ [off2] {sym2} base) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVWload [addOff(off1,off2)] {mergeSym(sym1,sym2)} base mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVWload)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWload [off1] {sym1} (LEAQ2 [off2] {sym2} ptr idx) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVWloadidx2 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ2 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVWloadidx2)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVWloadidx2(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVWloadidx2 [c] {sym} (ADDQconst [d] ptr) idx mem)
+ // cond:
+ // result: (MOVWloadidx2 [c+d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVWloadidx2)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWloadidx2 [c] {sym} ptr (ADDQconst [d] idx) mem)
+ // cond:
+ // result: (MOVWloadidx2 [c+2*d] {sym} ptr idx mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVWloadidx2)
+ v.AuxInt = c + 2*d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVWstore(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVWstore [off] {sym} ptr (MOVWQSX x) mem)
+ // cond:
+ // result: (MOVWstore [off] {sym} ptr x mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWQSX {
+ break
+ }
+ x := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVWstore)
+ v.AuxInt = off
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(x)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWstore [off] {sym} ptr (MOVWQZX x) mem)
+ // cond:
+ // result: (MOVWstore [off] {sym} ptr x mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWQZX {
+ break
+ }
+ x := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVWstore)
+ v.AuxInt = off
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(x)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWstore [off1] {sym} (ADDQconst [off2] ptr) val mem)
+ // cond:
+ // result: (MOVWstore [addOff(off1, off2)] {sym} ptr val mem)
+ for {
+ off1 := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVWstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWstore [off] {sym} ptr (MOVWconst [c]) mem)
+ // cond: validOff(off)
+ // result: (MOVWstoreconst [makeValAndOff(int64(int16(c)),off)] {sym} ptr mem)
+ for {
+ off := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ mem := v.Args[2]
+ if !(validOff(off)) {
+ break
+ }
+ v.reset(OpAMD64MOVWstoreconst)
+ v.AuxInt = makeValAndOff(int64(int16(c)), off)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWstore [off1] {sym1} (LEAQ [off2] {sym2} base) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVWstore [addOff(off1,off2)] {mergeSym(sym1,sym2)} base val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ base := v.Args[0].Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVWstore)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(base)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWstore [off1] {sym1} (LEAQ2 [off2] {sym2} ptr idx) val mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVWstoreidx2 [addOff(off1, off2)] {mergeSym(sym1,sym2)} ptr idx val mem)
+ for {
+ off1 := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ2 {
+ break
+ }
+ off2 := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVWstoreidx2)
+ v.AuxInt = addOff(off1, off2)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVWstoreconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVWstoreconst [sc] {s} (ADDQconst [off] ptr) mem)
+ // cond: ValAndOff(sc).canAdd(off)
+ // result: (MOVWstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
+ for {
+ sc := v.AuxInt
+ s := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ off := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(ValAndOff(sc).canAdd(off)) {
+ break
+ }
+ v.reset(OpAMD64MOVWstoreconst)
+ v.AuxInt = ValAndOff(sc).add(off)
+ v.Aux = s
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWstoreconst [sc] {sym1} (LEAQ [off] {sym2} ptr) mem)
+ // cond: canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off)
+ // result: (MOVWstoreconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
+ for {
+ sc := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2) && ValAndOff(sc).canAdd(off)) {
+ break
+ }
+ v.reset(OpAMD64MOVWstoreconst)
+ v.AuxInt = ValAndOff(sc).add(off)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWstoreconst [x] {sym1} (LEAQ2 [off] {sym2} ptr idx) mem)
+ // cond: canMergeSym(sym1, sym2)
+ // result: (MOVWstoreconstidx2 [ValAndOff(x).add(off)] {mergeSym(sym1,sym2)} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym1 := v.Aux
+ if v.Args[0].Op != OpAMD64LEAQ2 {
+ break
+ }
+ off := v.Args[0].AuxInt
+ sym2 := v.Args[0].Aux
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[0].Args[1]
+ mem := v.Args[1]
+ if !(canMergeSym(sym1, sym2)) {
+ break
+ }
+ v.reset(OpAMD64MOVWstoreconstidx2)
+ v.AuxInt = ValAndOff(x).add(off)
+ v.Aux = mergeSym(sym1, sym2)
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVWstoreconstidx2(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVWstoreconstidx2 [x] {sym} (ADDQconst [c] ptr) idx mem)
+ // cond:
+ // result: (MOVWstoreconstidx2 [ValAndOff(x).add(c)] {sym} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVWstoreconstidx2)
+ v.AuxInt = ValAndOff(x).add(c)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWstoreconstidx2 [x] {sym} ptr (ADDQconst [c] idx) mem)
+ // cond:
+ // result: (MOVWstoreconstidx2 [ValAndOff(x).add(2*c)] {sym} ptr idx mem)
+ for {
+ x := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVWstoreconstidx2)
+ v.AuxInt = ValAndOff(x).add(2 * c)
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MOVWstoreidx2(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MOVWstoreidx2 [c] {sym} (ADDQconst [d] ptr) idx val mem)
+ // cond:
+ // result: (MOVWstoreidx2 [c+d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ if v.Args[0].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ ptr := v.Args[0].Args[0]
+ idx := v.Args[1]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVWstoreidx2)
+ v.AuxInt = c + d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (MOVWstoreidx2 [c] {sym} ptr (ADDQconst [d] idx) val mem)
+ // cond:
+ // result: (MOVWstoreidx2 [c+2*d] {sym} ptr idx val mem)
+ for {
+ c := v.AuxInt
+ sym := v.Aux
+ ptr := v.Args[0]
+ if v.Args[1].Op != OpAMD64ADDQconst {
+ break
+ }
+ d := v.Args[1].AuxInt
+ idx := v.Args[1].Args[0]
+ val := v.Args[2]
+ mem := v.Args[3]
+ v.reset(OpAMD64MOVWstoreidx2)
+ v.AuxInt = c + 2*d
+ v.Aux = sym
+ v.AddArg(ptr)
+ v.AddArg(idx)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MULB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MULB x (MOVBconst [c]))
+ // cond:
+ // result: (MULBconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64MULBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (MULB (MOVBconst [c]) x)
+ // cond:
+ // result: (MULBconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64MULBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MULBconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MULBconst [c] (MOVBconst [d]))
+ // cond:
+ // result: (MOVBconst [c*d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = c * d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MULL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MULL x (MOVLconst [c]))
+ // cond:
+ // result: (MULLconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64MULLconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (MULL (MOVLconst [c]) x)
+ // cond:
+ // result: (MULLconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64MULLconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MULLconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MULLconst [c] (MOVLconst [d]))
+ // cond:
+ // result: (MOVLconst [c*d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = c * d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MULQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MULQ x (MOVQconst [c]))
+ // cond: is32Bit(c)
+ // result: (MULQconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64MULQconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (MULQ (MOVQconst [c]) x)
+ // cond: is32Bit(c)
+ // result: (MULQconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64MULQconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MULQconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MULQconst [-1] x)
+ // cond:
+ // result: (NEGQ x)
+ for {
+ if v.AuxInt != -1 {
+ break
+ }
+ x := v.Args[0]
+ v.reset(OpAMD64NEGQ)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MULQconst [0] _)
+ // cond:
+ // result: (MOVQconst [0])
+ for {
+ if v.AuxInt != 0 {
+ break
+ }
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (MULQconst [1] x)
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 1 {
+ break
+ }
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MULQconst [3] x)
+ // cond:
+ // result: (LEAQ2 x x)
+ for {
+ if v.AuxInt != 3 {
+ break
+ }
+ x := v.Args[0]
+ v.reset(OpAMD64LEAQ2)
+ v.AddArg(x)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MULQconst [5] x)
+ // cond:
+ // result: (LEAQ4 x x)
+ for {
+ if v.AuxInt != 5 {
+ break
+ }
+ x := v.Args[0]
+ v.reset(OpAMD64LEAQ4)
+ v.AddArg(x)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MULQconst [9] x)
+ // cond:
+ // result: (LEAQ8 x x)
+ for {
+ if v.AuxInt != 9 {
+ break
+ }
+ x := v.Args[0]
+ v.reset(OpAMD64LEAQ8)
+ v.AddArg(x)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MULQconst [c] x)
+ // cond: isPowerOfTwo(c)
+ // result: (SHLQconst [log2(c)] x)
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(isPowerOfTwo(c)) {
+ break
+ }
+ v.reset(OpAMD64SHLQconst)
+ v.AuxInt = log2(c)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MULQconst [c] (MOVQconst [d]))
+ // cond:
+ // result: (MOVQconst [c*d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = c * d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MULW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MULW x (MOVWconst [c]))
+ // cond:
+ // result: (MULWconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64MULWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (MULW (MOVWconst [c]) x)
+ // cond:
+ // result: (MULWconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64MULWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64MULWconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (MULWconst [c] (MOVWconst [d]))
+ // cond:
+ // result: (MOVWconst [c*d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = c * d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMod16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mod16 x y)
+ // cond:
+ // result: (MODW x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MODW)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMod16u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mod16u x y)
+ // cond:
+ // result: (MODWU x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MODWU)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMod32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mod32 x y)
+ // cond:
+ // result: (MODL x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MODL)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMod32u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mod32u x y)
+ // cond:
+ // result: (MODLU x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MODLU)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMod64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mod64 x y)
+ // cond:
+ // result: (MODQ x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MODQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMod64u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mod64u x y)
+ // cond:
+ // result: (MODQU x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MODQU)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMod8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mod8 x y)
+ // cond:
+ // result: (MODW (SignExt8to16 x) (SignExt8to16 y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MODW)
+ v0 := b.NewValue0(v.Line, OpSignExt8to16, config.fe.TypeInt16())
+ v0.AddArg(x)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpSignExt8to16, config.fe.TypeInt16())
+ v1.AddArg(y)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMod8u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mod8u x y)
+ // cond:
+ // result: (MODWU (ZeroExt8to16 x) (ZeroExt8to16 y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MODWU)
+ v0 := b.NewValue0(v.Line, OpZeroExt8to16, config.fe.TypeUInt16())
+ v0.AddArg(x)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpZeroExt8to16, config.fe.TypeUInt16())
+ v1.AddArg(y)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMove(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Move [0] _ _ mem)
+ // cond:
+ // result: mem
+ for {
+ if v.AuxInt != 0 {
+ break
+ }
+ mem := v.Args[2]
+ v.reset(OpCopy)
+ v.Type = mem.Type
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Move [1] dst src mem)
+ // cond:
+ // result: (MOVBstore dst (MOVBload src mem) mem)
+ for {
+ if v.AuxInt != 1 {
+ break
+ }
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVBstore)
+ v.AddArg(dst)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVBload, config.fe.TypeUInt8())
+ v0.AddArg(src)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Move [2] dst src mem)
+ // cond:
+ // result: (MOVWstore dst (MOVWload src mem) mem)
+ for {
+ if v.AuxInt != 2 {
+ break
+ }
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVWstore)
+ v.AddArg(dst)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVWload, config.fe.TypeUInt16())
+ v0.AddArg(src)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Move [4] dst src mem)
+ // cond:
+ // result: (MOVLstore dst (MOVLload src mem) mem)
+ for {
+ if v.AuxInt != 4 {
+ break
+ }
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVLstore)
+ v.AddArg(dst)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVLload, config.fe.TypeUInt32())
+ v0.AddArg(src)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Move [8] dst src mem)
+ // cond:
+ // result: (MOVQstore dst (MOVQload src mem) mem)
+ for {
+ if v.AuxInt != 8 {
+ break
+ }
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVQstore)
+ v.AddArg(dst)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVQload, config.fe.TypeUInt64())
+ v0.AddArg(src)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Move [16] dst src mem)
+ // cond:
+ // result: (MOVOstore dst (MOVOload src mem) mem)
+ for {
+ if v.AuxInt != 16 {
+ break
+ }
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVOstore)
+ v.AddArg(dst)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVOload, TypeInt128)
+ v0.AddArg(src)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Move [3] dst src mem)
+ // cond:
+ // result: (MOVBstore [2] dst (MOVBload [2] src mem) (MOVWstore dst (MOVWload src mem) mem))
+ for {
+ if v.AuxInt != 3 {
+ break
+ }
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVBstore)
+ v.AuxInt = 2
+ v.AddArg(dst)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVBload, config.fe.TypeUInt8())
+ v0.AuxInt = 2
+ v0.AddArg(src)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64MOVWstore, TypeMem)
+ v1.AddArg(dst)
+ v2 := b.NewValue0(v.Line, OpAMD64MOVWload, config.fe.TypeUInt16())
+ v2.AddArg(src)
+ v2.AddArg(mem)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Move [5] dst src mem)
+ // cond:
+ // result: (MOVBstore [4] dst (MOVBload [4] src mem) (MOVLstore dst (MOVLload src mem) mem))
+ for {
+ if v.AuxInt != 5 {
+ break
+ }
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVBstore)
+ v.AuxInt = 4
+ v.AddArg(dst)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVBload, config.fe.TypeUInt8())
+ v0.AuxInt = 4
+ v0.AddArg(src)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64MOVLstore, TypeMem)
+ v1.AddArg(dst)
+ v2 := b.NewValue0(v.Line, OpAMD64MOVLload, config.fe.TypeUInt32())
+ v2.AddArg(src)
+ v2.AddArg(mem)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Move [6] dst src mem)
+ // cond:
+ // result: (MOVWstore [4] dst (MOVWload [4] src mem) (MOVLstore dst (MOVLload src mem) mem))
+ for {
+ if v.AuxInt != 6 {
+ break
+ }
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVWstore)
+ v.AuxInt = 4
+ v.AddArg(dst)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVWload, config.fe.TypeUInt16())
+ v0.AuxInt = 4
+ v0.AddArg(src)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64MOVLstore, TypeMem)
+ v1.AddArg(dst)
+ v2 := b.NewValue0(v.Line, OpAMD64MOVLload, config.fe.TypeUInt32())
+ v2.AddArg(src)
+ v2.AddArg(mem)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Move [7] dst src mem)
+ // cond:
+ // result: (MOVLstore [3] dst (MOVLload [3] src mem) (MOVLstore dst (MOVLload src mem) mem))
+ for {
+ if v.AuxInt != 7 {
+ break
+ }
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVLstore)
+ v.AuxInt = 3
+ v.AddArg(dst)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVLload, config.fe.TypeUInt32())
+ v0.AuxInt = 3
+ v0.AddArg(src)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64MOVLstore, TypeMem)
+ v1.AddArg(dst)
+ v2 := b.NewValue0(v.Line, OpAMD64MOVLload, config.fe.TypeUInt32())
+ v2.AddArg(src)
+ v2.AddArg(mem)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Move [size] dst src mem)
+ // cond: size > 8 && size < 16
+ // result: (MOVQstore [size-8] dst (MOVQload [size-8] src mem) (MOVQstore dst (MOVQload src mem) mem))
+ for {
+ size := v.AuxInt
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ if !(size > 8 && size < 16) {
+ break
+ }
+ v.reset(OpAMD64MOVQstore)
+ v.AuxInt = size - 8
+ v.AddArg(dst)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVQload, config.fe.TypeUInt64())
+ v0.AuxInt = size - 8
+ v0.AddArg(src)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64MOVQstore, TypeMem)
+ v1.AddArg(dst)
+ v2 := b.NewValue0(v.Line, OpAMD64MOVQload, config.fe.TypeUInt64())
+ v2.AddArg(src)
+ v2.AddArg(mem)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Move [size] dst src mem)
+ // cond: size > 16 && size%16 != 0 && size%16 <= 8
+ // result: (Move [size-size%16] (ADDQconst <dst.Type> dst [size%16]) (ADDQconst <src.Type> src [size%16]) (MOVQstore dst (MOVQload src mem) mem))
+ for {
+ size := v.AuxInt
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ if !(size > 16 && size%16 != 0 && size%16 <= 8) {
+ break
+ }
+ v.reset(OpMove)
+ v.AuxInt = size - size%16
+ v0 := b.NewValue0(v.Line, OpAMD64ADDQconst, dst.Type)
+ v0.AddArg(dst)
+ v0.AuxInt = size % 16
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64ADDQconst, src.Type)
+ v1.AddArg(src)
+ v1.AuxInt = size % 16
+ v.AddArg(v1)
+ v2 := b.NewValue0(v.Line, OpAMD64MOVQstore, TypeMem)
+ v2.AddArg(dst)
+ v3 := b.NewValue0(v.Line, OpAMD64MOVQload, config.fe.TypeUInt64())
+ v3.AddArg(src)
+ v3.AddArg(mem)
+ v2.AddArg(v3)
+ v2.AddArg(mem)
+ v.AddArg(v2)
+ return true
+ }
+ // match: (Move [size] dst src mem)
+ // cond: size > 16 && size%16 != 0 && size%16 > 8
+ // result: (Move [size-size%16] (ADDQconst <dst.Type> dst [size%16]) (ADDQconst <src.Type> src [size%16]) (MOVOstore dst (MOVOload src mem) mem))
+ for {
+ size := v.AuxInt
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ if !(size > 16 && size%16 != 0 && size%16 > 8) {
+ break
+ }
+ v.reset(OpMove)
+ v.AuxInt = size - size%16
+ v0 := b.NewValue0(v.Line, OpAMD64ADDQconst, dst.Type)
+ v0.AddArg(dst)
+ v0.AuxInt = size % 16
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64ADDQconst, src.Type)
+ v1.AddArg(src)
+ v1.AuxInt = size % 16
+ v.AddArg(v1)
+ v2 := b.NewValue0(v.Line, OpAMD64MOVOstore, TypeMem)
+ v2.AddArg(dst)
+ v3 := b.NewValue0(v.Line, OpAMD64MOVOload, TypeInt128)
+ v3.AddArg(src)
+ v3.AddArg(mem)
+ v2.AddArg(v3)
+ v2.AddArg(mem)
+ v.AddArg(v2)
+ return true
+ }
+ // match: (Move [size] dst src mem)
+ // cond: size >= 32 && size <= 16*64 && size%16 == 0
+ // result: (DUFFCOPY [14*(64-size/16)] dst src mem)
+ for {
+ size := v.AuxInt
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ if !(size >= 32 && size <= 16*64 && size%16 == 0) {
+ break
+ }
+ v.reset(OpAMD64DUFFCOPY)
+ v.AuxInt = 14 * (64 - size/16)
+ v.AddArg(dst)
+ v.AddArg(src)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Move [size] dst src mem)
+ // cond: size > 16*64 && size%8 == 0
+ // result: (REPMOVSQ dst src (MOVQconst [size/8]) mem)
+ for {
+ size := v.AuxInt
+ dst := v.Args[0]
+ src := v.Args[1]
+ mem := v.Args[2]
+ if !(size > 16*64 && size%8 == 0) {
+ break
+ }
+ v.reset(OpAMD64REPMOVSQ)
+ v.AddArg(dst)
+ v.AddArg(src)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVQconst, config.fe.TypeUInt64())
+ v0.AuxInt = size / 8
+ v.AddArg(v0)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMul16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mul16 x y)
+ // cond:
+ // result: (MULW x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MULW)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMul32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mul32 x y)
+ // cond:
+ // result: (MULL x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MULL)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMul32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mul32F x y)
+ // cond:
+ // result: (MULSS x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MULSS)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMul64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mul64 x y)
+ // cond:
+ // result: (MULQ x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MULQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMul64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mul64F x y)
+ // cond:
+ // result: (MULSD x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MULSD)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpMul8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mul8 x y)
+ // cond:
+ // result: (MULB x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64MULB)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64NEGB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NEGB (MOVBconst [c]))
+ // cond:
+ // result: (MOVBconst [-c])
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = -c
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64NEGL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NEGL (MOVLconst [c]))
+ // cond:
+ // result: (MOVLconst [-c])
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = -c
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64NEGQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NEGQ (MOVQconst [c]))
+ // cond:
+ // result: (MOVQconst [-c])
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = -c
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64NEGW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NEGW (MOVWconst [c]))
+ // cond:
+ // result: (MOVWconst [-c])
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = -c
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64NOTB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NOTB (MOVBconst [c]))
+ // cond:
+ // result: (MOVBconst [^c])
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = ^c
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64NOTL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NOTL (MOVLconst [c]))
+ // cond:
+ // result: (MOVLconst [^c])
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = ^c
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64NOTQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NOTQ (MOVQconst [c]))
+ // cond:
+ // result: (MOVQconst [^c])
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = ^c
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64NOTW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NOTW (MOVWconst [c]))
+ // cond:
+ // result: (MOVWconst [^c])
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = ^c
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeg16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neg16 x)
+ // cond:
+ // result: (NEGW x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64NEGW)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeg32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neg32 x)
+ // cond:
+ // result: (NEGL x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64NEGL)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeg32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neg32F x)
+ // cond:
+ // result: (PXOR x (MOVSSconst <config.Frontend().TypeFloat32()> [f2i(math.Copysign(0, -1))]))
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64PXOR)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVSSconst, config.Frontend().TypeFloat32())
+ v0.AuxInt = f2i(math.Copysign(0, -1))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeg64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neg64 x)
+ // cond:
+ // result: (NEGQ x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64NEGQ)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeg64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neg64F x)
+ // cond:
+ // result: (PXOR x (MOVSDconst <config.Frontend().TypeFloat64()> [f2i(math.Copysign(0, -1))]))
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64PXOR)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVSDconst, config.Frontend().TypeFloat64())
+ v0.AuxInt = f2i(math.Copysign(0, -1))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeg8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neg8 x)
+ // cond:
+ // result: (NEGB x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64NEGB)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeq16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neq16 x y)
+ // cond:
+ // result: (SETNE (CMPW x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETNE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPW, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeq32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neq32 x y)
+ // cond:
+ // result: (SETNE (CMPL x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETNE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPL, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeq32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neq32F x y)
+ // cond:
+ // result: (SETNEF (UCOMISS x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETNEF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISS, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeq64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neq64 x y)
+ // cond:
+ // result: (SETNE (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETNE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeq64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neq64F x y)
+ // cond:
+ // result: (SETNEF (UCOMISD x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETNEF)
+ v0 := b.NewValue0(v.Line, OpAMD64UCOMISD, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeq8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neq8 x y)
+ // cond:
+ // result: (SETNE (CMPB x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETNE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPB, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNeqPtr(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NeqPtr x y)
+ // cond:
+ // result: (SETNE (CMPQ x y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SETNE)
+ v0 := b.NewValue0(v.Line, OpAMD64CMPQ, TypeFlags)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNilCheck(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NilCheck ptr mem)
+ // cond:
+ // result: (LoweredNilCheck ptr mem)
+ for {
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64LoweredNilCheck)
+ v.AddArg(ptr)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpNot(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Not x)
+ // cond:
+ // result: (XORBconst [1] x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64XORBconst)
+ v.AuxInt = 1
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ORB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ORB x (MOVBconst [c]))
+ // cond:
+ // result: (ORBconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64ORBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORB (MOVBconst [c]) x)
+ // cond:
+ // result: (ORBconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64ORBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORB x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ORBconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ORBconst [c] x)
+ // cond: int8(c)==0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int8(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORBconst [c] _)
+ // cond: int8(c)==-1
+ // result: (MOVBconst [-1])
+ for {
+ c := v.AuxInt
+ if !(int8(c) == -1) {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = -1
+ return true
+ }
+ // match: (ORBconst [c] (MOVBconst [d]))
+ // cond:
+ // result: (MOVBconst [c|d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = c | d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ORL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ORL x (MOVLconst [c]))
+ // cond:
+ // result: (ORLconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64ORLconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORL (MOVLconst [c]) x)
+ // cond:
+ // result: (ORLconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64ORLconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORL x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ORLconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ORLconst [c] x)
+ // cond: int32(c)==0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int32(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORLconst [c] _)
+ // cond: int32(c)==-1
+ // result: (MOVLconst [-1])
+ for {
+ c := v.AuxInt
+ if !(int32(c) == -1) {
+ break
+ }
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = -1
+ return true
+ }
+ // match: (ORLconst [c] (MOVLconst [d]))
+ // cond:
+ // result: (MOVLconst [c|d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = c | d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ORQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ORQ x (MOVQconst [c]))
+ // cond: is32Bit(c)
+ // result: (ORQconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64ORQconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORQ (MOVQconst [c]) x)
+ // cond: is32Bit(c)
+ // result: (ORQconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64ORQconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORQ x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ORQconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ORQconst [0] x)
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 0 {
+ break
+ }
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORQconst [-1] _)
+ // cond:
+ // result: (MOVQconst [-1])
+ for {
+ if v.AuxInt != -1 {
+ break
+ }
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = -1
+ return true
+ }
+ // match: (ORQconst [c] (MOVQconst [d]))
+ // cond:
+ // result: (MOVQconst [c|d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = c | d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ORW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ORW x (MOVWconst [c]))
+ // cond:
+ // result: (ORWconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64ORWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORW (MOVWconst [c]) x)
+ // cond:
+ // result: (ORWconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64ORWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORW x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64ORWconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ORWconst [c] x)
+ // cond: int16(c)==0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int16(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (ORWconst [c] _)
+ // cond: int16(c)==-1
+ // result: (MOVWconst [-1])
+ for {
+ c := v.AuxInt
+ if !(int16(c) == -1) {
+ break
+ }
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = -1
+ return true
+ }
+ // match: (ORWconst [c] (MOVWconst [d]))
+ // cond:
+ // result: (MOVWconst [c|d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = c | d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpOffPtr(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (OffPtr [off] ptr)
+ // cond:
+ // result: (ADDQconst [off] ptr)
+ for {
+ off := v.AuxInt
+ ptr := v.Args[0]
+ v.reset(OpAMD64ADDQconst)
+ v.AuxInt = off
+ v.AddArg(ptr)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpOr16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Or16 x y)
+ // cond:
+ // result: (ORW x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ORW)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpOr32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Or32 x y)
+ // cond:
+ // result: (ORL x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ORL)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpOr64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Or64 x y)
+ // cond:
+ // result: (ORQ x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ORQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpOr8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Or8 x y)
+ // cond:
+ // result: (ORB x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ORB)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh16Ux16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16Ux16 <t> x y)
+ // cond:
+ // result: (ANDW (SHRW <t> x y) (SBBLcarrymask <t> (CMPWconst y [16])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDW)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRW, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 16
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh16Ux32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16Ux32 <t> x y)
+ // cond:
+ // result: (ANDW (SHRW <t> x y) (SBBLcarrymask <t> (CMPLconst y [16])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDW)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRW, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 16
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh16Ux64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16Ux64 <t> x y)
+ // cond:
+ // result: (ANDW (SHRW <t> x y) (SBBLcarrymask <t> (CMPQconst y [16])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDW)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRW, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 16
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh16Ux8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16Ux8 <t> x y)
+ // cond:
+ // result: (ANDW (SHRW <t> x y) (SBBLcarrymask <t> (CMPBconst y [16])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDW)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRW, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 16
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh16x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16x16 <t> x y)
+ // cond:
+ // result: (SARW <t> x (ORW <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPWconst y [16])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARW)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORW, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 16
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh16x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16x32 <t> x y)
+ // cond:
+ // result: (SARW <t> x (ORL <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPLconst y [16])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARW)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORL, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 16
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh16x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16x64 <t> x y)
+ // cond:
+ // result: (SARW <t> x (ORQ <y.Type> y (NOTQ <y.Type> (SBBQcarrymask <y.Type> (CMPQconst y [16])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARW)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORQ, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTQ, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 16
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh16x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16x8 <t> x y)
+ // cond:
+ // result: (SARW <t> x (ORB <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPBconst y [16])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARW)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORB, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 16
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh32Ux16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32Ux16 <t> x y)
+ // cond:
+ // result: (ANDL (SHRL <t> x y) (SBBLcarrymask <t> (CMPWconst y [32])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDL)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRL, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 32
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh32Ux32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32Ux32 <t> x y)
+ // cond:
+ // result: (ANDL (SHRL <t> x y) (SBBLcarrymask <t> (CMPLconst y [32])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDL)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRL, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 32
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh32Ux64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32Ux64 <t> x y)
+ // cond:
+ // result: (ANDL (SHRL <t> x y) (SBBLcarrymask <t> (CMPQconst y [32])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDL)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRL, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 32
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh32Ux8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32Ux8 <t> x y)
+ // cond:
+ // result: (ANDL (SHRL <t> x y) (SBBLcarrymask <t> (CMPBconst y [32])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDL)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRL, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 32
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh32x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32x16 <t> x y)
+ // cond:
+ // result: (SARL <t> x (ORW <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPWconst y [32])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARL)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORW, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 32
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh32x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32x32 <t> x y)
+ // cond:
+ // result: (SARL <t> x (ORL <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPLconst y [32])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARL)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORL, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 32
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh32x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32x64 <t> x y)
+ // cond:
+ // result: (SARL <t> x (ORQ <y.Type> y (NOTQ <y.Type> (SBBQcarrymask <y.Type> (CMPQconst y [32])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARL)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORQ, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTQ, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 32
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh32x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32x8 <t> x y)
+ // cond:
+ // result: (SARL <t> x (ORB <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPBconst y [32])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARL)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORB, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 32
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh64Ux16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64Ux16 <t> x y)
+ // cond:
+ // result: (ANDQ (SHRQ <t> x y) (SBBQcarrymask <t> (CMPWconst y [64])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDQ)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRQ, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 64
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh64Ux32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64Ux32 <t> x y)
+ // cond:
+ // result: (ANDQ (SHRQ <t> x y) (SBBQcarrymask <t> (CMPLconst y [64])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDQ)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRQ, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 64
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh64Ux64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64Ux64 <t> x y)
+ // cond:
+ // result: (ANDQ (SHRQ <t> x y) (SBBQcarrymask <t> (CMPQconst y [64])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDQ)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRQ, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 64
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh64Ux8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64Ux8 <t> x y)
+ // cond:
+ // result: (ANDQ (SHRQ <t> x y) (SBBQcarrymask <t> (CMPBconst y [64])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDQ)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRQ, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 64
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh64x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64x16 <t> x y)
+ // cond:
+ // result: (SARQ <t> x (ORW <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPWconst y [64])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARQ)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORW, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 64
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh64x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64x32 <t> x y)
+ // cond:
+ // result: (SARQ <t> x (ORL <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPLconst y [64])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARQ)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORL, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 64
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh64x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64x64 <t> x y)
+ // cond:
+ // result: (SARQ <t> x (ORQ <y.Type> y (NOTQ <y.Type> (SBBQcarrymask <y.Type> (CMPQconst y [64])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARQ)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORQ, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTQ, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 64
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh64x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64x8 <t> x y)
+ // cond:
+ // result: (SARQ <t> x (ORB <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPBconst y [64])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARQ)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORB, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 64
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh8Ux16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8Ux16 <t> x y)
+ // cond:
+ // result: (ANDB (SHRB <t> x y) (SBBLcarrymask <t> (CMPWconst y [8])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDB)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRB, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 8
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh8Ux32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8Ux32 <t> x y)
+ // cond:
+ // result: (ANDB (SHRB <t> x y) (SBBLcarrymask <t> (CMPLconst y [8])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDB)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRB, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 8
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh8Ux64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8Ux64 <t> x y)
+ // cond:
+ // result: (ANDB (SHRB <t> x y) (SBBLcarrymask <t> (CMPQconst y [8])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDB)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRB, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 8
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh8Ux8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8Ux8 <t> x y)
+ // cond:
+ // result: (ANDB (SHRB <t> x y) (SBBLcarrymask <t> (CMPBconst y [8])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64ANDB)
+ v0 := b.NewValue0(v.Line, OpAMD64SHRB, t)
+ v0.AddArg(x)
+ v0.AddArg(y)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, t)
+ v2 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v2.AddArg(y)
+ v2.AuxInt = 8
+ v1.AddArg(v2)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh8x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8x16 <t> x y)
+ // cond:
+ // result: (SARB <t> x (ORW <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPWconst y [8])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARB)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORW, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPWconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 8
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh8x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8x32 <t> x y)
+ // cond:
+ // result: (SARB <t> x (ORL <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPLconst y [8])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARB)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORL, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPLconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 8
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh8x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8x64 <t> x y)
+ // cond:
+ // result: (SARB <t> x (ORQ <y.Type> y (NOTQ <y.Type> (SBBQcarrymask <y.Type> (CMPQconst y [8])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARB)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORQ, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTQ, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBQcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPQconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 8
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpRsh8x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8x8 <t> x y)
+ // cond:
+ // result: (SARB <t> x (ORB <y.Type> y (NOTL <y.Type> (SBBLcarrymask <y.Type> (CMPBconst y [8])))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SARB)
+ v.Type = t
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpAMD64ORB, y.Type)
+ v0.AddArg(y)
+ v1 := b.NewValue0(v.Line, OpAMD64NOTL, y.Type)
+ v2 := b.NewValue0(v.Line, OpAMD64SBBLcarrymask, y.Type)
+ v3 := b.NewValue0(v.Line, OpAMD64CMPBconst, TypeFlags)
+ v3.AddArg(y)
+ v3.AuxInt = 8
+ v2.AddArg(v3)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SARB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SARB x (MOVQconst [c]))
+ // cond:
+ // result: (SARBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARB x (MOVLconst [c]))
+ // cond:
+ // result: (SARBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARB x (MOVWconst [c]))
+ // cond:
+ // result: (SARBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARB x (MOVBconst [c]))
+ // cond:
+ // result: (SARBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SARBconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SARBconst [c] (MOVQconst [d]))
+ // cond:
+ // result: (MOVQconst [d>>uint64(c)])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = d >> uint64(c)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SARL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SARL x (MOVQconst [c]))
+ // cond:
+ // result: (SARLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARL x (MOVLconst [c]))
+ // cond:
+ // result: (SARLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARL x (MOVWconst [c]))
+ // cond:
+ // result: (SARLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARL x (MOVBconst [c]))
+ // cond:
+ // result: (SARLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SARLconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SARLconst [c] (MOVQconst [d]))
+ // cond:
+ // result: (MOVQconst [d>>uint64(c)])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = d >> uint64(c)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SARQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SARQ x (MOVQconst [c]))
+ // cond:
+ // result: (SARQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARQ x (MOVLconst [c]))
+ // cond:
+ // result: (SARQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARQ x (MOVWconst [c]))
+ // cond:
+ // result: (SARQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARQ x (MOVBconst [c]))
+ // cond:
+ // result: (SARQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SARQconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SARQconst [c] (MOVQconst [d]))
+ // cond:
+ // result: (MOVQconst [d>>uint64(c)])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = d >> uint64(c)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SARW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SARW x (MOVQconst [c]))
+ // cond:
+ // result: (SARWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARW x (MOVLconst [c]))
+ // cond:
+ // result: (SARWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARW x (MOVWconst [c]))
+ // cond:
+ // result: (SARWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SARW x (MOVBconst [c]))
+ // cond:
+ // result: (SARWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SARWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SARWconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SARWconst [c] (MOVQconst [d]))
+ // cond:
+ // result: (MOVQconst [d>>uint64(c)])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = d >> uint64(c)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SBBLcarrymask(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SBBLcarrymask (FlagEQ))
+ // cond:
+ // result: (MOVLconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SBBLcarrymask (FlagLT_ULT))
+ // cond:
+ // result: (MOVLconst [-1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = -1
+ return true
+ }
+ // match: (SBBLcarrymask (FlagLT_UGT))
+ // cond:
+ // result: (MOVLconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SBBLcarrymask (FlagGT_ULT))
+ // cond:
+ // result: (MOVLconst [-1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = -1
+ return true
+ }
+ // match: (SBBLcarrymask (FlagGT_UGT))
+ // cond:
+ // result: (MOVLconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SBBQcarrymask(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SBBQcarrymask (FlagEQ))
+ // cond:
+ // result: (MOVQconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SBBQcarrymask (FlagLT_ULT))
+ // cond:
+ // result: (MOVQconst [-1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = -1
+ return true
+ }
+ // match: (SBBQcarrymask (FlagLT_UGT))
+ // cond:
+ // result: (MOVQconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SBBQcarrymask (FlagGT_ULT))
+ // cond:
+ // result: (MOVQconst [-1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = -1
+ return true
+ }
+ // match: (SBBQcarrymask (FlagGT_UGT))
+ // cond:
+ // result: (MOVQconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SETA(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SETA (InvertFlags x))
+ // cond:
+ // result: (SETB x)
+ for {
+ if v.Args[0].Op != OpAMD64InvertFlags {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64SETB)
+ v.AddArg(x)
+ return true
+ }
+ // match: (SETA (FlagEQ))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETA (FlagLT_ULT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETA (FlagLT_UGT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETA (FlagGT_ULT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETA (FlagGT_UGT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SETAE(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SETAE (InvertFlags x))
+ // cond:
+ // result: (SETBE x)
+ for {
+ if v.Args[0].Op != OpAMD64InvertFlags {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64SETBE)
+ v.AddArg(x)
+ return true
+ }
+ // match: (SETAE (FlagEQ))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETAE (FlagLT_ULT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETAE (FlagLT_UGT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETAE (FlagGT_ULT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETAE (FlagGT_UGT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SETB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SETB (InvertFlags x))
+ // cond:
+ // result: (SETA x)
+ for {
+ if v.Args[0].Op != OpAMD64InvertFlags {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64SETA)
+ v.AddArg(x)
+ return true
+ }
+ // match: (SETB (FlagEQ))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETB (FlagLT_ULT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETB (FlagLT_UGT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETB (FlagGT_ULT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETB (FlagGT_UGT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SETBE(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SETBE (InvertFlags x))
+ // cond:
+ // result: (SETAE x)
+ for {
+ if v.Args[0].Op != OpAMD64InvertFlags {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64SETAE)
+ v.AddArg(x)
+ return true
+ }
+ // match: (SETBE (FlagEQ))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETBE (FlagLT_ULT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETBE (FlagLT_UGT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETBE (FlagGT_ULT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETBE (FlagGT_UGT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SETEQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SETEQ (InvertFlags x))
+ // cond:
+ // result: (SETEQ x)
+ for {
+ if v.Args[0].Op != OpAMD64InvertFlags {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64SETEQ)
+ v.AddArg(x)
+ return true
+ }
+ // match: (SETEQ (FlagEQ))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETEQ (FlagLT_ULT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETEQ (FlagLT_UGT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETEQ (FlagGT_ULT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETEQ (FlagGT_UGT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SETG(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SETG (InvertFlags x))
+ // cond:
+ // result: (SETL x)
+ for {
+ if v.Args[0].Op != OpAMD64InvertFlags {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64SETL)
+ v.AddArg(x)
+ return true
+ }
+ // match: (SETG (FlagEQ))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETG (FlagLT_ULT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETG (FlagLT_UGT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETG (FlagGT_ULT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETG (FlagGT_UGT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SETGE(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SETGE (InvertFlags x))
+ // cond:
+ // result: (SETLE x)
+ for {
+ if v.Args[0].Op != OpAMD64InvertFlags {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64SETLE)
+ v.AddArg(x)
+ return true
+ }
+ // match: (SETGE (FlagEQ))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETGE (FlagLT_ULT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETGE (FlagLT_UGT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETGE (FlagGT_ULT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETGE (FlagGT_UGT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SETL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SETL (InvertFlags x))
+ // cond:
+ // result: (SETG x)
+ for {
+ if v.Args[0].Op != OpAMD64InvertFlags {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64SETG)
+ v.AddArg(x)
+ return true
+ }
+ // match: (SETL (FlagEQ))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETL (FlagLT_ULT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETL (FlagLT_UGT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETL (FlagGT_ULT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETL (FlagGT_UGT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SETLE(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SETLE (InvertFlags x))
+ // cond:
+ // result: (SETGE x)
+ for {
+ if v.Args[0].Op != OpAMD64InvertFlags {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64SETGE)
+ v.AddArg(x)
+ return true
+ }
+ // match: (SETLE (FlagEQ))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETLE (FlagLT_ULT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETLE (FlagLT_UGT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETLE (FlagGT_ULT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETLE (FlagGT_UGT))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SETNE(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SETNE (InvertFlags x))
+ // cond:
+ // result: (SETNE x)
+ for {
+ if v.Args[0].Op != OpAMD64InvertFlags {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64SETNE)
+ v.AddArg(x)
+ return true
+ }
+ // match: (SETNE (FlagEQ))
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ if v.Args[0].Op != OpAMD64FlagEQ {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (SETNE (FlagLT_ULT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETNE (FlagLT_UGT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETNE (FlagGT_ULT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (SETNE (FlagGT_UGT))
+ // cond:
+ // result: (MOVBconst [1])
+ for {
+ if v.Args[0].Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 1
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SHLB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SHLB x (MOVQconst [c]))
+ // cond:
+ // result: (SHLBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLB x (MOVLconst [c]))
+ // cond:
+ // result: (SHLBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLB x (MOVWconst [c]))
+ // cond:
+ // result: (SHLBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLB x (MOVBconst [c]))
+ // cond:
+ // result: (SHLBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SHLL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SHLL x (MOVQconst [c]))
+ // cond:
+ // result: (SHLLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLL x (MOVLconst [c]))
+ // cond:
+ // result: (SHLLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLL x (MOVWconst [c]))
+ // cond:
+ // result: (SHLLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLL x (MOVBconst [c]))
+ // cond:
+ // result: (SHLLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SHLQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SHLQ x (MOVQconst [c]))
+ // cond:
+ // result: (SHLQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLQ x (MOVLconst [c]))
+ // cond:
+ // result: (SHLQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLQ x (MOVWconst [c]))
+ // cond:
+ // result: (SHLQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLQ x (MOVBconst [c]))
+ // cond:
+ // result: (SHLQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SHLW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SHLW x (MOVQconst [c]))
+ // cond:
+ // result: (SHLWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLW x (MOVLconst [c]))
+ // cond:
+ // result: (SHLWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLW x (MOVWconst [c]))
+ // cond:
+ // result: (SHLWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHLW x (MOVBconst [c]))
+ // cond:
+ // result: (SHLWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHLWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SHRB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SHRB x (MOVQconst [c]))
+ // cond:
+ // result: (SHRBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRB x (MOVLconst [c]))
+ // cond:
+ // result: (SHRBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRB x (MOVWconst [c]))
+ // cond:
+ // result: (SHRBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRB x (MOVBconst [c]))
+ // cond:
+ // result: (SHRBconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRBconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SHRL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SHRL x (MOVQconst [c]))
+ // cond:
+ // result: (SHRLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRL x (MOVLconst [c]))
+ // cond:
+ // result: (SHRLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRL x (MOVWconst [c]))
+ // cond:
+ // result: (SHRLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRL x (MOVBconst [c]))
+ // cond:
+ // result: (SHRLconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRLconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SHRQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SHRQ x (MOVQconst [c]))
+ // cond:
+ // result: (SHRQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRQ x (MOVLconst [c]))
+ // cond:
+ // result: (SHRQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRQ x (MOVWconst [c]))
+ // cond:
+ // result: (SHRQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRQ x (MOVBconst [c]))
+ // cond:
+ // result: (SHRQconst [c&63] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRQconst)
+ v.AuxInt = c & 63
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SHRW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SHRW x (MOVQconst [c]))
+ // cond:
+ // result: (SHRWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRW x (MOVLconst [c]))
+ // cond:
+ // result: (SHRWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRW x (MOVWconst [c]))
+ // cond:
+ // result: (SHRWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ // match: (SHRW x (MOVBconst [c]))
+ // cond:
+ // result: (SHRWconst [c&31] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SHRWconst)
+ v.AuxInt = c & 31
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SUBB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SUBB x (MOVBconst [c]))
+ // cond:
+ // result: (SUBBconst x [c])
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SUBBconst)
+ v.AddArg(x)
+ v.AuxInt = c
+ return true
+ }
+ // match: (SUBB (MOVBconst [c]) x)
+ // cond:
+ // result: (NEGB (SUBBconst <v.Type> x [c]))
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64NEGB)
+ v0 := b.NewValue0(v.Line, OpAMD64SUBBconst, v.Type)
+ v0.AddArg(x)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ return true
+ }
+ // match: (SUBB x x)
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SUBBconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SUBBconst [c] x)
+ // cond: int8(c) == 0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int8(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (SUBBconst [c] (MOVBconst [d]))
+ // cond:
+ // result: (MOVBconst [d-c])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = d - c
+ return true
+ }
+ // match: (SUBBconst [c] (SUBBconst [d] x))
+ // cond:
+ // result: (ADDBconst [-c-d] x)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64SUBBconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64ADDBconst)
+ v.AuxInt = -c - d
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SUBL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SUBL x (MOVLconst [c]))
+ // cond:
+ // result: (SUBLconst x [c])
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SUBLconst)
+ v.AddArg(x)
+ v.AuxInt = c
+ return true
+ }
+ // match: (SUBL (MOVLconst [c]) x)
+ // cond:
+ // result: (NEGL (SUBLconst <v.Type> x [c]))
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64NEGL)
+ v0 := b.NewValue0(v.Line, OpAMD64SUBLconst, v.Type)
+ v0.AddArg(x)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ return true
+ }
+ // match: (SUBL x x)
+ // cond:
+ // result: (MOVLconst [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SUBLconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SUBLconst [c] x)
+ // cond: int32(c) == 0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int32(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (SUBLconst [c] (MOVLconst [d]))
+ // cond:
+ // result: (MOVLconst [d-c])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = d - c
+ return true
+ }
+ // match: (SUBLconst [c] (SUBLconst [d] x))
+ // cond:
+ // result: (ADDLconst [-c-d] x)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64SUBLconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64ADDLconst)
+ v.AuxInt = -c - d
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SUBQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SUBQ x (MOVQconst [c]))
+ // cond: is32Bit(c)
+ // result: (SUBQconst x [c])
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64SUBQconst)
+ v.AddArg(x)
+ v.AuxInt = c
+ return true
+ }
+ // match: (SUBQ (MOVQconst [c]) x)
+ // cond: is32Bit(c)
+ // result: (NEGQ (SUBQconst <v.Type> x [c]))
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64NEGQ)
+ v0 := b.NewValue0(v.Line, OpAMD64SUBQconst, v.Type)
+ v0.AddArg(x)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ return true
+ }
+ // match: (SUBQ x x)
+ // cond:
+ // result: (MOVQconst [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SUBQconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SUBQconst [0] x)
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 0 {
+ break
+ }
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (SUBQconst [c] (MOVQconst [d]))
+ // cond:
+ // result: (MOVQconst [d-c])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = d - c
+ return true
+ }
+ // match: (SUBQconst [c] (SUBQconst [d] x))
+ // cond:
+ // result: (ADDQconst [-c-d] x)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64SUBQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64ADDQconst)
+ v.AuxInt = -c - d
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SUBW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SUBW x (MOVWconst [c]))
+ // cond:
+ // result: (SUBWconst x [c])
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64SUBWconst)
+ v.AddArg(x)
+ v.AuxInt = c
+ return true
+ }
+ // match: (SUBW (MOVWconst [c]) x)
+ // cond:
+ // result: (NEGW (SUBWconst <v.Type> x [c]))
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64NEGW)
+ v0 := b.NewValue0(v.Line, OpAMD64SUBWconst, v.Type)
+ v0.AddArg(x)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ return true
+ }
+ // match: (SUBW x x)
+ // cond:
+ // result: (MOVWconst [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64SUBWconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SUBWconst [c] x)
+ // cond: int16(c) == 0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int16(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (SUBWconst [c] (MOVWconst [d]))
+ // cond:
+ // result: (MOVWconst [d-c])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = d - c
+ return true
+ }
+ // match: (SUBWconst [c] (SUBWconst [d] x))
+ // cond:
+ // result: (ADDWconst [-c-d] x)
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64SUBWconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ x := v.Args[0].Args[0]
+ v.reset(OpAMD64ADDWconst)
+ v.AuxInt = -c - d
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSignExt16to32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SignExt16to32 x)
+ // cond:
+ // result: (MOVWQSX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVWQSX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSignExt16to64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SignExt16to64 x)
+ // cond:
+ // result: (MOVWQSX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVWQSX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSignExt32to64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SignExt32to64 x)
+ // cond:
+ // result: (MOVLQSX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVLQSX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSignExt8to16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SignExt8to16 x)
+ // cond:
+ // result: (MOVBQSX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVBQSX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSignExt8to32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SignExt8to32 x)
+ // cond:
+ // result: (MOVBQSX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVBQSX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSignExt8to64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SignExt8to64 x)
+ // cond:
+ // result: (MOVBQSX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVBQSX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSqrt(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Sqrt x)
+ // cond:
+ // result: (SQRTSD x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64SQRTSD)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpStaticCall(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (StaticCall [argwid] {target} mem)
+ // cond:
+ // result: (CALLstatic [argwid] {target} mem)
+ for {
+ argwid := v.AuxInt
+ target := v.Aux
+ mem := v.Args[0]
+ v.reset(OpAMD64CALLstatic)
+ v.AuxInt = argwid
+ v.Aux = target
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpStore(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Store [8] ptr val mem)
+ // cond: is64BitFloat(val.Type)
+ // result: (MOVSDstore ptr val mem)
+ for {
+ if v.AuxInt != 8 {
+ break
+ }
+ ptr := v.Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(is64BitFloat(val.Type)) {
+ break
+ }
+ v.reset(OpAMD64MOVSDstore)
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Store [4] ptr val mem)
+ // cond: is32BitFloat(val.Type)
+ // result: (MOVSSstore ptr val mem)
+ for {
+ if v.AuxInt != 4 {
+ break
+ }
+ ptr := v.Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ if !(is32BitFloat(val.Type)) {
+ break
+ }
+ v.reset(OpAMD64MOVSSstore)
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Store [8] ptr val mem)
+ // cond:
+ // result: (MOVQstore ptr val mem)
+ for {
+ if v.AuxInt != 8 {
+ break
+ }
+ ptr := v.Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVQstore)
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Store [4] ptr val mem)
+ // cond:
+ // result: (MOVLstore ptr val mem)
+ for {
+ if v.AuxInt != 4 {
+ break
+ }
+ ptr := v.Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVLstore)
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Store [2] ptr val mem)
+ // cond:
+ // result: (MOVWstore ptr val mem)
+ for {
+ if v.AuxInt != 2 {
+ break
+ }
+ ptr := v.Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVWstore)
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Store [1] ptr val mem)
+ // cond:
+ // result: (MOVBstore ptr val mem)
+ for {
+ if v.AuxInt != 1 {
+ break
+ }
+ ptr := v.Args[0]
+ val := v.Args[1]
+ mem := v.Args[2]
+ v.reset(OpAMD64MOVBstore)
+ v.AddArg(ptr)
+ v.AddArg(val)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSub16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Sub16 x y)
+ // cond:
+ // result: (SUBW x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SUBW)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSub32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Sub32 x y)
+ // cond:
+ // result: (SUBL x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SUBL)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSub32F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Sub32F x y)
+ // cond:
+ // result: (SUBSS x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SUBSS)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSub64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Sub64 x y)
+ // cond:
+ // result: (SUBQ x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SUBQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSub64F(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Sub64F x y)
+ // cond:
+ // result: (SUBSD x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SUBSD)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSub8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Sub8 x y)
+ // cond:
+ // result: (SUBB x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SUBB)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpSubPtr(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SubPtr x y)
+ // cond:
+ // result: (SUBQ x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64SUBQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpTrunc16to8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc16to8 x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpTrunc32to16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc32to16 x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpTrunc32to8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc32to8 x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpTrunc64to16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc64to16 x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpTrunc64to32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc64to32 x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpTrunc64to8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc64to8 x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64XORB(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (XORB x (MOVBconst [c]))
+ // cond:
+ // result: (XORBconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64XORBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORB (MOVBconst [c]) x)
+ // cond:
+ // result: (XORBconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64XORBconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORB x x)
+ // cond:
+ // result: (MOVBconst [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64XORBconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (XORBconst [c] x)
+ // cond: int8(c)==0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int8(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORBconst [c] (MOVBconst [d]))
+ // cond:
+ // result: (MOVBconst [c^d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVBconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVBconst)
+ v.AuxInt = c ^ d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64XORL(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (XORL x (MOVLconst [c]))
+ // cond:
+ // result: (XORLconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64XORLconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORL (MOVLconst [c]) x)
+ // cond:
+ // result: (XORLconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64XORLconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORL x x)
+ // cond:
+ // result: (MOVLconst [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64XORLconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (XORLconst [c] x)
+ // cond: int32(c)==0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int32(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORLconst [c] (MOVLconst [d]))
+ // cond:
+ // result: (MOVLconst [c^d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVLconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVLconst)
+ v.AuxInt = c ^ d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64XORQ(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (XORQ x (MOVQconst [c]))
+ // cond: is32Bit(c)
+ // result: (XORQconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64XORQconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORQ (MOVQconst [c]) x)
+ // cond: is32Bit(c)
+ // result: (XORQconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ if !(is32Bit(c)) {
+ break
+ }
+ v.reset(OpAMD64XORQconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORQ x x)
+ // cond:
+ // result: (MOVQconst [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64XORQconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (XORQconst [0] x)
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 0 {
+ break
+ }
+ x := v.Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORQconst [c] (MOVQconst [d]))
+ // cond:
+ // result: (MOVQconst [c^d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVQconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVQconst)
+ v.AuxInt = c ^ d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64XORW(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (XORW x (MOVWconst [c]))
+ // cond:
+ // result: (XORWconst [c] x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpAMD64XORWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORW (MOVWconst [c]) x)
+ // cond:
+ // result: (XORWconst [c] x)
+ for {
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ c := v.Args[0].AuxInt
+ x := v.Args[1]
+ v.reset(OpAMD64XORWconst)
+ v.AuxInt = c
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORW x x)
+ // cond:
+ // result: (MOVWconst [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpAMD64XORWconst(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (XORWconst [c] x)
+ // cond: int16(c)==0
+ // result: x
+ for {
+ c := v.AuxInt
+ x := v.Args[0]
+ if !(int16(c) == 0) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (XORWconst [c] (MOVWconst [d]))
+ // cond:
+ // result: (MOVWconst [c^d])
+ for {
+ c := v.AuxInt
+ if v.Args[0].Op != OpAMD64MOVWconst {
+ break
+ }
+ d := v.Args[0].AuxInt
+ v.reset(OpAMD64MOVWconst)
+ v.AuxInt = c ^ d
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpXor16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Xor16 x y)
+ // cond:
+ // result: (XORW x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64XORW)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpXor32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Xor32 x y)
+ // cond:
+ // result: (XORL x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64XORL)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpXor64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Xor64 x y)
+ // cond:
+ // result: (XORQ x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64XORQ)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpXor8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Xor8 x y)
+ // cond:
+ // result: (XORB x y)
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpAMD64XORB)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpZero(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Zero [0] _ mem)
+ // cond:
+ // result: mem
+ for {
+ if v.AuxInt != 0 {
+ break
+ }
+ mem := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = mem.Type
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Zero [1] destptr mem)
+ // cond:
+ // result: (MOVBstoreconst [0] destptr mem)
+ for {
+ if v.AuxInt != 1 {
+ break
+ }
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVBstoreconst)
+ v.AuxInt = 0
+ v.AddArg(destptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Zero [2] destptr mem)
+ // cond:
+ // result: (MOVWstoreconst [0] destptr mem)
+ for {
+ if v.AuxInt != 2 {
+ break
+ }
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVWstoreconst)
+ v.AuxInt = 0
+ v.AddArg(destptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Zero [4] destptr mem)
+ // cond:
+ // result: (MOVLstoreconst [0] destptr mem)
+ for {
+ if v.AuxInt != 4 {
+ break
+ }
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVLstoreconst)
+ v.AuxInt = 0
+ v.AddArg(destptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Zero [8] destptr mem)
+ // cond:
+ // result: (MOVQstoreconst [0] destptr mem)
+ for {
+ if v.AuxInt != 8 {
+ break
+ }
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVQstoreconst)
+ v.AuxInt = 0
+ v.AddArg(destptr)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Zero [3] destptr mem)
+ // cond:
+ // result: (MOVBstoreconst [makeValAndOff(0,2)] destptr (MOVWstoreconst [0] destptr mem))
+ for {
+ if v.AuxInt != 3 {
+ break
+ }
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVBstoreconst)
+ v.AuxInt = makeValAndOff(0, 2)
+ v.AddArg(destptr)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVWstoreconst, TypeMem)
+ v0.AuxInt = 0
+ v0.AddArg(destptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Zero [5] destptr mem)
+ // cond:
+ // result: (MOVBstoreconst [makeValAndOff(0,4)] destptr (MOVLstoreconst [0] destptr mem))
+ for {
+ if v.AuxInt != 5 {
+ break
+ }
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVBstoreconst)
+ v.AuxInt = makeValAndOff(0, 4)
+ v.AddArg(destptr)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVLstoreconst, TypeMem)
+ v0.AuxInt = 0
+ v0.AddArg(destptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Zero [6] destptr mem)
+ // cond:
+ // result: (MOVWstoreconst [makeValAndOff(0,4)] destptr (MOVLstoreconst [0] destptr mem))
+ for {
+ if v.AuxInt != 6 {
+ break
+ }
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVWstoreconst)
+ v.AuxInt = makeValAndOff(0, 4)
+ v.AddArg(destptr)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVLstoreconst, TypeMem)
+ v0.AuxInt = 0
+ v0.AddArg(destptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Zero [7] destptr mem)
+ // cond:
+ // result: (MOVLstoreconst [makeValAndOff(0,3)] destptr (MOVLstoreconst [0] destptr mem))
+ for {
+ if v.AuxInt != 7 {
+ break
+ }
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVLstoreconst)
+ v.AuxInt = makeValAndOff(0, 3)
+ v.AddArg(destptr)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVLstoreconst, TypeMem)
+ v0.AuxInt = 0
+ v0.AddArg(destptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Zero [size] destptr mem)
+ // cond: size%8 != 0 && size > 8
+ // result: (Zero [size-size%8] (ADDQconst destptr [size%8]) (MOVQstoreconst [0] destptr mem))
+ for {
+ size := v.AuxInt
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ if !(size%8 != 0 && size > 8) {
+ break
+ }
+ v.reset(OpZero)
+ v.AuxInt = size - size%8
+ v0 := b.NewValue0(v.Line, OpAMD64ADDQconst, config.fe.TypeUInt64())
+ v0.AddArg(destptr)
+ v0.AuxInt = size % 8
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64MOVQstoreconst, TypeMem)
+ v1.AuxInt = 0
+ v1.AddArg(destptr)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Zero [16] destptr mem)
+ // cond:
+ // result: (MOVQstoreconst [makeValAndOff(0,8)] destptr (MOVQstoreconst [0] destptr mem))
+ for {
+ if v.AuxInt != 16 {
+ break
+ }
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVQstoreconst)
+ v.AuxInt = makeValAndOff(0, 8)
+ v.AddArg(destptr)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVQstoreconst, TypeMem)
+ v0.AuxInt = 0
+ v0.AddArg(destptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Zero [24] destptr mem)
+ // cond:
+ // result: (MOVQstoreconst [makeValAndOff(0,16)] destptr (MOVQstoreconst [makeValAndOff(0,8)] destptr (MOVQstoreconst [0] destptr mem)))
+ for {
+ if v.AuxInt != 24 {
+ break
+ }
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVQstoreconst)
+ v.AuxInt = makeValAndOff(0, 16)
+ v.AddArg(destptr)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVQstoreconst, TypeMem)
+ v0.AuxInt = makeValAndOff(0, 8)
+ v0.AddArg(destptr)
+ v1 := b.NewValue0(v.Line, OpAMD64MOVQstoreconst, TypeMem)
+ v1.AuxInt = 0
+ v1.AddArg(destptr)
+ v1.AddArg(mem)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Zero [32] destptr mem)
+ // cond:
+ // result: (MOVQstoreconst [makeValAndOff(0,24)] destptr (MOVQstoreconst [makeValAndOff(0,16)] destptr (MOVQstoreconst [makeValAndOff(0,8)] destptr (MOVQstoreconst [0] destptr mem))))
+ for {
+ if v.AuxInt != 32 {
+ break
+ }
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ v.reset(OpAMD64MOVQstoreconst)
+ v.AuxInt = makeValAndOff(0, 24)
+ v.AddArg(destptr)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVQstoreconst, TypeMem)
+ v0.AuxInt = makeValAndOff(0, 16)
+ v0.AddArg(destptr)
+ v1 := b.NewValue0(v.Line, OpAMD64MOVQstoreconst, TypeMem)
+ v1.AuxInt = makeValAndOff(0, 8)
+ v1.AddArg(destptr)
+ v2 := b.NewValue0(v.Line, OpAMD64MOVQstoreconst, TypeMem)
+ v2.AuxInt = 0
+ v2.AddArg(destptr)
+ v2.AddArg(mem)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Zero [size] destptr mem)
+ // cond: size <= 1024 && size%8 == 0 && size%16 != 0
+ // result: (Zero [size-8] (ADDQconst [8] destptr) (MOVQstore destptr (MOVQconst [0]) mem))
+ for {
+ size := v.AuxInt
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ if !(size <= 1024 && size%8 == 0 && size%16 != 0) {
+ break
+ }
+ v.reset(OpZero)
+ v.AuxInt = size - 8
+ v0 := b.NewValue0(v.Line, OpAMD64ADDQconst, config.fe.TypeUInt64())
+ v0.AuxInt = 8
+ v0.AddArg(destptr)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64MOVQstore, TypeMem)
+ v1.AddArg(destptr)
+ v2 := b.NewValue0(v.Line, OpAMD64MOVQconst, config.fe.TypeUInt64())
+ v2.AuxInt = 0
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Zero [size] destptr mem)
+ // cond: size <= 1024 && size%16 == 0
+ // result: (DUFFZERO [duffStart(size)] (ADDQconst [duffAdj(size)] destptr) (MOVOconst [0]) mem)
+ for {
+ size := v.AuxInt
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ if !(size <= 1024 && size%16 == 0) {
+ break
+ }
+ v.reset(OpAMD64DUFFZERO)
+ v.AuxInt = duffStart(size)
+ v0 := b.NewValue0(v.Line, OpAMD64ADDQconst, config.fe.TypeUInt64())
+ v0.AuxInt = duffAdj(size)
+ v0.AddArg(destptr)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64MOVOconst, TypeInt128)
+ v1.AuxInt = 0
+ v.AddArg(v1)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Zero [size] destptr mem)
+ // cond: size > 1024 && size%8 == 0
+ // result: (REPSTOSQ destptr (MOVQconst [size/8]) (MOVQconst [0]) mem)
+ for {
+ size := v.AuxInt
+ destptr := v.Args[0]
+ mem := v.Args[1]
+ if !(size > 1024 && size%8 == 0) {
+ break
+ }
+ v.reset(OpAMD64REPSTOSQ)
+ v.AddArg(destptr)
+ v0 := b.NewValue0(v.Line, OpAMD64MOVQconst, config.fe.TypeUInt64())
+ v0.AuxInt = size / 8
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpAMD64MOVQconst, config.fe.TypeUInt64())
+ v1.AuxInt = 0
+ v.AddArg(v1)
+ v.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpZeroExt16to32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ZeroExt16to32 x)
+ // cond:
+ // result: (MOVWQZX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVWQZX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpZeroExt16to64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ZeroExt16to64 x)
+ // cond:
+ // result: (MOVWQZX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVWQZX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpZeroExt32to64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ZeroExt32to64 x)
+ // cond:
+ // result: (MOVLQZX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVLQZX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpZeroExt8to16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ZeroExt8to16 x)
+ // cond:
+ // result: (MOVBQZX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVBQZX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpZeroExt8to32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ZeroExt8to32 x)
+ // cond:
+ // result: (MOVBQZX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVBQZX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValueAMD64_OpZeroExt8to64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ZeroExt8to64 x)
+ // cond:
+ // result: (MOVBQZX x)
+ for {
+ x := v.Args[0]
+ v.reset(OpAMD64MOVBQZX)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteBlockAMD64(b *Block) bool {
+ switch b.Kind {
+ case BlockAMD64EQ:
+ // match: (EQ (InvertFlags cmp) yes no)
+ // cond:
+ // result: (EQ cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64InvertFlags {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64EQ
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (EQ (FlagEQ) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagEQ {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (EQ (FlagLT_ULT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (EQ (FlagLT_UGT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (EQ (FlagGT_ULT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (EQ (FlagGT_UGT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ case BlockAMD64GE:
+ // match: (GE (InvertFlags cmp) yes no)
+ // cond:
+ // result: (LE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64InvertFlags {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64LE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (GE (FlagEQ) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagEQ {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (GE (FlagLT_ULT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (GE (FlagLT_UGT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (GE (FlagGT_ULT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (GE (FlagGT_UGT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ case BlockAMD64GT:
+ // match: (GT (InvertFlags cmp) yes no)
+ // cond:
+ // result: (LT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64InvertFlags {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64LT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (GT (FlagEQ) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagEQ {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (GT (FlagLT_ULT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (GT (FlagLT_UGT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (GT (FlagGT_ULT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (GT (FlagGT_UGT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ case BlockIf:
+ // match: (If (SETL cmp) yes no)
+ // cond:
+ // result: (LT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETL {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64LT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETLE cmp) yes no)
+ // cond:
+ // result: (LE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETLE {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64LE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETG cmp) yes no)
+ // cond:
+ // result: (GT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETG {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64GT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETGE cmp) yes no)
+ // cond:
+ // result: (GE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETGE {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64GE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETEQ cmp) yes no)
+ // cond:
+ // result: (EQ cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETEQ {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64EQ
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETNE cmp) yes no)
+ // cond:
+ // result: (NE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETNE {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64NE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETB cmp) yes no)
+ // cond:
+ // result: (ULT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETB {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64ULT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETBE cmp) yes no)
+ // cond:
+ // result: (ULE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETBE {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64ULE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETA cmp) yes no)
+ // cond:
+ // result: (UGT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETA {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64UGT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETAE cmp) yes no)
+ // cond:
+ // result: (UGE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETAE {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64UGE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETGF cmp) yes no)
+ // cond:
+ // result: (UGT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETGF {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64UGT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETGEF cmp) yes no)
+ // cond:
+ // result: (UGE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETGEF {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64UGE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETEQF cmp) yes no)
+ // cond:
+ // result: (EQF cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETEQF {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64EQF
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (SETNEF cmp) yes no)
+ // cond:
+ // result: (NEF cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64SETNEF {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64NEF
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If cond yes no)
+ // cond:
+ // result: (NE (TESTB cond cond) yes no)
+ for {
+ v := b.Control
+ cond := v
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64NE
+ v0 := b.NewValue0(v.Line, OpAMD64TESTB, TypeFlags)
+ v0.AddArg(cond)
+ v0.AddArg(cond)
+ b.Control = v0
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ case BlockAMD64LE:
+ // match: (LE (InvertFlags cmp) yes no)
+ // cond:
+ // result: (GE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64InvertFlags {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64GE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (LE (FlagEQ) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagEQ {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (LE (FlagLT_ULT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (LE (FlagLT_UGT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (LE (FlagGT_ULT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (LE (FlagGT_UGT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ case BlockAMD64LT:
+ // match: (LT (InvertFlags cmp) yes no)
+ // cond:
+ // result: (GT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64InvertFlags {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64GT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (LT (FlagEQ) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagEQ {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (LT (FlagLT_ULT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (LT (FlagLT_UGT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (LT (FlagGT_ULT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (LT (FlagGT_UGT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ case BlockAMD64NE:
+ // match: (NE (TESTB (SETL cmp)) yes no)
+ // cond:
+ // result: (LT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETL {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64LT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETLE cmp)) yes no)
+ // cond:
+ // result: (LE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETLE {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64LE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETG cmp)) yes no)
+ // cond:
+ // result: (GT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETG {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64GT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETGE cmp)) yes no)
+ // cond:
+ // result: (GE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETGE {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64GE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETEQ cmp)) yes no)
+ // cond:
+ // result: (EQ cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETEQ {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64EQ
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETNE cmp)) yes no)
+ // cond:
+ // result: (NE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETNE {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64NE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETB cmp)) yes no)
+ // cond:
+ // result: (ULT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETB {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64ULT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETBE cmp)) yes no)
+ // cond:
+ // result: (ULE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETBE {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64ULE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETA cmp)) yes no)
+ // cond:
+ // result: (UGT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETA {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64UGT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETAE cmp)) yes no)
+ // cond:
+ // result: (UGE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETAE {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64UGE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETGF cmp)) yes no)
+ // cond:
+ // result: (UGT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETGF {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64UGT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETGEF cmp)) yes no)
+ // cond:
+ // result: (UGE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETGEF {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64UGE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETEQF cmp)) yes no)
+ // cond:
+ // result: (EQF cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETEQF {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64EQF
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (TESTB (SETNEF cmp)) yes no)
+ // cond:
+ // result: (NEF cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64TESTB {
+ break
+ }
+ if v.Args[0].Op != OpAMD64SETNEF {
+ break
+ }
+ cmp := v.Args[0].Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64NEF
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (InvertFlags cmp) yes no)
+ // cond:
+ // result: (NE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64InvertFlags {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64NE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (FlagEQ) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagEQ {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (NE (FlagLT_ULT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (FlagLT_UGT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (FlagGT_ULT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (NE (FlagGT_UGT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ case BlockAMD64UGE:
+ // match: (UGE (InvertFlags cmp) yes no)
+ // cond:
+ // result: (ULE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64InvertFlags {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64ULE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (UGE (FlagEQ) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagEQ {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (UGE (FlagLT_ULT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (UGE (FlagLT_UGT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (UGE (FlagGT_ULT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (UGE (FlagGT_UGT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ case BlockAMD64UGT:
+ // match: (UGT (InvertFlags cmp) yes no)
+ // cond:
+ // result: (ULT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64InvertFlags {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64ULT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (UGT (FlagEQ) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagEQ {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (UGT (FlagLT_ULT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (UGT (FlagLT_UGT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (UGT (FlagGT_ULT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (UGT (FlagGT_UGT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ case BlockAMD64ULE:
+ // match: (ULE (InvertFlags cmp) yes no)
+ // cond:
+ // result: (UGE cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64InvertFlags {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64UGE
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (ULE (FlagEQ) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagEQ {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (ULE (FlagLT_ULT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (ULE (FlagLT_UGT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (ULE (FlagGT_ULT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (ULE (FlagGT_UGT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ case BlockAMD64ULT:
+ // match: (ULT (InvertFlags cmp) yes no)
+ // cond:
+ // result: (UGT cmp yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64InvertFlags {
+ break
+ }
+ cmp := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockAMD64UGT
+ b.Control = cmp
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (ULT (FlagEQ) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagEQ {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (ULT (FlagLT_ULT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (ULT (FlagLT_UGT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagLT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (ULT (FlagGT_ULT) yes no)
+ // cond:
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_ULT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (ULT (FlagGT_UGT) yes no)
+ // cond:
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpAMD64FlagGT_UGT {
+ break
+ }
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ }
+ return false
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import "testing"
+
+// TestNlzNto tests nlz/nto of the same number which is used in some of
+// the rewrite rules.
+func TestNlzNto(t *testing.T) {
+	// construct the bit pattern 000...111, nlz(x) + nto(x) = 64
+ var x int64
+ for i := int64(0); i < 64; i++ {
+ if got := nto(x); got != i {
+ t.Errorf("expected nto(0x%X) = %d, got %d", x, i, got)
+ }
+ if got := nlz(x); got != 64-i {
+ t.Errorf("expected nlz(0x%X) = %d, got %d", x, 64-i, got)
+ }
+ x = (x << 1) | 1
+ }
+
+ x = 0
+ // construct the bit pattern 000...111, with bit 33 set as well.
+ for i := int64(0); i < 64; i++ {
+ tx := x | (1 << 32)
+		// nto should be the number of bits we've shifted on, with an extra bit
+		// at iteration 32
+ ntoExp := i
+ if ntoExp == 32 {
+ ntoExp = 33
+ }
+ if got := nto(tx); got != ntoExp {
+ t.Errorf("expected nto(0x%X) = %d, got %d", tx, ntoExp, got)
+ }
+
+		// since bit 33 is set, nlz can be no greater than 31
+ nlzExp := 64 - i
+ if nlzExp > 31 {
+ nlzExp = 31
+ }
+ if got := nlz(tx); got != nlzExp {
+ t.Errorf("expected nlz(0x%X) = %d, got %d", tx, nlzExp, got)
+ }
+ x = (x << 1) | 1
+ }
+
+}
+
+func TestNlz(t *testing.T) {
+ var nlzTests = []struct {
+ v int64
+ exp int64
+ }{{0x00, 64},
+ {0x01, 63},
+ {0x0F, 60},
+ {0xFF, 56},
+ {0xffffFFFF, 32},
+ {-0x01, 0}}
+
+ for _, tc := range nlzTests {
+ if got := nlz(tc.v); got != tc.exp {
+ t.Errorf("expected nlz(0x%X) = %d, got %d", tc.v, tc.exp, got)
+ }
+ }
+}
+
+func TestNto(t *testing.T) {
+ var ntoTests = []struct {
+ v int64
+ exp int64
+ }{{0x00, 0},
+ {0x01, 1},
+ {0x0F, 4},
+ {0xFF, 8},
+ {0xffffFFFF, 32},
+ {-0x01, 64}}
+
+ for _, tc := range ntoTests {
+ if got := nto(tc.v); got != tc.exp {
+ t.Errorf("expected nto(0x%X) = %d, got %d", tc.v, tc.exp, got)
+ }
+ }
+}
+
+func TestLog2(t *testing.T) {
+ var log2Tests = []struct {
+ v int64
+ exp int64
+ }{{0, -1}, // nlz expects log2(0) == -1
+ {1, 0},
+ {2, 1},
+ {4, 2},
+ {1024, 10}}
+
+ for _, tc := range log2Tests {
+ if got := log2(tc.v); got != tc.exp {
+ t.Errorf("expected log2(%d) = %d, got %d", tc.v, tc.exp, got)
+ }
+ }
+}
--- /dev/null
+// autogenerated from gen/generic.rules: do not edit!
+// generated with: cd gen; go run *.go
+
+package ssa
+
+import "math"
+
+var _ = math.MinInt8 // in case not otherwise used
+func rewriteValuegeneric(v *Value, config *Config) bool {
+ switch v.Op {
+ case OpAdd16:
+ return rewriteValuegeneric_OpAdd16(v, config)
+ case OpAdd32:
+ return rewriteValuegeneric_OpAdd32(v, config)
+ case OpAdd64:
+ return rewriteValuegeneric_OpAdd64(v, config)
+ case OpAdd8:
+ return rewriteValuegeneric_OpAdd8(v, config)
+ case OpAnd16:
+ return rewriteValuegeneric_OpAnd16(v, config)
+ case OpAnd32:
+ return rewriteValuegeneric_OpAnd32(v, config)
+ case OpAnd64:
+ return rewriteValuegeneric_OpAnd64(v, config)
+ case OpAnd8:
+ return rewriteValuegeneric_OpAnd8(v, config)
+ case OpArg:
+ return rewriteValuegeneric_OpArg(v, config)
+ case OpArrayIndex:
+ return rewriteValuegeneric_OpArrayIndex(v, config)
+ case OpCom16:
+ return rewriteValuegeneric_OpCom16(v, config)
+ case OpCom32:
+ return rewriteValuegeneric_OpCom32(v, config)
+ case OpCom64:
+ return rewriteValuegeneric_OpCom64(v, config)
+ case OpCom8:
+ return rewriteValuegeneric_OpCom8(v, config)
+ case OpComplexImag:
+ return rewriteValuegeneric_OpComplexImag(v, config)
+ case OpComplexReal:
+ return rewriteValuegeneric_OpComplexReal(v, config)
+ case OpConstInterface:
+ return rewriteValuegeneric_OpConstInterface(v, config)
+ case OpConstSlice:
+ return rewriteValuegeneric_OpConstSlice(v, config)
+ case OpConstString:
+ return rewriteValuegeneric_OpConstString(v, config)
+ case OpConvert:
+ return rewriteValuegeneric_OpConvert(v, config)
+ case OpDiv64:
+ return rewriteValuegeneric_OpDiv64(v, config)
+ case OpDiv64u:
+ return rewriteValuegeneric_OpDiv64u(v, config)
+ case OpEq16:
+ return rewriteValuegeneric_OpEq16(v, config)
+ case OpEq32:
+ return rewriteValuegeneric_OpEq32(v, config)
+ case OpEq64:
+ return rewriteValuegeneric_OpEq64(v, config)
+ case OpEq8:
+ return rewriteValuegeneric_OpEq8(v, config)
+ case OpEqInter:
+ return rewriteValuegeneric_OpEqInter(v, config)
+ case OpEqPtr:
+ return rewriteValuegeneric_OpEqPtr(v, config)
+ case OpEqSlice:
+ return rewriteValuegeneric_OpEqSlice(v, config)
+ case OpGeq16:
+ return rewriteValuegeneric_OpGeq16(v, config)
+ case OpGeq16U:
+ return rewriteValuegeneric_OpGeq16U(v, config)
+ case OpGeq32:
+ return rewriteValuegeneric_OpGeq32(v, config)
+ case OpGeq32U:
+ return rewriteValuegeneric_OpGeq32U(v, config)
+ case OpGeq64:
+ return rewriteValuegeneric_OpGeq64(v, config)
+ case OpGeq64U:
+ return rewriteValuegeneric_OpGeq64U(v, config)
+ case OpGeq8:
+ return rewriteValuegeneric_OpGeq8(v, config)
+ case OpGeq8U:
+ return rewriteValuegeneric_OpGeq8U(v, config)
+ case OpGreater16:
+ return rewriteValuegeneric_OpGreater16(v, config)
+ case OpGreater16U:
+ return rewriteValuegeneric_OpGreater16U(v, config)
+ case OpGreater32:
+ return rewriteValuegeneric_OpGreater32(v, config)
+ case OpGreater32U:
+ return rewriteValuegeneric_OpGreater32U(v, config)
+ case OpGreater64:
+ return rewriteValuegeneric_OpGreater64(v, config)
+ case OpGreater64U:
+ return rewriteValuegeneric_OpGreater64U(v, config)
+ case OpGreater8:
+ return rewriteValuegeneric_OpGreater8(v, config)
+ case OpGreater8U:
+ return rewriteValuegeneric_OpGreater8U(v, config)
+ case OpIData:
+ return rewriteValuegeneric_OpIData(v, config)
+ case OpITab:
+ return rewriteValuegeneric_OpITab(v, config)
+ case OpIsInBounds:
+ return rewriteValuegeneric_OpIsInBounds(v, config)
+ case OpIsSliceInBounds:
+ return rewriteValuegeneric_OpIsSliceInBounds(v, config)
+ case OpLeq16:
+ return rewriteValuegeneric_OpLeq16(v, config)
+ case OpLeq16U:
+ return rewriteValuegeneric_OpLeq16U(v, config)
+ case OpLeq32:
+ return rewriteValuegeneric_OpLeq32(v, config)
+ case OpLeq32U:
+ return rewriteValuegeneric_OpLeq32U(v, config)
+ case OpLeq64:
+ return rewriteValuegeneric_OpLeq64(v, config)
+ case OpLeq64U:
+ return rewriteValuegeneric_OpLeq64U(v, config)
+ case OpLeq8:
+ return rewriteValuegeneric_OpLeq8(v, config)
+ case OpLeq8U:
+ return rewriteValuegeneric_OpLeq8U(v, config)
+ case OpLess16:
+ return rewriteValuegeneric_OpLess16(v, config)
+ case OpLess16U:
+ return rewriteValuegeneric_OpLess16U(v, config)
+ case OpLess32:
+ return rewriteValuegeneric_OpLess32(v, config)
+ case OpLess32U:
+ return rewriteValuegeneric_OpLess32U(v, config)
+ case OpLess64:
+ return rewriteValuegeneric_OpLess64(v, config)
+ case OpLess64U:
+ return rewriteValuegeneric_OpLess64U(v, config)
+ case OpLess8:
+ return rewriteValuegeneric_OpLess8(v, config)
+ case OpLess8U:
+ return rewriteValuegeneric_OpLess8U(v, config)
+ case OpLoad:
+ return rewriteValuegeneric_OpLoad(v, config)
+ case OpLsh16x16:
+ return rewriteValuegeneric_OpLsh16x16(v, config)
+ case OpLsh16x32:
+ return rewriteValuegeneric_OpLsh16x32(v, config)
+ case OpLsh16x64:
+ return rewriteValuegeneric_OpLsh16x64(v, config)
+ case OpLsh16x8:
+ return rewriteValuegeneric_OpLsh16x8(v, config)
+ case OpLsh32x16:
+ return rewriteValuegeneric_OpLsh32x16(v, config)
+ case OpLsh32x32:
+ return rewriteValuegeneric_OpLsh32x32(v, config)
+ case OpLsh32x64:
+ return rewriteValuegeneric_OpLsh32x64(v, config)
+ case OpLsh32x8:
+ return rewriteValuegeneric_OpLsh32x8(v, config)
+ case OpLsh64x16:
+ return rewriteValuegeneric_OpLsh64x16(v, config)
+ case OpLsh64x32:
+ return rewriteValuegeneric_OpLsh64x32(v, config)
+ case OpLsh64x64:
+ return rewriteValuegeneric_OpLsh64x64(v, config)
+ case OpLsh64x8:
+ return rewriteValuegeneric_OpLsh64x8(v, config)
+ case OpLsh8x16:
+ return rewriteValuegeneric_OpLsh8x16(v, config)
+ case OpLsh8x32:
+ return rewriteValuegeneric_OpLsh8x32(v, config)
+ case OpLsh8x64:
+ return rewriteValuegeneric_OpLsh8x64(v, config)
+ case OpLsh8x8:
+ return rewriteValuegeneric_OpLsh8x8(v, config)
+ case OpMod64:
+ return rewriteValuegeneric_OpMod64(v, config)
+ case OpMod64u:
+ return rewriteValuegeneric_OpMod64u(v, config)
+ case OpMul16:
+ return rewriteValuegeneric_OpMul16(v, config)
+ case OpMul32:
+ return rewriteValuegeneric_OpMul32(v, config)
+ case OpMul64:
+ return rewriteValuegeneric_OpMul64(v, config)
+ case OpMul8:
+ return rewriteValuegeneric_OpMul8(v, config)
+ case OpNeg16:
+ return rewriteValuegeneric_OpNeg16(v, config)
+ case OpNeg32:
+ return rewriteValuegeneric_OpNeg32(v, config)
+ case OpNeg64:
+ return rewriteValuegeneric_OpNeg64(v, config)
+ case OpNeg8:
+ return rewriteValuegeneric_OpNeg8(v, config)
+ case OpNeq16:
+ return rewriteValuegeneric_OpNeq16(v, config)
+ case OpNeq32:
+ return rewriteValuegeneric_OpNeq32(v, config)
+ case OpNeq64:
+ return rewriteValuegeneric_OpNeq64(v, config)
+ case OpNeq8:
+ return rewriteValuegeneric_OpNeq8(v, config)
+ case OpNeqInter:
+ return rewriteValuegeneric_OpNeqInter(v, config)
+ case OpNeqPtr:
+ return rewriteValuegeneric_OpNeqPtr(v, config)
+ case OpNeqSlice:
+ return rewriteValuegeneric_OpNeqSlice(v, config)
+ case OpOr16:
+ return rewriteValuegeneric_OpOr16(v, config)
+ case OpOr32:
+ return rewriteValuegeneric_OpOr32(v, config)
+ case OpOr64:
+ return rewriteValuegeneric_OpOr64(v, config)
+ case OpOr8:
+ return rewriteValuegeneric_OpOr8(v, config)
+ case OpPhi:
+ return rewriteValuegeneric_OpPhi(v, config)
+ case OpPtrIndex:
+ return rewriteValuegeneric_OpPtrIndex(v, config)
+ case OpRsh16Ux16:
+ return rewriteValuegeneric_OpRsh16Ux16(v, config)
+ case OpRsh16Ux32:
+ return rewriteValuegeneric_OpRsh16Ux32(v, config)
+ case OpRsh16Ux64:
+ return rewriteValuegeneric_OpRsh16Ux64(v, config)
+ case OpRsh16Ux8:
+ return rewriteValuegeneric_OpRsh16Ux8(v, config)
+ case OpRsh16x16:
+ return rewriteValuegeneric_OpRsh16x16(v, config)
+ case OpRsh16x32:
+ return rewriteValuegeneric_OpRsh16x32(v, config)
+ case OpRsh16x64:
+ return rewriteValuegeneric_OpRsh16x64(v, config)
+ case OpRsh16x8:
+ return rewriteValuegeneric_OpRsh16x8(v, config)
+ case OpRsh32Ux16:
+ return rewriteValuegeneric_OpRsh32Ux16(v, config)
+ case OpRsh32Ux32:
+ return rewriteValuegeneric_OpRsh32Ux32(v, config)
+ case OpRsh32Ux64:
+ return rewriteValuegeneric_OpRsh32Ux64(v, config)
+ case OpRsh32Ux8:
+ return rewriteValuegeneric_OpRsh32Ux8(v, config)
+ case OpRsh32x16:
+ return rewriteValuegeneric_OpRsh32x16(v, config)
+ case OpRsh32x32:
+ return rewriteValuegeneric_OpRsh32x32(v, config)
+ case OpRsh32x64:
+ return rewriteValuegeneric_OpRsh32x64(v, config)
+ case OpRsh32x8:
+ return rewriteValuegeneric_OpRsh32x8(v, config)
+ case OpRsh64Ux16:
+ return rewriteValuegeneric_OpRsh64Ux16(v, config)
+ case OpRsh64Ux32:
+ return rewriteValuegeneric_OpRsh64Ux32(v, config)
+ case OpRsh64Ux64:
+ return rewriteValuegeneric_OpRsh64Ux64(v, config)
+ case OpRsh64Ux8:
+ return rewriteValuegeneric_OpRsh64Ux8(v, config)
+ case OpRsh64x16:
+ return rewriteValuegeneric_OpRsh64x16(v, config)
+ case OpRsh64x32:
+ return rewriteValuegeneric_OpRsh64x32(v, config)
+ case OpRsh64x64:
+ return rewriteValuegeneric_OpRsh64x64(v, config)
+ case OpRsh64x8:
+ return rewriteValuegeneric_OpRsh64x8(v, config)
+ case OpRsh8Ux16:
+ return rewriteValuegeneric_OpRsh8Ux16(v, config)
+ case OpRsh8Ux32:
+ return rewriteValuegeneric_OpRsh8Ux32(v, config)
+ case OpRsh8Ux64:
+ return rewriteValuegeneric_OpRsh8Ux64(v, config)
+ case OpRsh8Ux8:
+ return rewriteValuegeneric_OpRsh8Ux8(v, config)
+ case OpRsh8x16:
+ return rewriteValuegeneric_OpRsh8x16(v, config)
+ case OpRsh8x32:
+ return rewriteValuegeneric_OpRsh8x32(v, config)
+ case OpRsh8x64:
+ return rewriteValuegeneric_OpRsh8x64(v, config)
+ case OpRsh8x8:
+ return rewriteValuegeneric_OpRsh8x8(v, config)
+ case OpSliceCap:
+ return rewriteValuegeneric_OpSliceCap(v, config)
+ case OpSliceLen:
+ return rewriteValuegeneric_OpSliceLen(v, config)
+ case OpSlicePtr:
+ return rewriteValuegeneric_OpSlicePtr(v, config)
+ case OpStore:
+ return rewriteValuegeneric_OpStore(v, config)
+ case OpStringLen:
+ return rewriteValuegeneric_OpStringLen(v, config)
+ case OpStringPtr:
+ return rewriteValuegeneric_OpStringPtr(v, config)
+ case OpStructSelect:
+ return rewriteValuegeneric_OpStructSelect(v, config)
+ case OpSub16:
+ return rewriteValuegeneric_OpSub16(v, config)
+ case OpSub32:
+ return rewriteValuegeneric_OpSub32(v, config)
+ case OpSub64:
+ return rewriteValuegeneric_OpSub64(v, config)
+ case OpSub8:
+ return rewriteValuegeneric_OpSub8(v, config)
+ case OpTrunc16to8:
+ return rewriteValuegeneric_OpTrunc16to8(v, config)
+ case OpTrunc32to16:
+ return rewriteValuegeneric_OpTrunc32to16(v, config)
+ case OpTrunc32to8:
+ return rewriteValuegeneric_OpTrunc32to8(v, config)
+ case OpTrunc64to16:
+ return rewriteValuegeneric_OpTrunc64to16(v, config)
+ case OpTrunc64to32:
+ return rewriteValuegeneric_OpTrunc64to32(v, config)
+ case OpTrunc64to8:
+ return rewriteValuegeneric_OpTrunc64to8(v, config)
+ case OpXor16:
+ return rewriteValuegeneric_OpXor16(v, config)
+ case OpXor32:
+ return rewriteValuegeneric_OpXor32(v, config)
+ case OpXor64:
+ return rewriteValuegeneric_OpXor64(v, config)
+ case OpXor8:
+ return rewriteValuegeneric_OpXor8(v, config)
+ }
+ return false
+}
+func rewriteValuegeneric_OpAdd16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Add16 (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (Const16 [c+d])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst16)
+ v.AuxInt = c + d
+ return true
+ }
+ // match: (Add16 x (Const16 <t> [c]))
+ // cond: x.Op != OpConst16
+ // result: (Add16 (Const16 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst16) {
+ break
+ }
+ v.reset(OpAdd16)
+ v0 := b.NewValue0(v.Line, OpConst16, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Add16 (Const16 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpAdd32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Add32 (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (Const32 [c+d])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst32)
+ v.AuxInt = c + d
+ return true
+ }
+ // match: (Add32 x (Const32 <t> [c]))
+ // cond: x.Op != OpConst32
+ // result: (Add32 (Const32 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst32) {
+ break
+ }
+ v.reset(OpAdd32)
+ v0 := b.NewValue0(v.Line, OpConst32, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Add32 (Const32 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpAdd64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Add64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const64 [c+d])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst64)
+ v.AuxInt = c + d
+ return true
+ }
+ // match: (Add64 x (Const64 <t> [c]))
+ // cond: x.Op != OpConst64
+ // result: (Add64 (Const64 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst64) {
+ break
+ }
+ v.reset(OpAdd64)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Add64 (Const64 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpAdd8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Add8 (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (Const8 [c+d])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst8)
+ v.AuxInt = c + d
+ return true
+ }
+ // match: (Add8 x (Const8 <t> [c]))
+ // cond: x.Op != OpConst8
+ // result: (Add8 (Const8 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst8) {
+ break
+ }
+ v.reset(OpAdd8)
+ v0 := b.NewValue0(v.Line, OpConst8, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Add8 (Const8 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpAnd16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (And16 x (Const16 <t> [c]))
+ // cond: x.Op != OpConst16
+ // result: (And16 (Const16 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst16) {
+ break
+ }
+ v.reset(OpAnd16)
+ v0 := b.NewValue0(v.Line, OpConst16, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (And16 x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (And16 (Const16 [-1]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[0].AuxInt != -1 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (And16 (Const16 [0]) _)
+ // cond:
+ // result: (Const16 [0])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst16)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpAnd32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (And32 x (Const32 <t> [c]))
+ // cond: x.Op != OpConst32
+ // result: (And32 (Const32 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst32) {
+ break
+ }
+ v.reset(OpAnd32)
+ v0 := b.NewValue0(v.Line, OpConst32, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (And32 x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (And32 (Const32 [-1]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[0].AuxInt != -1 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (And32 (Const32 [0]) _)
+ // cond:
+ // result: (Const32 [0])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst32)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (And32 <t> (Const32 [y]) x)
+ // cond: nlz(int64(int32(y))) + nto(int64(int32(y))) == 64
+ // result: (Rsh32Ux32 (Lsh32x32 <t> x (Const32 <t> [nlz(int64(int32(y)))-32])) (Const32 <t> [nlz(int64(int32(y)))-32]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ y := v.Args[0].AuxInt
+ x := v.Args[1]
+ if !(nlz(int64(int32(y)))+nto(int64(int32(y))) == 64) {
+ break
+ }
+ v.reset(OpRsh32Ux32)
+ v0 := b.NewValue0(v.Line, OpLsh32x32, t)
+ v0.AddArg(x)
+ v1 := b.NewValue0(v.Line, OpConst32, t)
+ v1.AuxInt = nlz(int64(int32(y))) - 32
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ v2 := b.NewValue0(v.Line, OpConst32, t)
+ v2.AuxInt = nlz(int64(int32(y))) - 32
+ v.AddArg(v2)
+ return true
+ }
+ // match: (And32 <t> (Const32 [y]) x)
+ // cond: nlo(int64(int32(y))) + ntz(int64(int32(y))) == 64
+ // result: (Lsh32x32 (Rsh32Ux32 <t> x (Const32 <t> [ntz(int64(int32(y)))])) (Const32 <t> [ntz(int64(int32(y)))]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ y := v.Args[0].AuxInt
+ x := v.Args[1]
+ if !(nlo(int64(int32(y)))+ntz(int64(int32(y))) == 64) {
+ break
+ }
+ v.reset(OpLsh32x32)
+ v0 := b.NewValue0(v.Line, OpRsh32Ux32, t)
+ v0.AddArg(x)
+ v1 := b.NewValue0(v.Line, OpConst32, t)
+ v1.AuxInt = ntz(int64(int32(y)))
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ v2 := b.NewValue0(v.Line, OpConst32, t)
+ v2.AuxInt = ntz(int64(int32(y)))
+ v.AddArg(v2)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpAnd64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (And64 x (Const64 <t> [c]))
+ // cond: x.Op != OpConst64
+ // result: (And64 (Const64 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst64) {
+ break
+ }
+ v.reset(OpAnd64)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (And64 x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (And64 (Const64 [-1]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != -1 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (And64 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (And64 <t> (Const64 [y]) x)
+ // cond: nlz(y) + nto(y) == 64
+ // result: (Rsh64Ux64 (Lsh64x64 <t> x (Const64 <t> [nlz(y)])) (Const64 <t> [nlz(y)]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ y := v.Args[0].AuxInt
+ x := v.Args[1]
+ if !(nlz(y)+nto(y) == 64) {
+ break
+ }
+ v.reset(OpRsh64Ux64)
+ v0 := b.NewValue0(v.Line, OpLsh64x64, t)
+ v0.AddArg(x)
+ v1 := b.NewValue0(v.Line, OpConst64, t)
+ v1.AuxInt = nlz(y)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ v2 := b.NewValue0(v.Line, OpConst64, t)
+ v2.AuxInt = nlz(y)
+ v.AddArg(v2)
+ return true
+ }
+ // match: (And64 <t> (Const64 [y]) x)
+ // cond: nlo(y) + ntz(y) == 64
+ // result: (Lsh64x64 (Rsh64Ux64 <t> x (Const64 <t> [ntz(y)])) (Const64 <t> [ntz(y)]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ y := v.Args[0].AuxInt
+ x := v.Args[1]
+ if !(nlo(y)+ntz(y) == 64) {
+ break
+ }
+ v.reset(OpLsh64x64)
+ v0 := b.NewValue0(v.Line, OpRsh64Ux64, t)
+ v0.AddArg(x)
+ v1 := b.NewValue0(v.Line, OpConst64, t)
+ v1.AuxInt = ntz(y)
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ v2 := b.NewValue0(v.Line, OpConst64, t)
+ v2.AuxInt = ntz(y)
+ v.AddArg(v2)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpAnd8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (And8 x (Const8 <t> [c]))
+ // cond: x.Op != OpConst8
+ // result: (And8 (Const8 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst8) {
+ break
+ }
+ v.reset(OpAnd8)
+ v0 := b.NewValue0(v.Line, OpConst8, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (And8 x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (And8 (Const8 [-1]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[0].AuxInt != -1 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (And8 (Const8 [0]) _)
+ // cond:
+ // result: (Const8 [0])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst8)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpArg(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Arg {n} [off])
+ // cond: v.Type.IsString()
+ // result: (StringMake (Arg <config.fe.TypeBytePtr()> {n} [off]) (Arg <config.fe.TypeInt()> {n} [off+config.PtrSize]))
+ for {
+ n := v.Aux
+ off := v.AuxInt
+ if !(v.Type.IsString()) {
+ break
+ }
+ v.reset(OpStringMake)
+ v0 := b.NewValue0(v.Line, OpArg, config.fe.TypeBytePtr())
+ v0.Aux = n
+ v0.AuxInt = off
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpArg, config.fe.TypeInt())
+ v1.Aux = n
+ v1.AuxInt = off + config.PtrSize
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Arg {n} [off])
+ // cond: v.Type.IsSlice()
+ // result: (SliceMake (Arg <config.fe.TypeBytePtr()> {n} [off]) (Arg <config.fe.TypeInt()> {n} [off+config.PtrSize]) (Arg <config.fe.TypeInt()> {n} [off+2*config.PtrSize]))
+ for {
+ n := v.Aux
+ off := v.AuxInt
+ if !(v.Type.IsSlice()) {
+ break
+ }
+ v.reset(OpSliceMake)
+ v0 := b.NewValue0(v.Line, OpArg, config.fe.TypeBytePtr())
+ v0.Aux = n
+ v0.AuxInt = off
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpArg, config.fe.TypeInt())
+ v1.Aux = n
+ v1.AuxInt = off + config.PtrSize
+ v.AddArg(v1)
+ v2 := b.NewValue0(v.Line, OpArg, config.fe.TypeInt())
+ v2.Aux = n
+ v2.AuxInt = off + 2*config.PtrSize
+ v.AddArg(v2)
+ return true
+ }
+ // match: (Arg {n} [off])
+ // cond: v.Type.IsInterface()
+ // result: (IMake (Arg <config.fe.TypeBytePtr()> {n} [off]) (Arg <config.fe.TypeBytePtr()> {n} [off+config.PtrSize]))
+ for {
+ n := v.Aux
+ off := v.AuxInt
+ if !(v.Type.IsInterface()) {
+ break
+ }
+ v.reset(OpIMake)
+ v0 := b.NewValue0(v.Line, OpArg, config.fe.TypeBytePtr())
+ v0.Aux = n
+ v0.AuxInt = off
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpArg, config.fe.TypeBytePtr())
+ v1.Aux = n
+ v1.AuxInt = off + config.PtrSize
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Arg {n} [off])
+ // cond: v.Type.IsComplex() && v.Type.Size() == 16
+ // result: (ComplexMake (Arg <config.fe.TypeFloat64()> {n} [off]) (Arg <config.fe.TypeFloat64()> {n} [off+8]))
+ for {
+ n := v.Aux
+ off := v.AuxInt
+ if !(v.Type.IsComplex() && v.Type.Size() == 16) {
+ break
+ }
+ v.reset(OpComplexMake)
+ v0 := b.NewValue0(v.Line, OpArg, config.fe.TypeFloat64())
+ v0.Aux = n
+ v0.AuxInt = off
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpArg, config.fe.TypeFloat64())
+ v1.Aux = n
+ v1.AuxInt = off + 8
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Arg {n} [off])
+ // cond: v.Type.IsComplex() && v.Type.Size() == 8
+ // result: (ComplexMake (Arg <config.fe.TypeFloat32()> {n} [off]) (Arg <config.fe.TypeFloat32()> {n} [off+4]))
+ for {
+ n := v.Aux
+ off := v.AuxInt
+ if !(v.Type.IsComplex() && v.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpComplexMake)
+ v0 := b.NewValue0(v.Line, OpArg, config.fe.TypeFloat32())
+ v0.Aux = n
+ v0.AuxInt = off
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpArg, config.fe.TypeFloat32())
+ v1.Aux = n
+ v1.AuxInt = off + 4
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Arg <t>)
+ // cond: t.IsStruct() && t.NumFields() == 0 && config.fe.CanSSA(t)
+ // result: (StructMake0)
+ for {
+ t := v.Type
+ if !(t.IsStruct() && t.NumFields() == 0 && config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpStructMake0)
+ return true
+ }
+ // match: (Arg <t> {n} [off])
+ // cond: t.IsStruct() && t.NumFields() == 1 && config.fe.CanSSA(t)
+ // result: (StructMake1 (Arg <t.FieldType(0)> {n} [off+t.FieldOff(0)]))
+ for {
+ t := v.Type
+ n := v.Aux
+ off := v.AuxInt
+ if !(t.IsStruct() && t.NumFields() == 1 && config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpStructMake1)
+ v0 := b.NewValue0(v.Line, OpArg, t.FieldType(0))
+ v0.Aux = n
+ v0.AuxInt = off + t.FieldOff(0)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Arg <t> {n} [off])
+ // cond: t.IsStruct() && t.NumFields() == 2 && config.fe.CanSSA(t)
+ // result: (StructMake2 (Arg <t.FieldType(0)> {n} [off+t.FieldOff(0)]) (Arg <t.FieldType(1)> {n} [off+t.FieldOff(1)]))
+ for {
+ t := v.Type
+ n := v.Aux
+ off := v.AuxInt
+ if !(t.IsStruct() && t.NumFields() == 2 && config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpStructMake2)
+ v0 := b.NewValue0(v.Line, OpArg, t.FieldType(0))
+ v0.Aux = n
+ v0.AuxInt = off + t.FieldOff(0)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpArg, t.FieldType(1))
+ v1.Aux = n
+ v1.AuxInt = off + t.FieldOff(1)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Arg <t> {n} [off])
+ // cond: t.IsStruct() && t.NumFields() == 3 && config.fe.CanSSA(t)
+ // result: (StructMake3 (Arg <t.FieldType(0)> {n} [off+t.FieldOff(0)]) (Arg <t.FieldType(1)> {n} [off+t.FieldOff(1)]) (Arg <t.FieldType(2)> {n} [off+t.FieldOff(2)]))
+ for {
+ t := v.Type
+ n := v.Aux
+ off := v.AuxInt
+ if !(t.IsStruct() && t.NumFields() == 3 && config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpStructMake3)
+ v0 := b.NewValue0(v.Line, OpArg, t.FieldType(0))
+ v0.Aux = n
+ v0.AuxInt = off + t.FieldOff(0)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpArg, t.FieldType(1))
+ v1.Aux = n
+ v1.AuxInt = off + t.FieldOff(1)
+ v.AddArg(v1)
+ v2 := b.NewValue0(v.Line, OpArg, t.FieldType(2))
+ v2.Aux = n
+ v2.AuxInt = off + t.FieldOff(2)
+ v.AddArg(v2)
+ return true
+ }
+ // match: (Arg <t> {n} [off])
+ // cond: t.IsStruct() && t.NumFields() == 4 && config.fe.CanSSA(t)
+ // result: (StructMake4 (Arg <t.FieldType(0)> {n} [off+t.FieldOff(0)]) (Arg <t.FieldType(1)> {n} [off+t.FieldOff(1)]) (Arg <t.FieldType(2)> {n} [off+t.FieldOff(2)]) (Arg <t.FieldType(3)> {n} [off+t.FieldOff(3)]))
+ for {
+ t := v.Type
+ n := v.Aux
+ off := v.AuxInt
+ if !(t.IsStruct() && t.NumFields() == 4 && config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpStructMake4)
+ v0 := b.NewValue0(v.Line, OpArg, t.FieldType(0))
+ v0.Aux = n
+ v0.AuxInt = off + t.FieldOff(0)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpArg, t.FieldType(1))
+ v1.Aux = n
+ v1.AuxInt = off + t.FieldOff(1)
+ v.AddArg(v1)
+ v2 := b.NewValue0(v.Line, OpArg, t.FieldType(2))
+ v2.Aux = n
+ v2.AuxInt = off + t.FieldOff(2)
+ v.AddArg(v2)
+ v3 := b.NewValue0(v.Line, OpArg, t.FieldType(3))
+ v3.Aux = n
+ v3.AuxInt = off + t.FieldOff(3)
+ v.AddArg(v3)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpArrayIndex(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ArrayIndex <t> [0] (Load ptr mem))
+ // cond:
+ // result: @v.Args[0].Block (Load <t> ptr mem)
+ for {
+ t := v.Type
+ if v.AuxInt != 0 {
+ break
+ }
+ if v.Args[0].Op != OpLoad {
+ break
+ }
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[0].Args[1]
+ b = v.Args[0].Block
+ v0 := b.NewValue0(v.Line, OpLoad, t)
+ v.reset(OpCopy)
+ v.AddArg(v0)
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpCom16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Com16 (Com16 x))
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpCom16 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpCom32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Com32 (Com32 x))
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpCom32 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpCom64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Com64 (Com64 x))
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpCom64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpCom8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Com8 (Com8 x))
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpCom8 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpComplexImag(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ComplexImag (ComplexMake _ imag ))
+ // cond:
+ // result: imag
+ for {
+ if v.Args[0].Op != OpComplexMake {
+ break
+ }
+ imag := v.Args[0].Args[1]
+ v.reset(OpCopy)
+ v.Type = imag.Type
+ v.AddArg(imag)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpComplexReal(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ComplexReal (ComplexMake real _ ))
+ // cond:
+ // result: real
+ for {
+ if v.Args[0].Op != OpComplexMake {
+ break
+ }
+ real := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = real.Type
+ v.AddArg(real)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpConstInterface(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ConstInterface)
+ // cond:
+ // result: (IMake (ConstNil <config.fe.TypeBytePtr()>) (ConstNil <config.fe.TypeBytePtr()>))
+ for {
+ v.reset(OpIMake)
+ v0 := b.NewValue0(v.Line, OpConstNil, config.fe.TypeBytePtr())
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpConstNil, config.fe.TypeBytePtr())
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpConstSlice(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ConstSlice)
+ // cond: config.PtrSize == 4
+ // result: (SliceMake (ConstNil <config.fe.TypeBytePtr()>) (Const32 <config.fe.TypeInt()> [0]) (Const32 <config.fe.TypeInt()> [0]))
+ for {
+ if !(config.PtrSize == 4) {
+ break
+ }
+ v.reset(OpSliceMake)
+ v0 := b.NewValue0(v.Line, OpConstNil, config.fe.TypeBytePtr())
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpConst32, config.fe.TypeInt())
+ v1.AuxInt = 0
+ v.AddArg(v1)
+ v2 := b.NewValue0(v.Line, OpConst32, config.fe.TypeInt())
+ v2.AuxInt = 0
+ v.AddArg(v2)
+ return true
+ }
+ // match: (ConstSlice)
+ // cond: config.PtrSize == 8
+ // result: (SliceMake (ConstNil <config.fe.TypeBytePtr()>) (Const64 <config.fe.TypeInt()> [0]) (Const64 <config.fe.TypeInt()> [0]))
+ for {
+ if !(config.PtrSize == 8) {
+ break
+ }
+ v.reset(OpSliceMake)
+ v0 := b.NewValue0(v.Line, OpConstNil, config.fe.TypeBytePtr())
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpConst64, config.fe.TypeInt())
+ v1.AuxInt = 0
+ v.AddArg(v1)
+ v2 := b.NewValue0(v.Line, OpConst64, config.fe.TypeInt())
+ v2.AuxInt = 0
+ v.AddArg(v2)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpConstString(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ConstString {s})
+ // cond: config.PtrSize == 4 && s.(string) == ""
+ // result: (StringMake (ConstNil) (Const32 <config.fe.TypeInt()> [0]))
+ for {
+ s := v.Aux
+ if !(config.PtrSize == 4 && s.(string) == "") {
+ break
+ }
+ v.reset(OpStringMake)
+ v0 := b.NewValue0(v.Line, OpConstNil, config.fe.TypeBytePtr())
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpConst32, config.fe.TypeInt())
+ v1.AuxInt = 0
+ v.AddArg(v1)
+ return true
+ }
+ // match: (ConstString {s})
+ // cond: config.PtrSize == 8 && s.(string) == ""
+ // result: (StringMake (ConstNil) (Const64 <config.fe.TypeInt()> [0]))
+ for {
+ s := v.Aux
+ if !(config.PtrSize == 8 && s.(string) == "") {
+ break
+ }
+ v.reset(OpStringMake)
+ v0 := b.NewValue0(v.Line, OpConstNil, config.fe.TypeBytePtr())
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpConst64, config.fe.TypeInt())
+ v1.AuxInt = 0
+ v.AddArg(v1)
+ return true
+ }
+ // match: (ConstString {s})
+ // cond: config.PtrSize == 4 && s.(string) != ""
+ // result: (StringMake (Addr <config.fe.TypeBytePtr()> {config.fe.StringData(s.(string))} (SB)) (Const32 <config.fe.TypeInt()> [int64(len(s.(string)))]))
+ for {
+ s := v.Aux
+ if !(config.PtrSize == 4 && s.(string) != "") {
+ break
+ }
+ v.reset(OpStringMake)
+ v0 := b.NewValue0(v.Line, OpAddr, config.fe.TypeBytePtr())
+ v0.Aux = config.fe.StringData(s.(string))
+ v1 := b.NewValue0(v.Line, OpSB, config.fe.TypeUintptr())
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ v2 := b.NewValue0(v.Line, OpConst32, config.fe.TypeInt())
+ v2.AuxInt = int64(len(s.(string)))
+ v.AddArg(v2)
+ return true
+ }
+ // match: (ConstString {s})
+ // cond: config.PtrSize == 8 && s.(string) != ""
+ // result: (StringMake (Addr <config.fe.TypeBytePtr()> {config.fe.StringData(s.(string))} (SB)) (Const64 <config.fe.TypeInt()> [int64(len(s.(string)))]))
+ for {
+ s := v.Aux
+ if !(config.PtrSize == 8 && s.(string) != "") {
+ break
+ }
+ v.reset(OpStringMake)
+ v0 := b.NewValue0(v.Line, OpAddr, config.fe.TypeBytePtr())
+ v0.Aux = config.fe.StringData(s.(string))
+ v1 := b.NewValue0(v.Line, OpSB, config.fe.TypeUintptr())
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ v2 := b.NewValue0(v.Line, OpConst64, config.fe.TypeInt())
+ v2.AuxInt = int64(len(s.(string)))
+ v.AddArg(v2)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpConvert(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Convert (Add64 (Convert ptr mem) off) mem)
+ // cond:
+ // result: (Add64 ptr off)
+ for {
+ if v.Args[0].Op != OpAdd64 {
+ break
+ }
+ if v.Args[0].Args[0].Op != OpConvert {
+ break
+ }
+ ptr := v.Args[0].Args[0].Args[0]
+ mem := v.Args[0].Args[0].Args[1]
+ off := v.Args[0].Args[1]
+ if v.Args[1] != mem {
+ break
+ }
+ v.reset(OpAdd64)
+ v.AddArg(ptr)
+ v.AddArg(off)
+ return true
+ }
+ // match: (Convert (Add64 off (Convert ptr mem)) mem)
+ // cond:
+ // result: (Add64 ptr off)
+ for {
+ if v.Args[0].Op != OpAdd64 {
+ break
+ }
+ off := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConvert {
+ break
+ }
+ ptr := v.Args[0].Args[1].Args[0]
+ mem := v.Args[0].Args[1].Args[1]
+ if v.Args[1] != mem {
+ break
+ }
+ v.reset(OpAdd64)
+ v.AddArg(ptr)
+ v.AddArg(off)
+ return true
+ }
+ // match: (Convert (Convert ptr mem) mem)
+ // cond:
+ // result: ptr
+ for {
+ if v.Args[0].Op != OpConvert {
+ break
+ }
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[0].Args[1]
+ if v.Args[1] != mem {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = ptr.Type
+ v.AddArg(ptr)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpDiv64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div64 <t> x (Const64 [c]))
+ // cond: c > 0 && smagic64ok(c) && smagic64m(c) > 0
+ // result: (Sub64 <t> (Rsh64x64 <t> (Hmul64 <t> (Const64 <t> [smagic64m(c)]) x) (Const64 <t> [smagic64s(c)])) (Rsh64x64 <t> x (Const64 <t> [63])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(c > 0 && smagic64ok(c) && smagic64m(c) > 0) {
+ break
+ }
+ v.reset(OpSub64)
+ v.Type = t
+ v0 := b.NewValue0(v.Line, OpRsh64x64, t)
+ v1 := b.NewValue0(v.Line, OpHmul64, t)
+ v2 := b.NewValue0(v.Line, OpConst64, t)
+ v2.AuxInt = smagic64m(c)
+ v1.AddArg(v2)
+ v1.AddArg(x)
+ v0.AddArg(v1)
+ v3 := b.NewValue0(v.Line, OpConst64, t)
+ v3.AuxInt = smagic64s(c)
+ v0.AddArg(v3)
+ v.AddArg(v0)
+ v4 := b.NewValue0(v.Line, OpRsh64x64, t)
+ v4.AddArg(x)
+ v5 := b.NewValue0(v.Line, OpConst64, t)
+ v5.AuxInt = 63
+ v4.AddArg(v5)
+ v.AddArg(v4)
+ return true
+ }
+ // match: (Div64 <t> x (Const64 [c]))
+ // cond: c > 0 && smagic64ok(c) && smagic64m(c) < 0
+ // result: (Sub64 <t> (Rsh64x64 <t> (Add64 <t> (Hmul64 <t> (Const64 <t> [smagic64m(c)]) x) x) (Const64 <t> [smagic64s(c)])) (Rsh64x64 <t> x (Const64 <t> [63])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(c > 0 && smagic64ok(c) && smagic64m(c) < 0) {
+ break
+ }
+ v.reset(OpSub64)
+ v.Type = t
+ v0 := b.NewValue0(v.Line, OpRsh64x64, t)
+ v1 := b.NewValue0(v.Line, OpAdd64, t)
+ v2 := b.NewValue0(v.Line, OpHmul64, t)
+ v3 := b.NewValue0(v.Line, OpConst64, t)
+ v3.AuxInt = smagic64m(c)
+ v2.AddArg(v3)
+ v2.AddArg(x)
+ v1.AddArg(v2)
+ v1.AddArg(x)
+ v0.AddArg(v1)
+ v4 := b.NewValue0(v.Line, OpConst64, t)
+ v4.AuxInt = smagic64s(c)
+ v0.AddArg(v4)
+ v.AddArg(v0)
+ v5 := b.NewValue0(v.Line, OpRsh64x64, t)
+ v5.AddArg(x)
+ v6 := b.NewValue0(v.Line, OpConst64, t)
+ v6.AuxInt = 63
+ v5.AddArg(v6)
+ v.AddArg(v5)
+ return true
+ }
+ // match: (Div64 <t> x (Const64 [c]))
+ // cond: c < 0 && smagic64ok(c) && smagic64m(c) > 0
+ // result: (Neg64 <t> (Sub64 <t> (Rsh64x64 <t> (Hmul64 <t> (Const64 <t> [smagic64m(c)]) x) (Const64 <t> [smagic64s(c)])) (Rsh64x64 <t> x (Const64 <t> [63]))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(c < 0 && smagic64ok(c) && smagic64m(c) > 0) {
+ break
+ }
+ v.reset(OpNeg64)
+ v.Type = t
+ v0 := b.NewValue0(v.Line, OpSub64, t)
+ v1 := b.NewValue0(v.Line, OpRsh64x64, t)
+ v2 := b.NewValue0(v.Line, OpHmul64, t)
+ v3 := b.NewValue0(v.Line, OpConst64, t)
+ v3.AuxInt = smagic64m(c)
+ v2.AddArg(v3)
+ v2.AddArg(x)
+ v1.AddArg(v2)
+ v4 := b.NewValue0(v.Line, OpConst64, t)
+ v4.AuxInt = smagic64s(c)
+ v1.AddArg(v4)
+ v0.AddArg(v1)
+ v5 := b.NewValue0(v.Line, OpRsh64x64, t)
+ v5.AddArg(x)
+ v6 := b.NewValue0(v.Line, OpConst64, t)
+ v6.AuxInt = 63
+ v5.AddArg(v6)
+ v0.AddArg(v5)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Div64 <t> x (Const64 [c]))
+ // cond: c < 0 && smagic64ok(c) && smagic64m(c) < 0
+ // result: (Neg64 <t> (Sub64 <t> (Rsh64x64 <t> (Add64 <t> (Hmul64 <t> (Const64 <t> [smagic64m(c)]) x) x) (Const64 <t> [smagic64s(c)])) (Rsh64x64 <t> x (Const64 <t> [63]))))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(c < 0 && smagic64ok(c) && smagic64m(c) < 0) {
+ break
+ }
+ v.reset(OpNeg64)
+ v.Type = t
+ v0 := b.NewValue0(v.Line, OpSub64, t)
+ v1 := b.NewValue0(v.Line, OpRsh64x64, t)
+ v2 := b.NewValue0(v.Line, OpAdd64, t)
+ v3 := b.NewValue0(v.Line, OpHmul64, t)
+ v4 := b.NewValue0(v.Line, OpConst64, t)
+ v4.AuxInt = smagic64m(c)
+ v3.AddArg(v4)
+ v3.AddArg(x)
+ v2.AddArg(v3)
+ v2.AddArg(x)
+ v1.AddArg(v2)
+ v5 := b.NewValue0(v.Line, OpConst64, t)
+ v5.AuxInt = smagic64s(c)
+ v1.AddArg(v5)
+ v0.AddArg(v1)
+ v6 := b.NewValue0(v.Line, OpRsh64x64, t)
+ v6.AddArg(x)
+ v7 := b.NewValue0(v.Line, OpConst64, t)
+ v7.AuxInt = 63
+ v6.AddArg(v7)
+ v0.AddArg(v6)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpDiv64u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Div64u <t> x (Const64 [c]))
+ // cond: umagic64ok(c) && !umagic64a(c)
+ // result: (Rsh64Ux64 (Hmul64u <t> (Const64 <t> [umagic64m(c)]) x) (Const64 <t> [umagic64s(c)]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(umagic64ok(c) && !umagic64a(c)) {
+ break
+ }
+ v.reset(OpRsh64Ux64)
+ v0 := b.NewValue0(v.Line, OpHmul64u, t)
+ v1 := b.NewValue0(v.Line, OpConst64, t)
+ v1.AuxInt = umagic64m(c)
+ v0.AddArg(v1)
+ v0.AddArg(x)
+ v.AddArg(v0)
+ v2 := b.NewValue0(v.Line, OpConst64, t)
+ v2.AuxInt = umagic64s(c)
+ v.AddArg(v2)
+ return true
+ }
+ // match: (Div64u <t> x (Const64 [c]))
+ // cond: umagic64ok(c) && umagic64a(c)
+ // result: (Rsh64Ux64 (Avg64u <t> (Hmul64u <t> x (Const64 <t> [umagic64m(c)])) x) (Const64 <t> [umagic64s(c)-1]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(umagic64ok(c) && umagic64a(c)) {
+ break
+ }
+ v.reset(OpRsh64Ux64)
+ v0 := b.NewValue0(v.Line, OpAvg64u, t)
+ v1 := b.NewValue0(v.Line, OpHmul64u, t)
+ v1.AddArg(x)
+ v2 := b.NewValue0(v.Line, OpConst64, t)
+ v2.AuxInt = umagic64m(c)
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v0.AddArg(x)
+ v.AddArg(v0)
+ v3 := b.NewValue0(v.Line, OpConst64, t)
+ v3.AuxInt = umagic64s(c) - 1
+ v.AddArg(v3)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpEq16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Eq16 x x)
+ // cond:
+ // result: (ConstBool [1])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConstBool)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (Eq16 (Const16 <t> [c]) (Add16 (Const16 <t> [d]) x))
+ // cond:
+ // result: (Eq16 (Const16 <t> [c-d]) x)
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ t := v.Args[0].Type
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpAdd16 {
+ break
+ }
+ if v.Args[1].Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[1].Args[0].Type != v.Args[0].Type {
+ break
+ }
+ d := v.Args[1].Args[0].AuxInt
+ x := v.Args[1].Args[1]
+ v.reset(OpEq16)
+ v0 := b.NewValue0(v.Line, OpConst16, t)
+ v0.AuxInt = c - d
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Eq16 x (Const16 <t> [c]))
+ // cond: x.Op != OpConst16
+ // result: (Eq16 (Const16 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst16) {
+ break
+ }
+ v.reset(OpEq16)
+ v0 := b.NewValue0(v.Line, OpConst16, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Eq16 (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int16(c) == int16(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int16(c) == int16(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpEq32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Eq32 x x)
+ // cond:
+ // result: (ConstBool [1])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConstBool)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (Eq32 (Const32 <t> [c]) (Add32 (Const32 <t> [d]) x))
+ // cond:
+ // result: (Eq32 (Const32 <t> [c-d]) x)
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ t := v.Args[0].Type
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpAdd32 {
+ break
+ }
+ if v.Args[1].Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[1].Args[0].Type != v.Args[0].Type {
+ break
+ }
+ d := v.Args[1].Args[0].AuxInt
+ x := v.Args[1].Args[1]
+ v.reset(OpEq32)
+ v0 := b.NewValue0(v.Line, OpConst32, t)
+ v0.AuxInt = c - d
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Eq32 x (Const32 <t> [c]))
+ // cond: x.Op != OpConst32
+ // result: (Eq32 (Const32 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst32) {
+ break
+ }
+ v.reset(OpEq32)
+ v0 := b.NewValue0(v.Line, OpConst32, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Eq32 (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int32(c) == int32(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int32(c) == int32(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpEq64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Eq64 x x)
+ // cond:
+ // result: (ConstBool [1])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConstBool)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (Eq64 (Const64 <t> [c]) (Add64 (Const64 <t> [d]) x))
+ // cond:
+ // result: (Eq64 (Const64 <t> [c-d]) x)
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ t := v.Args[0].Type
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpAdd64 {
+ break
+ }
+ if v.Args[1].Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].Args[0].Type != v.Args[0].Type {
+ break
+ }
+ d := v.Args[1].Args[0].AuxInt
+ x := v.Args[1].Args[1]
+ v.reset(OpEq64)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c - d
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Eq64 x (Const64 <t> [c]))
+ // cond: x.Op != OpConst64
+ // result: (Eq64 (Const64 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst64) {
+ break
+ }
+ v.reset(OpEq64)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Eq64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int64(c) == int64(d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int64(c) == int64(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpEq8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Eq8 x x)
+ // cond:
+ // result: (ConstBool [1])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConstBool)
+ v.AuxInt = 1
+ return true
+ }
+ // match: (Eq8 (ConstBool [c]) (ConstBool [d]))
+ // cond:
+ // result: (ConstBool [b2i((int8(c) != 0) == (int8(d) != 0))])
+ for {
+ if v.Args[0].Op != OpConstBool {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConstBool {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i((int8(c) != 0) == (int8(d) != 0))
+ return true
+ }
+ // match: (Eq8 (ConstBool [0]) x)
+ // cond:
+ // result: (Not x)
+ for {
+ if v.Args[0].Op != OpConstBool {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpNot)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Eq8 (ConstBool [1]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConstBool {
+ break
+ }
+ if v.Args[0].AuxInt != 1 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Eq8 (Const8 <t> [c]) (Add8 (Const8 <t> [d]) x))
+ // cond:
+ // result: (Eq8 (Const8 <t> [c-d]) x)
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ t := v.Args[0].Type
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpAdd8 {
+ break
+ }
+ if v.Args[1].Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[1].Args[0].Type != v.Args[0].Type {
+ break
+ }
+ d := v.Args[1].Args[0].AuxInt
+ x := v.Args[1].Args[1]
+ v.reset(OpEq8)
+ v0 := b.NewValue0(v.Line, OpConst8, t)
+ v0.AuxInt = c - d
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Eq8 x (Const8 <t> [c]))
+ // cond: x.Op != OpConst8
+ // result: (Eq8 (Const8 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst8) {
+ break
+ }
+ v.reset(OpEq8)
+ v0 := b.NewValue0(v.Line, OpConst8, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Eq8 x (ConstBool <t> [c]))
+ // cond: x.Op != OpConstBool
+ // result: (Eq8 (ConstBool <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConstBool {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConstBool) {
+ break
+ }
+ v.reset(OpEq8)
+ v0 := b.NewValue0(v.Line, OpConstBool, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Eq8 (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int8(c) == int8(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int8(c) == int8(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpEqInter(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (EqInter x y)
+ // cond:
+ // result: (EqPtr (ITab x) (ITab y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpEqPtr)
+ v0 := b.NewValue0(v.Line, OpITab, config.fe.TypeBytePtr())
+ v0.AddArg(x)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpITab, config.fe.TypeBytePtr())
+ v1.AddArg(y)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpEqPtr(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (EqPtr p (ConstNil))
+ // cond:
+ // result: (Not (IsNonNil p))
+ for {
+ p := v.Args[0]
+ if v.Args[1].Op != OpConstNil {
+ break
+ }
+ v.reset(OpNot)
+ v0 := b.NewValue0(v.Line, OpIsNonNil, config.fe.TypeBool())
+ v0.AddArg(p)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (EqPtr (ConstNil) p)
+ // cond:
+ // result: (Not (IsNonNil p))
+ for {
+ if v.Args[0].Op != OpConstNil {
+ break
+ }
+ p := v.Args[1]
+ v.reset(OpNot)
+ v0 := b.NewValue0(v.Line, OpIsNonNil, config.fe.TypeBool())
+ v0.AddArg(p)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpEqSlice(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (EqSlice x y)
+ // cond:
+ // result: (EqPtr (SlicePtr x) (SlicePtr y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpEqPtr)
+ v0 := b.NewValue0(v.Line, OpSlicePtr, config.fe.TypeBytePtr())
+ v0.AddArg(x)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpSlicePtr, config.fe.TypeBytePtr())
+ v1.AddArg(y)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGeq16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq16 (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int16(c) >= int16(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int16(c) >= int16(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGeq16U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq16U (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint16(c) >= uint16(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint16(c) >= uint16(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGeq32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq32 (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int32(c) >= int32(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int32(c) >= int32(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGeq32U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq32U (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint32(c) >= uint32(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint32(c) >= uint32(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGeq64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int64(c) >= int64(d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int64(c) >= int64(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGeq64U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq64U (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint64(c) >= uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint64(c) >= uint64(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGeq8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq8 (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int8(c) >= int8(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int8(c) >= int8(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGeq8U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Geq8U (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint8(c) >= uint8(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint8(c) >= uint8(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGreater16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater16 (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int16(c) > int16(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int16(c) > int16(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGreater16U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater16U (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint16(c) > uint16(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint16(c) > uint16(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGreater32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater32 (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int32(c) > int32(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int32(c) > int32(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGreater32U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater32U (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint32(c) > uint32(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint32(c) > uint32(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGreater64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int64(c) > int64(d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int64(c) > int64(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGreater64U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater64U (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint64(c) > uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint64(c) > uint64(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGreater8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater8 (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int8(c) > int8(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int8(c) > int8(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpGreater8U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Greater8U (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint8(c) > uint8(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint8(c) > uint8(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpIData(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (IData (IMake _ data))
+ // cond:
+ // result: data
+ for {
+ if v.Args[0].Op != OpIMake {
+ break
+ }
+ data := v.Args[0].Args[1]
+ v.reset(OpCopy)
+ v.Type = data.Type
+ v.AddArg(data)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpITab(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (ITab (IMake itab _))
+ // cond:
+ // result: itab
+ for {
+ if v.Args[0].Op != OpIMake {
+ break
+ }
+ itab := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = itab.Type
+ v.AddArg(itab)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpIsInBounds(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (IsInBounds (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(inBounds32(c,d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(inBounds32(c, d))
+ return true
+ }
+ // match: (IsInBounds (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(inBounds64(c,d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(inBounds64(c, d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpIsSliceInBounds(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (IsSliceInBounds (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(sliceInBounds32(c,d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(sliceInBounds32(c, d))
+ return true
+ }
+ // match: (IsSliceInBounds (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(sliceInBounds64(c,d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(sliceInBounds64(c, d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLeq16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq16 (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int16(c) <= int16(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int16(c) <= int16(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLeq16U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq16U (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint16(c) <= uint16(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint16(c) <= uint16(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLeq32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq32 (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int32(c) <= int32(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int32(c) <= int32(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLeq32U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq32U (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint32(c) <= uint32(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint32(c) <= uint32(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLeq64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int64(c) <= int64(d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int64(c) <= int64(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLeq64U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq64U (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint64(c) <= uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint64(c) <= uint64(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLeq8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq8 (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int8(c) <= int8(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int8(c) <= int8(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLeq8U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Leq8U (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint8(c) <= uint8(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint8(c) <= uint8(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLess16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less16 (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int16(c) < int16(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int16(c) < int16(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLess16U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less16U (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint16(c) < uint16(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint16(c) < uint16(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLess32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less32 (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int32(c) < int32(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int32(c) < int32(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLess32U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less32U (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint32(c) < uint32(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint32(c) < uint32(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLess64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int64(c) < int64(d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int64(c) < int64(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLess64U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less64U (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint64(c) < uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint64(c) < uint64(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLess8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less8 (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int8(c) < int8(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int8(c) < int8(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLess8U(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Less8U (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (ConstBool [b2i(uint8(c) < uint8(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(uint8(c) < uint8(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLoad(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Load <t1> p1 (Store [w] p2 x _))
+ // cond: isSamePtr(p1,p2) && t1.Compare(x.Type)==CMPeq && w == t1.Size()
+ // result: x
+ for {
+ t1 := v.Type
+ p1 := v.Args[0]
+ if v.Args[1].Op != OpStore {
+ break
+ }
+ w := v.Args[1].AuxInt
+ p2 := v.Args[1].Args[0]
+ x := v.Args[1].Args[1]
+ if !(isSamePtr(p1, p2) && t1.Compare(x.Type) == CMPeq && w == t1.Size()) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Load <t> _ _)
+ // cond: t.IsStruct() && t.NumFields() == 0 && config.fe.CanSSA(t)
+ // result: (StructMake0)
+ for {
+ t := v.Type
+ if !(t.IsStruct() && t.NumFields() == 0 && config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpStructMake0)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: t.IsStruct() && t.NumFields() == 1 && config.fe.CanSSA(t)
+ // result: (StructMake1 (Load <t.FieldType(0)> ptr mem))
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(t.IsStruct() && t.NumFields() == 1 && config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpStructMake1)
+ v0 := b.NewValue0(v.Line, OpLoad, t.FieldType(0))
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: t.IsStruct() && t.NumFields() == 2 && config.fe.CanSSA(t)
+ // result: (StructMake2 (Load <t.FieldType(0)> ptr mem) (Load <t.FieldType(1)> (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] ptr) mem))
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(t.IsStruct() && t.NumFields() == 2 && config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpStructMake2)
+ v0 := b.NewValue0(v.Line, OpLoad, t.FieldType(0))
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpLoad, t.FieldType(1))
+ v2 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(1).PtrTo())
+ v2.AuxInt = t.FieldOff(1)
+ v2.AddArg(ptr)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: t.IsStruct() && t.NumFields() == 3 && config.fe.CanSSA(t)
+ // result: (StructMake3 (Load <t.FieldType(0)> ptr mem) (Load <t.FieldType(1)> (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] ptr) mem) (Load <t.FieldType(2)> (OffPtr <t.FieldType(2).PtrTo()> [t.FieldOff(2)] ptr) mem))
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(t.IsStruct() && t.NumFields() == 3 && config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpStructMake3)
+ v0 := b.NewValue0(v.Line, OpLoad, t.FieldType(0))
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpLoad, t.FieldType(1))
+ v2 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(1).PtrTo())
+ v2.AuxInt = t.FieldOff(1)
+ v2.AddArg(ptr)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ v3 := b.NewValue0(v.Line, OpLoad, t.FieldType(2))
+ v4 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(2).PtrTo())
+ v4.AuxInt = t.FieldOff(2)
+ v4.AddArg(ptr)
+ v3.AddArg(v4)
+ v3.AddArg(mem)
+ v.AddArg(v3)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: t.IsStruct() && t.NumFields() == 4 && config.fe.CanSSA(t)
+ // result: (StructMake4 (Load <t.FieldType(0)> ptr mem) (Load <t.FieldType(1)> (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] ptr) mem) (Load <t.FieldType(2)> (OffPtr <t.FieldType(2).PtrTo()> [t.FieldOff(2)] ptr) mem) (Load <t.FieldType(3)> (OffPtr <t.FieldType(3).PtrTo()> [t.FieldOff(3)] ptr) mem))
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(t.IsStruct() && t.NumFields() == 4 && config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpStructMake4)
+ v0 := b.NewValue0(v.Line, OpLoad, t.FieldType(0))
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpLoad, t.FieldType(1))
+ v2 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(1).PtrTo())
+ v2.AuxInt = t.FieldOff(1)
+ v2.AddArg(ptr)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ v3 := b.NewValue0(v.Line, OpLoad, t.FieldType(2))
+ v4 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(2).PtrTo())
+ v4.AuxInt = t.FieldOff(2)
+ v4.AddArg(ptr)
+ v3.AddArg(v4)
+ v3.AddArg(mem)
+ v.AddArg(v3)
+ v5 := b.NewValue0(v.Line, OpLoad, t.FieldType(3))
+ v6 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(3).PtrTo())
+ v6.AuxInt = t.FieldOff(3)
+ v6.AddArg(ptr)
+ v5.AddArg(v6)
+ v5.AddArg(mem)
+ v.AddArg(v5)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: t.IsComplex() && t.Size() == 8
+ // result: (ComplexMake (Load <config.fe.TypeFloat32()> ptr mem) (Load <config.fe.TypeFloat32()> (OffPtr <config.fe.TypeFloat32().PtrTo()> [4] ptr) mem) )
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(t.IsComplex() && t.Size() == 8) {
+ break
+ }
+ v.reset(OpComplexMake)
+ v0 := b.NewValue0(v.Line, OpLoad, config.fe.TypeFloat32())
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpLoad, config.fe.TypeFloat32())
+ v2 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeFloat32().PtrTo())
+ v2.AuxInt = 4
+ v2.AddArg(ptr)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: t.IsComplex() && t.Size() == 16
+ // result: (ComplexMake (Load <config.fe.TypeFloat64()> ptr mem) (Load <config.fe.TypeFloat64()> (OffPtr <config.fe.TypeFloat64().PtrTo()> [8] ptr) mem) )
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(t.IsComplex() && t.Size() == 16) {
+ break
+ }
+ v.reset(OpComplexMake)
+ v0 := b.NewValue0(v.Line, OpLoad, config.fe.TypeFloat64())
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpLoad, config.fe.TypeFloat64())
+ v2 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeFloat64().PtrTo())
+ v2.AuxInt = 8
+ v2.AddArg(ptr)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: t.IsString()
+ // result: (StringMake (Load <config.fe.TypeBytePtr()> ptr mem) (Load <config.fe.TypeInt()> (OffPtr <config.fe.TypeInt().PtrTo()> [config.PtrSize] ptr) mem))
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(t.IsString()) {
+ break
+ }
+ v.reset(OpStringMake)
+ v0 := b.NewValue0(v.Line, OpLoad, config.fe.TypeBytePtr())
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpLoad, config.fe.TypeInt())
+ v2 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeInt().PtrTo())
+ v2.AuxInt = config.PtrSize
+ v2.AddArg(ptr)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: t.IsSlice()
+ // result: (SliceMake (Load <config.fe.TypeBytePtr()> ptr mem) (Load <config.fe.TypeInt()> (OffPtr <config.fe.TypeInt().PtrTo()> [config.PtrSize] ptr) mem) (Load <config.fe.TypeInt()> (OffPtr <config.fe.TypeInt().PtrTo()> [2*config.PtrSize] ptr) mem))
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(t.IsSlice()) {
+ break
+ }
+ v.reset(OpSliceMake)
+ v0 := b.NewValue0(v.Line, OpLoad, config.fe.TypeBytePtr())
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpLoad, config.fe.TypeInt())
+ v2 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeInt().PtrTo())
+ v2.AuxInt = config.PtrSize
+ v2.AddArg(ptr)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ v3 := b.NewValue0(v.Line, OpLoad, config.fe.TypeInt())
+ v4 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeInt().PtrTo())
+ v4.AuxInt = 2 * config.PtrSize
+ v4.AddArg(ptr)
+ v3.AddArg(v4)
+ v3.AddArg(mem)
+ v.AddArg(v3)
+ return true
+ }
+ // match: (Load <t> ptr mem)
+ // cond: t.IsInterface()
+ // result: (IMake (Load <config.fe.TypeBytePtr()> ptr mem) (Load <config.fe.TypeBytePtr()> (OffPtr <config.fe.TypeBytePtr().PtrTo()> [config.PtrSize] ptr) mem))
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ mem := v.Args[1]
+ if !(t.IsInterface()) {
+ break
+ }
+ v.reset(OpIMake)
+ v0 := b.NewValue0(v.Line, OpLoad, config.fe.TypeBytePtr())
+ v0.AddArg(ptr)
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpLoad, config.fe.TypeBytePtr())
+ v2 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeBytePtr().PtrTo())
+ v2.AuxInt = config.PtrSize
+ v2.AddArg(ptr)
+ v1.AddArg(v2)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh16x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh16x16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Lsh16x64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh16x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh16x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh16x32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Lsh16x64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh16x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh16x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh16x64 (Const16 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const16 [int64(int16(c) << uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst16)
+ v.AuxInt = int64(int16(c) << uint64(d))
+ return true
+ }
+ // match: (Lsh16x64 (Const16 [0]) _)
+ // cond:
+ // result: (Const16 [0])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst16)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Lsh16x64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Lsh16x64 _ (Const64 [c]))
+ // cond: uint64(c) >= 16
+ // result: (Const16 [0])
+ for {
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(uint64(c) >= 16) {
+ break
+ }
+ v.reset(OpConst16)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Lsh16x64 <t> (Lsh16x64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Lsh16x64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpLsh16x64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpLsh16x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh16x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh16x8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Lsh16x64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh16x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh32x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh32x16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Lsh32x64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh32x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh32x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh32x32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Lsh32x64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh32x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh32x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh32x64 (Const32 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const32 [int64(int32(c) << uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst32)
+ v.AuxInt = int64(int32(c) << uint64(d))
+ return true
+ }
+ // match: (Lsh32x64 (Const32 [0]) _)
+ // cond:
+ // result: (Const32 [0])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst32)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Lsh32x64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Lsh32x64 _ (Const64 [c]))
+ // cond: uint64(c) >= 32
+ // result: (Const32 [0])
+ for {
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(uint64(c) >= 32) {
+ break
+ }
+ v.reset(OpConst32)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Lsh32x64 <t> (Lsh32x64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Lsh32x64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpLsh32x64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpLsh32x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh32x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh32x8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Lsh32x64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh32x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh64x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh64x16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Lsh64x64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh64x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Lsh64x16 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh64x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh64x32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Lsh64x64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh64x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Lsh64x32 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh64x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh64x64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const64 [c << uint64(d)])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst64)
+ v.AuxInt = c << uint64(d)
+ return true
+ }
+ // match: (Lsh64x64 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Lsh64x64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Lsh64x64 _ (Const64 [c]))
+ // cond: uint64(c) >= 64
+ // result: (Const64 [0])
+ for {
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(uint64(c) >= 64) {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Lsh64x64 <t> (Lsh64x64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Lsh64x64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpLsh64x64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpLsh64x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh64x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh64x8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Lsh64x64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh64x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Lsh64x8 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh8x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh8x16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Lsh8x64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh8x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh8x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh8x32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Lsh8x64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh8x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh8x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh8x64 (Const8 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const8 [int64(int8(c) << uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst8)
+ v.AuxInt = int64(int8(c) << uint64(d))
+ return true
+ }
+ // match: (Lsh8x64 (Const8 [0]) _)
+ // cond:
+ // result: (Const8 [0])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst8)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Lsh8x64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Lsh8x64 _ (Const64 [c]))
+ // cond: uint64(c) >= 8
+ // result: (Const8 [0])
+ for {
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(uint64(c) >= 8) {
+ break
+ }
+ v.reset(OpConst8)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Lsh8x64 <t> (Lsh8x64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Lsh8x64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpLsh8x64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpLsh8x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpLsh8x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Lsh8x8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Lsh8x64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpLsh8x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpMod64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mod64 <t> x (Const64 [c]))
+ // cond: smagic64ok(c)
+ // result: (Sub64 x (Mul64 <t> (Div64 <t> x (Const64 <t> [c])) (Const64 <t> [c])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(smagic64ok(c)) {
+ break
+ }
+ v.reset(OpSub64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpMul64, t)
+ v1 := b.NewValue0(v.Line, OpDiv64, t)
+ v1.AddArg(x)
+ v2 := b.NewValue0(v.Line, OpConst64, t)
+ v2.AuxInt = c
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v3 := b.NewValue0(v.Line, OpConst64, t)
+ v3.AuxInt = c
+ v0.AddArg(v3)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpMod64u(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mod64u <t> x (Const64 [c]))
+ // cond: umagic64ok(c)
+ // result: (Sub64 x (Mul64 <t> (Div64u <t> x (Const64 <t> [c])) (Const64 <t> [c])))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(umagic64ok(c)) {
+ break
+ }
+ v.reset(OpSub64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpMul64, t)
+ v1 := b.NewValue0(v.Line, OpDiv64u, t)
+ v1.AddArg(x)
+ v2 := b.NewValue0(v.Line, OpConst64, t)
+ v2.AuxInt = c
+ v1.AddArg(v2)
+ v0.AddArg(v1)
+ v3 := b.NewValue0(v.Line, OpConst64, t)
+ v3.AuxInt = c
+ v0.AddArg(v3)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpMul16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mul16 (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (Const16 [c*d])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst16)
+ v.AuxInt = c * d
+ return true
+ }
+ // match: (Mul16 x (Const16 <t> [c]))
+ // cond: x.Op != OpConst16
+ // result: (Mul16 (Const16 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst16) {
+ break
+ }
+ v.reset(OpMul16)
+ v0 := b.NewValue0(v.Line, OpConst16, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Mul16 (Const16 [0]) _)
+ // cond:
+ // result: (Const16 [0])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst16)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpMul32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mul32 (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (Const32 [c*d])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst32)
+ v.AuxInt = c * d
+ return true
+ }
+ // match: (Mul32 x (Const32 <t> [c]))
+ // cond: x.Op != OpConst32
+ // result: (Mul32 (Const32 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst32) {
+ break
+ }
+ v.reset(OpMul32)
+ v0 := b.NewValue0(v.Line, OpConst32, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Mul32 (Const32 <t> [c]) (Add32 <t> (Const32 <t> [d]) x))
+ // cond:
+ // result: (Add32 (Const32 <t> [c*d]) (Mul32 <t> (Const32 <t> [c]) x))
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ t := v.Args[0].Type
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpAdd32 {
+ break
+ }
+ if v.Args[1].Type != v.Args[0].Type {
+ break
+ }
+ if v.Args[1].Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[1].Args[0].Type != v.Args[0].Type {
+ break
+ }
+ d := v.Args[1].Args[0].AuxInt
+ x := v.Args[1].Args[1]
+ v.reset(OpAdd32)
+ v0 := b.NewValue0(v.Line, OpConst32, t)
+ v0.AuxInt = c * d
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpMul32, t)
+ v2 := b.NewValue0(v.Line, OpConst32, t)
+ v2.AuxInt = c
+ v1.AddArg(v2)
+ v1.AddArg(x)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Mul32 (Const32 [0]) _)
+ // cond:
+ // result: (Const32 [0])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst32)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpMul64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mul64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const64 [c*d])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst64)
+ v.AuxInt = c * d
+ return true
+ }
+ // match: (Mul64 x (Const64 <t> [c]))
+ // cond: x.Op != OpConst64
+ // result: (Mul64 (Const64 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst64) {
+ break
+ }
+ v.reset(OpMul64)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Mul64 (Const64 <t> [c]) (Add64 <t> (Const64 <t> [d]) x))
+ // cond:
+ // result: (Add64 (Const64 <t> [c*d]) (Mul64 <t> (Const64 <t> [c]) x))
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ t := v.Args[0].Type
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpAdd64 {
+ break
+ }
+ if v.Args[1].Type != v.Args[0].Type {
+ break
+ }
+ if v.Args[1].Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].Args[0].Type != v.Args[0].Type {
+ break
+ }
+ d := v.Args[1].Args[0].AuxInt
+ x := v.Args[1].Args[1]
+ v.reset(OpAdd64)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c * d
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpMul64, t)
+ v2 := b.NewValue0(v.Line, OpConst64, t)
+ v2.AuxInt = c
+ v1.AddArg(v2)
+ v1.AddArg(x)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Mul64 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpMul8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Mul8 (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (Const8 [c*d])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst8)
+ v.AuxInt = c * d
+ return true
+ }
+ // match: (Mul8 x (Const8 <t> [c]))
+ // cond: x.Op != OpConst8
+ // result: (Mul8 (Const8 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst8) {
+ break
+ }
+ v.reset(OpMul8)
+ v0 := b.NewValue0(v.Line, OpConst8, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Mul8 (Const8 [0]) _)
+ // cond:
+ // result: (Const8 [0])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst8)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpNeg16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neg16 (Const16 [c]))
+ // cond:
+ // result: (Const16 [-c])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpConst16)
+ v.AuxInt = -c
+ return true
+ }
+ // match: (Neg16 (Sub16 x y))
+ // cond:
+ // result: (Sub16 y x)
+ for {
+ if v.Args[0].Op != OpSub16 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ v.reset(OpSub16)
+ v.AddArg(y)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpNeg32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neg32 (Const32 [c]))
+ // cond:
+ // result: (Const32 [-c])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpConst32)
+ v.AuxInt = -c
+ return true
+ }
+ // match: (Neg32 (Sub32 x y))
+ // cond:
+ // result: (Sub32 y x)
+ for {
+ if v.Args[0].Op != OpSub32 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ v.reset(OpSub32)
+ v.AddArg(y)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpNeg64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neg64 (Const64 [c]))
+ // cond:
+ // result: (Const64 [-c])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpConst64)
+ v.AuxInt = -c
+ return true
+ }
+ // match: (Neg64 (Sub64 x y))
+ // cond:
+ // result: (Sub64 y x)
+ for {
+ if v.Args[0].Op != OpSub64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ v.reset(OpSub64)
+ v.AddArg(y)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpNeg8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neg8 (Const8 [c]))
+ // cond:
+ // result: (Const8 [-c])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpConst8)
+ v.AuxInt = -c
+ return true
+ }
+ // match: (Neg8 (Sub8 x y))
+ // cond:
+ // result: (Sub8 y x)
+ for {
+ if v.Args[0].Op != OpSub8 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ v.reset(OpSub8)
+ v.AddArg(y)
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpNeq16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neq16 x x)
+ // cond:
+ // result: (ConstBool [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConstBool)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Neq16 (Const16 <t> [c]) (Add16 (Const16 <t> [d]) x))
+ // cond:
+ // result: (Neq16 (Const16 <t> [c-d]) x)
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ t := v.Args[0].Type
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpAdd16 {
+ break
+ }
+ if v.Args[1].Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[1].Args[0].Type != v.Args[0].Type {
+ break
+ }
+ d := v.Args[1].Args[0].AuxInt
+ x := v.Args[1].Args[1]
+ v.reset(OpNeq16)
+ v0 := b.NewValue0(v.Line, OpConst16, t)
+ v0.AuxInt = c - d
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Neq16 x (Const16 <t> [c]))
+ // cond: x.Op != OpConst16
+ // result: (Neq16 (Const16 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst16) {
+ break
+ }
+ v.reset(OpNeq16)
+ v0 := b.NewValue0(v.Line, OpConst16, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Neq16 (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int16(c) != int16(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int16(c) != int16(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpNeq32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neq32 x x)
+ // cond:
+ // result: (ConstBool [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConstBool)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Neq32 (Const32 <t> [c]) (Add32 (Const32 <t> [d]) x))
+ // cond:
+ // result: (Neq32 (Const32 <t> [c-d]) x)
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ t := v.Args[0].Type
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpAdd32 {
+ break
+ }
+ if v.Args[1].Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[1].Args[0].Type != v.Args[0].Type {
+ break
+ }
+ d := v.Args[1].Args[0].AuxInt
+ x := v.Args[1].Args[1]
+ v.reset(OpNeq32)
+ v0 := b.NewValue0(v.Line, OpConst32, t)
+ v0.AuxInt = c - d
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Neq32 x (Const32 <t> [c]))
+ // cond: x.Op != OpConst32
+ // result: (Neq32 (Const32 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst32) {
+ break
+ }
+ v.reset(OpNeq32)
+ v0 := b.NewValue0(v.Line, OpConst32, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Neq32 (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int32(c) != int32(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int32(c) != int32(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpNeq64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neq64 x x)
+ // cond:
+ // result: (ConstBool [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConstBool)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Neq64 (Const64 <t> [c]) (Add64 (Const64 <t> [d]) x))
+ // cond:
+ // result: (Neq64 (Const64 <t> [c-d]) x)
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ t := v.Args[0].Type
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpAdd64 {
+ break
+ }
+ if v.Args[1].Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].Args[0].Type != v.Args[0].Type {
+ break
+ }
+ d := v.Args[1].Args[0].AuxInt
+ x := v.Args[1].Args[1]
+ v.reset(OpNeq64)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c - d
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Neq64 x (Const64 <t> [c]))
+ // cond: x.Op != OpConst64
+ // result: (Neq64 (Const64 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst64) {
+ break
+ }
+ v.reset(OpNeq64)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Neq64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int64(c) != int64(d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int64(c) != int64(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpNeq8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Neq8 x x)
+ // cond:
+ // result: (ConstBool [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConstBool)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Neq8 (ConstBool [c]) (ConstBool [d]))
+ // cond:
+ // result: (ConstBool [b2i((int8(c) != 0) != (int8(d) != 0))])
+ for {
+ if v.Args[0].Op != OpConstBool {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConstBool {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i((int8(c) != 0) != (int8(d) != 0))
+ return true
+ }
+ // match: (Neq8 (ConstBool [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConstBool {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Neq8 (ConstBool [1]) x)
+ // cond:
+ // result: (Not x)
+ for {
+ if v.Args[0].Op != OpConstBool {
+ break
+ }
+ if v.Args[0].AuxInt != 1 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpNot)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Neq8 (Const8 <t> [c]) (Add8 (Const8 <t> [d]) x))
+ // cond:
+ // result: (Neq8 (Const8 <t> [c-d]) x)
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ t := v.Args[0].Type
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpAdd8 {
+ break
+ }
+ if v.Args[1].Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[1].Args[0].Type != v.Args[0].Type {
+ break
+ }
+ d := v.Args[1].Args[0].AuxInt
+ x := v.Args[1].Args[1]
+ v.reset(OpNeq8)
+ v0 := b.NewValue0(v.Line, OpConst8, t)
+ v0.AuxInt = c - d
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Neq8 x (Const8 <t> [c]))
+ // cond: x.Op != OpConst8
+ // result: (Neq8 (Const8 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst8) {
+ break
+ }
+ v.reset(OpNeq8)
+ v0 := b.NewValue0(v.Line, OpConst8, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Neq8 x (ConstBool <t> [c]))
+ // cond: x.Op != OpConstBool
+ // result: (Neq8 (ConstBool <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConstBool {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConstBool) {
+ break
+ }
+ v.reset(OpNeq8)
+ v0 := b.NewValue0(v.Line, OpConstBool, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Neq8 (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (ConstBool [b2i(int8(c) != int8(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConstBool)
+ v.AuxInt = b2i(int8(c) != int8(d))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpNeqInter(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NeqInter x y)
+ // cond:
+ // result: (NeqPtr (ITab x) (ITab y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpNeqPtr)
+ v0 := b.NewValue0(v.Line, OpITab, config.fe.TypeBytePtr())
+ v0.AddArg(x)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpITab, config.fe.TypeBytePtr())
+ v1.AddArg(y)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpNeqPtr(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NeqPtr p (ConstNil))
+ // cond:
+ // result: (IsNonNil p)
+ for {
+ p := v.Args[0]
+ if v.Args[1].Op != OpConstNil {
+ break
+ }
+ v.reset(OpIsNonNil)
+ v.AddArg(p)
+ return true
+ }
+ // match: (NeqPtr (ConstNil) p)
+ // cond:
+ // result: (IsNonNil p)
+ for {
+ if v.Args[0].Op != OpConstNil {
+ break
+ }
+ p := v.Args[1]
+ v.reset(OpIsNonNil)
+ v.AddArg(p)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpNeqSlice(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (NeqSlice x y)
+ // cond:
+ // result: (NeqPtr (SlicePtr x) (SlicePtr y))
+ for {
+ x := v.Args[0]
+ y := v.Args[1]
+ v.reset(OpNeqPtr)
+ v0 := b.NewValue0(v.Line, OpSlicePtr, config.fe.TypeBytePtr())
+ v0.AddArg(x)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpSlicePtr, config.fe.TypeBytePtr())
+ v1.AddArg(y)
+ v.AddArg(v1)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpOr16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Or16 x (Const16 <t> [c]))
+ // cond: x.Op != OpConst16
+ // result: (Or16 (Const16 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst16) {
+ break
+ }
+ v.reset(OpOr16)
+ v0 := b.NewValue0(v.Line, OpConst16, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or16 x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or16 (Const16 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or16 (Const16 [-1]) _)
+ // cond:
+ // result: (Const16 [-1])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[0].AuxInt != -1 {
+ break
+ }
+ v.reset(OpConst16)
+ v.AuxInt = -1
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpOr32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Or32 x (Const32 <t> [c]))
+ // cond: x.Op != OpConst32
+ // result: (Or32 (Const32 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst32) {
+ break
+ }
+ v.reset(OpOr32)
+ v0 := b.NewValue0(v.Line, OpConst32, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or32 x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or32 (Const32 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or32 (Const32 [-1]) _)
+ // cond:
+ // result: (Const32 [-1])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[0].AuxInt != -1 {
+ break
+ }
+ v.reset(OpConst32)
+ v.AuxInt = -1
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpOr64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Or64 x (Const64 <t> [c]))
+ // cond: x.Op != OpConst64
+ // result: (Or64 (Const64 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst64) {
+ break
+ }
+ v.reset(OpOr64)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or64 x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or64 (Const64 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or64 (Const64 [-1]) _)
+ // cond:
+ // result: (Const64 [-1])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != -1 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = -1
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpOr8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Or8 x (Const8 <t> [c]))
+ // cond: x.Op != OpConst8
+ // result: (Or8 (Const8 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst8) {
+ break
+ }
+ v.reset(OpOr8)
+ v0 := b.NewValue0(v.Line, OpConst8, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or8 x x)
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or8 (Const8 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Or8 (Const8 [-1]) _)
+ // cond:
+ // result: (Const8 [-1])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[0].AuxInt != -1 {
+ break
+ }
+ v.reset(OpConst8)
+ v.AuxInt = -1
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpPhi(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Phi (Const8 [c]) (Const8 [d]))
+ // cond: int8(c) == int8(d)
+ // result: (Const8 [c])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if len(v.Args) != 2 {
+ break
+ }
+ if !(int8(c) == int8(d)) {
+ break
+ }
+ v.reset(OpConst8)
+ v.AuxInt = c
+ return true
+ }
+ // match: (Phi (Const16 [c]) (Const16 [d]))
+ // cond: int16(c) == int16(d)
+ // result: (Const16 [c])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if len(v.Args) != 2 {
+ break
+ }
+ if !(int16(c) == int16(d)) {
+ break
+ }
+ v.reset(OpConst16)
+ v.AuxInt = c
+ return true
+ }
+ // match: (Phi (Const32 [c]) (Const32 [d]))
+ // cond: int32(c) == int32(d)
+ // result: (Const32 [c])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if len(v.Args) != 2 {
+ break
+ }
+ if !(int32(c) == int32(d)) {
+ break
+ }
+ v.reset(OpConst32)
+ v.AuxInt = c
+ return true
+ }
+ // match: (Phi (Const64 [c]) (Const64 [c]))
+ // cond:
+ // result: (Const64 [c])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != v.Args[0].AuxInt {
+ break
+ }
+ if len(v.Args) != 2 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = c
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpPtrIndex(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (PtrIndex <t> ptr idx)
+ // cond: config.PtrSize == 4
+ // result: (AddPtr ptr (Mul32 <config.fe.TypeInt()> idx (Const32 <config.fe.TypeInt()> [t.Elem().Size()])))
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ idx := v.Args[1]
+ if !(config.PtrSize == 4) {
+ break
+ }
+ v.reset(OpAddPtr)
+ v.AddArg(ptr)
+ v0 := b.NewValue0(v.Line, OpMul32, config.fe.TypeInt())
+ v0.AddArg(idx)
+ v1 := b.NewValue0(v.Line, OpConst32, config.fe.TypeInt())
+ v1.AuxInt = t.Elem().Size()
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ // match: (PtrIndex <t> ptr idx)
+ // cond: config.PtrSize == 8
+ // result: (AddPtr ptr (Mul64 <config.fe.TypeInt()> idx (Const64 <config.fe.TypeInt()> [t.Elem().Size()])))
+ for {
+ t := v.Type
+ ptr := v.Args[0]
+ idx := v.Args[1]
+ if !(config.PtrSize == 8) {
+ break
+ }
+ v.reset(OpAddPtr)
+ v.AddArg(ptr)
+ v0 := b.NewValue0(v.Line, OpMul64, config.fe.TypeInt())
+ v0.AddArg(idx)
+ v1 := b.NewValue0(v.Line, OpConst64, config.fe.TypeInt())
+ v1.AuxInt = t.Elem().Size()
+ v0.AddArg(v1)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh16Ux16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16Ux16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Rsh16Ux64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh16Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh16Ux32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16Ux32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Rsh16Ux64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh16Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh16Ux64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16Ux64 (Const16 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const16 [int64(uint16(c) >> uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst16)
+ v.AuxInt = int64(uint16(c) >> uint64(d))
+ return true
+ }
+ // match: (Rsh16Ux64 (Const16 [0]) _)
+ // cond:
+ // result: (Const16 [0])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst16)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh16Ux64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Rsh16Ux64 _ (Const64 [c]))
+ // cond: uint64(c) >= 16
+ // result: (Const16 [0])
+ for {
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(uint64(c) >= 16) {
+ break
+ }
+ v.reset(OpConst16)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh16Ux64 <t> (Rsh16Ux64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Rsh16Ux64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpRsh16Ux64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpRsh16Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh16Ux8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16Ux8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Rsh16Ux64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh16Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh16x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16x16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Rsh16x64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh16x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh16x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16x32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Rsh16x64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh16x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh16x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16x64 (Const16 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const16 [int64(int16(c) >> uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst16)
+ v.AuxInt = int64(int16(c) >> uint64(d))
+ return true
+ }
+ // match: (Rsh16x64 (Const16 [0]) _)
+ // cond:
+ // result: (Const16 [0])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst16)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh16x64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Rsh16x64 <t> (Rsh16x64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Rsh16x64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpRsh16x64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpRsh16x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh16x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh16x8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Rsh16x64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh16x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh32Ux16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32Ux16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Rsh32Ux64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh32Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh32Ux32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32Ux32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Rsh32Ux64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh32Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh32Ux64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32Ux64 (Const32 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const32 [int64(uint32(c) >> uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst32)
+ v.AuxInt = int64(uint32(c) >> uint64(d))
+ return true
+ }
+ // match: (Rsh32Ux64 (Const32 [0]) _)
+ // cond:
+ // result: (Const32 [0])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst32)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh32Ux64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Rsh32Ux64 _ (Const64 [c]))
+ // cond: uint64(c) >= 32
+ // result: (Const32 [0])
+ for {
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(uint64(c) >= 32) {
+ break
+ }
+ v.reset(OpConst32)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh32Ux64 <t> (Rsh32Ux64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Rsh32Ux64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpRsh32Ux64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpRsh32Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh32Ux8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32Ux8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Rsh32Ux64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh32Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh32x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32x16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Rsh32x64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh32x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh32x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32x32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Rsh32x64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh32x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh32x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32x64 (Const32 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const32 [int64(int32(c) >> uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst32)
+ v.AuxInt = int64(int32(c) >> uint64(d))
+ return true
+ }
+ // match: (Rsh32x64 (Const32 [0]) _)
+ // cond:
+ // result: (Const32 [0])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst32)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh32x64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Rsh32x64 <t> (Rsh32x64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Rsh32x64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpRsh32x64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpRsh32x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh32x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh32x8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Rsh32x64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh32x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh64Ux16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64Ux16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Rsh64Ux64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh64Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Rsh64Ux16 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh64Ux32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64Ux32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Rsh64Ux64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh64Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Rsh64Ux32 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh64Ux64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64Ux64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const64 [int64(uint64(c) >> uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst64)
+ v.AuxInt = int64(uint64(c) >> uint64(d))
+ return true
+ }
+ // match: (Rsh64Ux64 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh64Ux64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Rsh64Ux64 _ (Const64 [c]))
+ // cond: uint64(c) >= 64
+ // result: (Const64 [0])
+ for {
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(uint64(c) >= 64) {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh64Ux64 <t> (Rsh64Ux64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Rsh64Ux64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpRsh64Ux64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpRsh64Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh64Ux8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64Ux8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Rsh64Ux64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh64Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Rsh64Ux8 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh64x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64x16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Rsh64x64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh64x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Rsh64x16 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh64x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64x32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Rsh64x64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh64x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Rsh64x32 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh64x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64x64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const64 [c >> uint64(d)])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst64)
+ v.AuxInt = c >> uint64(d)
+ return true
+ }
+ // match: (Rsh64x64 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh64x64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Rsh64x64 <t> (Rsh64x64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Rsh64x64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpRsh64x64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpRsh64x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh64x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh64x8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Rsh64x64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh64x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Rsh64x8 (Const64 [0]) _)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh8Ux16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8Ux16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Rsh8Ux64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh8Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh8Ux32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8Ux32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Rsh8Ux64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh8Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh8Ux64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8Ux64 (Const8 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const8 [int64(uint8(c) >> uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst8)
+ v.AuxInt = int64(uint8(c) >> uint64(d))
+ return true
+ }
+ // match: (Rsh8Ux64 (Const8 [0]) _)
+ // cond:
+ // result: (Const8 [0])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst8)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh8Ux64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Rsh8Ux64 _ (Const64 [c]))
+ // cond: uint64(c) >= 8
+ // result: (Const8 [0])
+ for {
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ if !(uint64(c) >= 8) {
+ break
+ }
+ v.reset(OpConst8)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh8Ux64 <t> (Rsh8Ux64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Rsh8Ux64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpRsh8Ux64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpRsh8Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh8Ux8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8Ux8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Rsh8Ux64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh8Ux64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh8x16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8x16 <t> x (Const16 [c]))
+ // cond:
+ // result: (Rsh8x64 x (Const64 <t> [int64(uint16(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh8x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint16(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh8x32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8x32 <t> x (Const32 [c]))
+ // cond:
+ // result: (Rsh8x64 x (Const64 <t> [int64(uint32(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh8x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint32(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh8x64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8x64 (Const8 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const8 [int64(int8(c) >> uint64(d))])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst8)
+ v.AuxInt = int64(int8(c) >> uint64(d))
+ return true
+ }
+ // match: (Rsh8x64 (Const8 [0]) _)
+ // cond:
+ // result: (Const8 [0])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ v.reset(OpConst8)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Rsh8x64 x (Const64 [0]))
+ // cond:
+ // result: x
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ if v.Args[1].AuxInt != 0 {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (Rsh8x64 <t> (Rsh8x64 x (Const64 [c])) (Const64 [d]))
+ // cond: !uaddOvf(c,d)
+ // result: (Rsh8x64 x (Const64 <t> [c+d]))
+ for {
+ t := v.Type
+ if v.Args[0].Op != OpRsh8x64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ if v.Args[0].Args[1].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].Args[1].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ if !(!uaddOvf(c, d)) {
+ break
+ }
+ v.reset(OpRsh8x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c + d
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpRsh8x8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Rsh8x8 <t> x (Const8 [c]))
+ // cond:
+ // result: (Rsh8x64 x (Const64 <t> [int64(uint8(c))]))
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ c := v.Args[1].AuxInt
+ v.reset(OpRsh8x64)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = int64(uint8(c))
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpSliceCap(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SliceCap (SliceMake _ _ cap))
+ // cond:
+ // result: cap
+ for {
+ if v.Args[0].Op != OpSliceMake {
+ break
+ }
+ cap := v.Args[0].Args[2]
+ v.reset(OpCopy)
+ v.Type = cap.Type
+ v.AddArg(cap)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpSliceLen(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SliceLen (SliceMake _ len _))
+ // cond:
+ // result: len
+ for {
+ if v.Args[0].Op != OpSliceMake {
+ break
+ }
+ len := v.Args[0].Args[1]
+ v.reset(OpCopy)
+ v.Type = len.Type
+ v.AddArg(len)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpSlicePtr(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (SlicePtr (SliceMake ptr _ _ ))
+ // cond:
+ // result: ptr
+ for {
+ if v.Args[0].Op != OpSliceMake {
+ break
+ }
+ ptr := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = ptr.Type
+ v.AddArg(ptr)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpStore(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Store _ (StructMake0) mem)
+ // cond:
+ // result: mem
+ for {
+ if v.Args[1].Op != OpStructMake0 {
+ break
+ }
+ mem := v.Args[2]
+ v.reset(OpCopy)
+ v.Type = mem.Type
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Store dst (StructMake1 <t> f0) mem)
+ // cond:
+ // result: (Store [t.FieldType(0).Size()] dst f0 mem)
+ for {
+ dst := v.Args[0]
+ if v.Args[1].Op != OpStructMake1 {
+ break
+ }
+ t := v.Args[1].Type
+ f0 := v.Args[1].Args[0]
+ mem := v.Args[2]
+ v.reset(OpStore)
+ v.AuxInt = t.FieldType(0).Size()
+ v.AddArg(dst)
+ v.AddArg(f0)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Store dst (StructMake2 <t> f0 f1) mem)
+ // cond:
+ // result: (Store [t.FieldType(1).Size()] (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] dst) f1 (Store [t.FieldType(0).Size()] dst f0 mem))
+ for {
+ dst := v.Args[0]
+ if v.Args[1].Op != OpStructMake2 {
+ break
+ }
+ t := v.Args[1].Type
+ f0 := v.Args[1].Args[0]
+ f1 := v.Args[1].Args[1]
+ mem := v.Args[2]
+ v.reset(OpStore)
+ v.AuxInt = t.FieldType(1).Size()
+ v0 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(1).PtrTo())
+ v0.AuxInt = t.FieldOff(1)
+ v0.AddArg(dst)
+ v.AddArg(v0)
+ v.AddArg(f1)
+ v1 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v1.AuxInt = t.FieldType(0).Size()
+ v1.AddArg(dst)
+ v1.AddArg(f0)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Store dst (StructMake3 <t> f0 f1 f2) mem)
+ // cond:
+ // result: (Store [t.FieldType(2).Size()] (OffPtr <t.FieldType(2).PtrTo()> [t.FieldOff(2)] dst) f2 (Store [t.FieldType(1).Size()] (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] dst) f1 (Store [t.FieldType(0).Size()] dst f0 mem)))
+ for {
+ dst := v.Args[0]
+ if v.Args[1].Op != OpStructMake3 {
+ break
+ }
+ t := v.Args[1].Type
+ f0 := v.Args[1].Args[0]
+ f1 := v.Args[1].Args[1]
+ f2 := v.Args[1].Args[2]
+ mem := v.Args[2]
+ v.reset(OpStore)
+ v.AuxInt = t.FieldType(2).Size()
+ v0 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(2).PtrTo())
+ v0.AuxInt = t.FieldOff(2)
+ v0.AddArg(dst)
+ v.AddArg(v0)
+ v.AddArg(f2)
+ v1 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v1.AuxInt = t.FieldType(1).Size()
+ v2 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(1).PtrTo())
+ v2.AuxInt = t.FieldOff(1)
+ v2.AddArg(dst)
+ v1.AddArg(v2)
+ v1.AddArg(f1)
+ v3 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v3.AuxInt = t.FieldType(0).Size()
+ v3.AddArg(dst)
+ v3.AddArg(f0)
+ v3.AddArg(mem)
+ v1.AddArg(v3)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Store dst (StructMake4 <t> f0 f1 f2 f3) mem)
+ // cond:
+ // result: (Store [t.FieldType(3).Size()] (OffPtr <t.FieldType(3).PtrTo()> [t.FieldOff(3)] dst) f3 (Store [t.FieldType(2).Size()] (OffPtr <t.FieldType(2).PtrTo()> [t.FieldOff(2)] dst) f2 (Store [t.FieldType(1).Size()] (OffPtr <t.FieldType(1).PtrTo()> [t.FieldOff(1)] dst) f1 (Store [t.FieldType(0).Size()] dst f0 mem))))
+ for {
+ dst := v.Args[0]
+ if v.Args[1].Op != OpStructMake4 {
+ break
+ }
+ t := v.Args[1].Type
+ f0 := v.Args[1].Args[0]
+ f1 := v.Args[1].Args[1]
+ f2 := v.Args[1].Args[2]
+ f3 := v.Args[1].Args[3]
+ mem := v.Args[2]
+ v.reset(OpStore)
+ v.AuxInt = t.FieldType(3).Size()
+ v0 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(3).PtrTo())
+ v0.AuxInt = t.FieldOff(3)
+ v0.AddArg(dst)
+ v.AddArg(v0)
+ v.AddArg(f3)
+ v1 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v1.AuxInt = t.FieldType(2).Size()
+ v2 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(2).PtrTo())
+ v2.AuxInt = t.FieldOff(2)
+ v2.AddArg(dst)
+ v1.AddArg(v2)
+ v1.AddArg(f2)
+ v3 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v3.AuxInt = t.FieldType(1).Size()
+ v4 := b.NewValue0(v.Line, OpOffPtr, t.FieldType(1).PtrTo())
+ v4.AuxInt = t.FieldOff(1)
+ v4.AddArg(dst)
+ v3.AddArg(v4)
+ v3.AddArg(f1)
+ v5 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v5.AuxInt = t.FieldType(0).Size()
+ v5.AddArg(dst)
+ v5.AddArg(f0)
+ v5.AddArg(mem)
+ v3.AddArg(v5)
+ v1.AddArg(v3)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Store [8] dst (ComplexMake real imag) mem)
+ // cond:
+ // result: (Store [4] (OffPtr <config.fe.TypeFloat32().PtrTo()> [4] dst) imag (Store [4] dst real mem))
+ for {
+ if v.AuxInt != 8 {
+ break
+ }
+ dst := v.Args[0]
+ if v.Args[1].Op != OpComplexMake {
+ break
+ }
+ real := v.Args[1].Args[0]
+ imag := v.Args[1].Args[1]
+ mem := v.Args[2]
+ v.reset(OpStore)
+ v.AuxInt = 4
+ v0 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeFloat32().PtrTo())
+ v0.AuxInt = 4
+ v0.AddArg(dst)
+ v.AddArg(v0)
+ v.AddArg(imag)
+ v1 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v1.AuxInt = 4
+ v1.AddArg(dst)
+ v1.AddArg(real)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Store [16] dst (ComplexMake real imag) mem)
+ // cond:
+ // result: (Store [8] (OffPtr <config.fe.TypeFloat64().PtrTo()> [8] dst) imag (Store [8] dst real mem))
+ for {
+ if v.AuxInt != 16 {
+ break
+ }
+ dst := v.Args[0]
+ if v.Args[1].Op != OpComplexMake {
+ break
+ }
+ real := v.Args[1].Args[0]
+ imag := v.Args[1].Args[1]
+ mem := v.Args[2]
+ v.reset(OpStore)
+ v.AuxInt = 8
+ v0 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeFloat64().PtrTo())
+ v0.AuxInt = 8
+ v0.AddArg(dst)
+ v.AddArg(v0)
+ v.AddArg(imag)
+ v1 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v1.AuxInt = 8
+ v1.AddArg(dst)
+ v1.AddArg(real)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Store [2*config.PtrSize] dst (StringMake ptr len) mem)
+ // cond:
+ // result: (Store [config.PtrSize] (OffPtr <config.fe.TypeInt().PtrTo()> [config.PtrSize] dst) len (Store [config.PtrSize] dst ptr mem))
+ for {
+ if v.AuxInt != 2*config.PtrSize {
+ break
+ }
+ dst := v.Args[0]
+ if v.Args[1].Op != OpStringMake {
+ break
+ }
+ ptr := v.Args[1].Args[0]
+ len := v.Args[1].Args[1]
+ mem := v.Args[2]
+ v.reset(OpStore)
+ v.AuxInt = config.PtrSize
+ v0 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeInt().PtrTo())
+ v0.AuxInt = config.PtrSize
+ v0.AddArg(dst)
+ v.AddArg(v0)
+ v.AddArg(len)
+ v1 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v1.AuxInt = config.PtrSize
+ v1.AddArg(dst)
+ v1.AddArg(ptr)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Store [3*config.PtrSize] dst (SliceMake ptr len cap) mem)
+ // cond:
+ // result: (Store [config.PtrSize] (OffPtr <config.fe.TypeInt().PtrTo()> [2*config.PtrSize] dst) cap (Store [config.PtrSize] (OffPtr <config.fe.TypeInt().PtrTo()> [config.PtrSize] dst) len (Store [config.PtrSize] dst ptr mem)))
+ for {
+ if v.AuxInt != 3*config.PtrSize {
+ break
+ }
+ dst := v.Args[0]
+ if v.Args[1].Op != OpSliceMake {
+ break
+ }
+ ptr := v.Args[1].Args[0]
+ len := v.Args[1].Args[1]
+ cap := v.Args[1].Args[2]
+ mem := v.Args[2]
+ v.reset(OpStore)
+ v.AuxInt = config.PtrSize
+ v0 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeInt().PtrTo())
+ v0.AuxInt = 2 * config.PtrSize
+ v0.AddArg(dst)
+ v.AddArg(v0)
+ v.AddArg(cap)
+ v1 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v1.AuxInt = config.PtrSize
+ v2 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeInt().PtrTo())
+ v2.AuxInt = config.PtrSize
+ v2.AddArg(dst)
+ v1.AddArg(v2)
+ v1.AddArg(len)
+ v3 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v3.AuxInt = config.PtrSize
+ v3.AddArg(dst)
+ v3.AddArg(ptr)
+ v3.AddArg(mem)
+ v1.AddArg(v3)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Store [2*config.PtrSize] dst (IMake itab data) mem)
+ // cond:
+ // result: (Store [config.PtrSize] (OffPtr <config.fe.TypeBytePtr().PtrTo()> [config.PtrSize] dst) data (Store [config.PtrSize] dst itab mem))
+ for {
+ if v.AuxInt != 2*config.PtrSize {
+ break
+ }
+ dst := v.Args[0]
+ if v.Args[1].Op != OpIMake {
+ break
+ }
+ itab := v.Args[1].Args[0]
+ data := v.Args[1].Args[1]
+ mem := v.Args[2]
+ v.reset(OpStore)
+ v.AuxInt = config.PtrSize
+ v0 := b.NewValue0(v.Line, OpOffPtr, config.fe.TypeBytePtr().PtrTo())
+ v0.AuxInt = config.PtrSize
+ v0.AddArg(dst)
+ v.AddArg(v0)
+ v.AddArg(data)
+ v1 := b.NewValue0(v.Line, OpStore, TypeMem)
+ v1.AuxInt = config.PtrSize
+ v1.AddArg(dst)
+ v1.AddArg(itab)
+ v1.AddArg(mem)
+ v.AddArg(v1)
+ return true
+ }
+ // match: (Store [size] dst (Load <t> src mem) mem)
+ // cond: !config.fe.CanSSA(t)
+ // result: (Move [size] dst src mem)
+ for {
+ size := v.AuxInt
+ dst := v.Args[0]
+ if v.Args[1].Op != OpLoad {
+ break
+ }
+ t := v.Args[1].Type
+ src := v.Args[1].Args[0]
+ mem := v.Args[1].Args[1]
+ if v.Args[2] != mem {
+ break
+ }
+ if !(!config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpMove)
+ v.AuxInt = size
+ v.AddArg(dst)
+ v.AddArg(src)
+ v.AddArg(mem)
+ return true
+ }
+ // match: (Store [size] dst (Load <t> src mem) (VarDef {x} mem))
+ // cond: !config.fe.CanSSA(t)
+ // result: (Move [size] dst src (VarDef {x} mem))
+ for {
+ size := v.AuxInt
+ dst := v.Args[0]
+ if v.Args[1].Op != OpLoad {
+ break
+ }
+ t := v.Args[1].Type
+ src := v.Args[1].Args[0]
+ mem := v.Args[1].Args[1]
+ if v.Args[2].Op != OpVarDef {
+ break
+ }
+ x := v.Args[2].Aux
+ if v.Args[2].Args[0] != mem {
+ break
+ }
+ if !(!config.fe.CanSSA(t)) {
+ break
+ }
+ v.reset(OpMove)
+ v.AuxInt = size
+ v.AddArg(dst)
+ v.AddArg(src)
+ v0 := b.NewValue0(v.Line, OpVarDef, TypeMem)
+ v0.Aux = x
+ v0.AddArg(mem)
+ v.AddArg(v0)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpStringLen(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (StringLen (StringMake _ len))
+ // cond:
+ // result: len
+ for {
+ if v.Args[0].Op != OpStringMake {
+ break
+ }
+ len := v.Args[0].Args[1]
+ v.reset(OpCopy)
+ v.Type = len.Type
+ v.AddArg(len)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpStringPtr(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (StringPtr (StringMake ptr _))
+ // cond:
+ // result: ptr
+ for {
+ if v.Args[0].Op != OpStringMake {
+ break
+ }
+ ptr := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = ptr.Type
+ v.AddArg(ptr)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpStructSelect(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (StructSelect (StructMake1 x))
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpStructMake1 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (StructSelect [0] (StructMake2 x _))
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 0 {
+ break
+ }
+ if v.Args[0].Op != OpStructMake2 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (StructSelect [1] (StructMake2 _ x))
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 1 {
+ break
+ }
+ if v.Args[0].Op != OpStructMake2 {
+ break
+ }
+ x := v.Args[0].Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (StructSelect [0] (StructMake3 x _ _))
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 0 {
+ break
+ }
+ if v.Args[0].Op != OpStructMake3 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (StructSelect [1] (StructMake3 _ x _))
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 1 {
+ break
+ }
+ if v.Args[0].Op != OpStructMake3 {
+ break
+ }
+ x := v.Args[0].Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (StructSelect [2] (StructMake3 _ _ x))
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 2 {
+ break
+ }
+ if v.Args[0].Op != OpStructMake3 {
+ break
+ }
+ x := v.Args[0].Args[2]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (StructSelect [0] (StructMake4 x _ _ _))
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 0 {
+ break
+ }
+ if v.Args[0].Op != OpStructMake4 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (StructSelect [1] (StructMake4 _ x _ _))
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 1 {
+ break
+ }
+ if v.Args[0].Op != OpStructMake4 {
+ break
+ }
+ x := v.Args[0].Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (StructSelect [2] (StructMake4 _ _ x _))
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 2 {
+ break
+ }
+ if v.Args[0].Op != OpStructMake4 {
+ break
+ }
+ x := v.Args[0].Args[2]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (StructSelect [3] (StructMake4 _ _ _ x))
+ // cond:
+ // result: x
+ for {
+ if v.AuxInt != 3 {
+ break
+ }
+ if v.Args[0].Op != OpStructMake4 {
+ break
+ }
+ x := v.Args[0].Args[3]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (StructSelect [i] (Load <t> ptr mem))
+ // cond: !config.fe.CanSSA(t)
+ // result: @v.Args[0].Block (Load <v.Type> (OffPtr <v.Type.PtrTo()> [t.FieldOff(i)] ptr) mem)
+ for {
+ i := v.AuxInt
+ if v.Args[0].Op != OpLoad {
+ break
+ }
+ t := v.Args[0].Type
+ ptr := v.Args[0].Args[0]
+ mem := v.Args[0].Args[1]
+ if !(!config.fe.CanSSA(t)) {
+ break
+ }
+ b = v.Args[0].Block
+ v0 := b.NewValue0(v.Line, OpLoad, v.Type)
+ v.reset(OpCopy)
+ v.AddArg(v0)
+ v1 := b.NewValue0(v.Line, OpOffPtr, v.Type.PtrTo())
+ v1.AuxInt = t.FieldOff(i)
+ v1.AddArg(ptr)
+ v0.AddArg(v1)
+ v0.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpSub16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Sub16 (Const16 [c]) (Const16 [d]))
+ // cond:
+ // result: (Const16 [c-d])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst16)
+ v.AuxInt = c - d
+ return true
+ }
+ // match: (Sub16 x (Const16 <t> [c]))
+ // cond: x.Op != OpConst16
+ // result: (Add16 (Const16 <t> [-c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst16) {
+ break
+ }
+ v.reset(OpAdd16)
+ v0 := b.NewValue0(v.Line, OpConst16, t)
+ v0.AuxInt = -c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Sub16 x x)
+ // cond:
+ // result: (Const16 [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConst16)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Sub16 (Add16 x y) x)
+ // cond:
+ // result: y
+ for {
+ if v.Args[0].Op != OpAdd16 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = y.Type
+ v.AddArg(y)
+ return true
+ }
+ // match: (Sub16 (Add16 x y) y)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpAdd16 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.Args[1] != y {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpSub32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Sub32 (Const32 [c]) (Const32 [d]))
+ // cond:
+ // result: (Const32 [c-d])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst32)
+ v.AuxInt = c - d
+ return true
+ }
+ // match: (Sub32 x (Const32 <t> [c]))
+ // cond: x.Op != OpConst32
+ // result: (Add32 (Const32 <t> [-c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst32) {
+ break
+ }
+ v.reset(OpAdd32)
+ v0 := b.NewValue0(v.Line, OpConst32, t)
+ v0.AuxInt = -c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Sub32 x x)
+ // cond:
+ // result: (Const32 [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConst32)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Sub32 (Add32 x y) x)
+ // cond:
+ // result: y
+ for {
+ if v.Args[0].Op != OpAdd32 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = y.Type
+ v.AddArg(y)
+ return true
+ }
+ // match: (Sub32 (Add32 x y) y)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpAdd32 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.Args[1] != y {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpSub64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Sub64 (Const64 [c]) (Const64 [d]))
+ // cond:
+ // result: (Const64 [c-d])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst64)
+ v.AuxInt = c - d
+ return true
+ }
+ // match: (Sub64 x (Const64 <t> [c]))
+ // cond: x.Op != OpConst64
+ // result: (Add64 (Const64 <t> [-c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst64) {
+ break
+ }
+ v.reset(OpAdd64)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = -c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Sub64 x x)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Sub64 (Add64 x y) x)
+ // cond:
+ // result: y
+ for {
+ if v.Args[0].Op != OpAdd64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = y.Type
+ v.AddArg(y)
+ return true
+ }
+ // match: (Sub64 (Add64 x y) y)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpAdd64 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.Args[1] != y {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpSub8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Sub8 (Const8 [c]) (Const8 [d]))
+ // cond:
+ // result: (Const8 [c-d])
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ d := v.Args[1].AuxInt
+ v.reset(OpConst8)
+ v.AuxInt = c - d
+ return true
+ }
+ // match: (Sub8 x (Const8 <t> [c]))
+ // cond: x.Op != OpConst8
+ // result: (Add8 (Const8 <t> [-c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst8) {
+ break
+ }
+ v.reset(OpAdd8)
+ v0 := b.NewValue0(v.Line, OpConst8, t)
+ v0.AuxInt = -c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Sub8 x x)
+ // cond:
+ // result: (Const8 [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConst8)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Sub8 (Add8 x y) x)
+ // cond:
+ // result: y
+ for {
+ if v.Args[0].Op != OpAdd8 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = y.Type
+ v.AddArg(y)
+ return true
+ }
+ // match: (Sub8 (Add8 x y) y)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpAdd8 {
+ break
+ }
+ x := v.Args[0].Args[0]
+ y := v.Args[0].Args[1]
+ if v.Args[1] != y {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpTrunc16to8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc16to8 (Const16 [c]))
+ // cond:
+ // result: (Const8 [int64(int8(c))])
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpConst8)
+ v.AuxInt = int64(int8(c))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpTrunc32to16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc32to16 (Const32 [c]))
+ // cond:
+ // result: (Const16 [int64(int16(c))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpConst16)
+ v.AuxInt = int64(int16(c))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpTrunc32to8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc32to8 (Const32 [c]))
+ // cond:
+ // result: (Const8 [int64(int8(c))])
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpConst8)
+ v.AuxInt = int64(int8(c))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpTrunc64to16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc64to16 (Const64 [c]))
+ // cond:
+ // result: (Const16 [int64(int16(c))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpConst16)
+ v.AuxInt = int64(int16(c))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpTrunc64to32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc64to32 (Const64 [c]))
+ // cond:
+ // result: (Const32 [int64(int32(c))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpConst32)
+ v.AuxInt = int64(int32(c))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpTrunc64to8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Trunc64to8 (Const64 [c]))
+ // cond:
+ // result: (Const8 [int64(int8(c))])
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ c := v.Args[0].AuxInt
+ v.reset(OpConst8)
+ v.AuxInt = int64(int8(c))
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpXor16(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Xor16 x (Const16 <t> [c]))
+ // cond: x.Op != OpConst16
+ // result: (Xor16 (Const16 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst16 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst16) {
+ break
+ }
+ v.reset(OpXor16)
+ v0 := b.NewValue0(v.Line, OpConst16, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Xor16 x x)
+ // cond:
+ // result: (Const16 [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConst16)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Xor16 (Const16 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst16 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpXor32(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Xor32 x (Const32 <t> [c]))
+ // cond: x.Op != OpConst32
+ // result: (Xor32 (Const32 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst32 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst32) {
+ break
+ }
+ v.reset(OpXor32)
+ v0 := b.NewValue0(v.Line, OpConst32, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Xor32 x x)
+ // cond:
+ // result: (Const32 [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConst32)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Xor32 (Const32 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst32 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpXor64(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Xor64 x (Const64 <t> [c]))
+ // cond: x.Op != OpConst64
+ // result: (Xor64 (Const64 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst64 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst64) {
+ break
+ }
+ v.reset(OpXor64)
+ v0 := b.NewValue0(v.Line, OpConst64, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Xor64 x x)
+ // cond:
+ // result: (Const64 [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConst64)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Xor64 (Const64 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst64 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteValuegeneric_OpXor8(v *Value, config *Config) bool {
+ b := v.Block
+ _ = b
+ // match: (Xor8 x (Const8 <t> [c]))
+ // cond: x.Op != OpConst8
+ // result: (Xor8 (Const8 <t> [c]) x)
+ for {
+ x := v.Args[0]
+ if v.Args[1].Op != OpConst8 {
+ break
+ }
+ t := v.Args[1].Type
+ c := v.Args[1].AuxInt
+ if !(x.Op != OpConst8) {
+ break
+ }
+ v.reset(OpXor8)
+ v0 := b.NewValue0(v.Line, OpConst8, t)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ v.AddArg(x)
+ return true
+ }
+ // match: (Xor8 x x)
+ // cond:
+ // result: (Const8 [0])
+ for {
+ x := v.Args[0]
+ if v.Args[1] != x {
+ break
+ }
+ v.reset(OpConst8)
+ v.AuxInt = 0
+ return true
+ }
+ // match: (Xor8 (Const8 [0]) x)
+ // cond:
+ // result: x
+ for {
+ if v.Args[0].Op != OpConst8 {
+ break
+ }
+ if v.Args[0].AuxInt != 0 {
+ break
+ }
+ x := v.Args[1]
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ return false
+}
+func rewriteBlockgeneric(b *Block) bool {
+ switch b.Kind {
+ case BlockCheck:
+ // match: (Check (NilCheck (GetG _) _) next)
+ // cond:
+ // result: (Plain nil next)
+ for {
+ v := b.Control
+ if v.Op != OpNilCheck {
+ break
+ }
+ if v.Args[0].Op != OpGetG {
+ break
+ }
+ next := b.Succs[0]
+ b.Kind = BlockPlain
+ b.Control = nil
+ b.Succs[0] = next
+ b.Likely = BranchUnknown
+ return true
+ }
+ case BlockIf:
+ // match: (If (Not cond) yes no)
+ // cond:
+ // result: (If cond no yes)
+ for {
+ v := b.Control
+ if v.Op != OpNot {
+ break
+ }
+ cond := v.Args[0]
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ b.Kind = BlockIf
+ b.Control = cond
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ // match: (If (ConstBool [c]) yes no)
+ // cond: c == 1
+ // result: (First nil yes no)
+ for {
+ v := b.Control
+ if v.Op != OpConstBool {
+ break
+ }
+ c := v.AuxInt
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ if !(c == 1) {
+ break
+ }
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = yes
+ b.Succs[1] = no
+ return true
+ }
+ // match: (If (ConstBool [c]) yes no)
+ // cond: c == 0
+ // result: (First nil no yes)
+ for {
+ v := b.Control
+ if v.Op != OpConstBool {
+ break
+ }
+ c := v.AuxInt
+ yes := b.Succs[0]
+ no := b.Succs[1]
+ if !(c == 0) {
+ break
+ }
+ b.Kind = BlockFirst
+ b.Control = nil
+ b.Succs[0] = no
+ b.Succs[1] = yes
+ b.Likely *= -1
+ return true
+ }
+ }
+ return false
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+const (
+ ScorePhi = iota // towards top of block
+ ScoreVarDef
+ ScoreMemory
+ ScoreDefault
+ ScoreFlags
+ ScoreControl // towards bottom of block
+
+ ScoreCount // not a real score
+)
+
+// Schedule the Values in each Block. After this phase returns, the
+// order of b.Values matters and is the order in which those values
+// will appear in the assembly output. For now it generates a
+// reasonable, valid schedule using a priority queue. TODO(khr):
+// schedule smarter.
+func schedule(f *Func) {
+ // For each value, the number of times it is used in the block
+ // by values that have not been scheduled yet.
+ uses := make([]int, f.NumValues())
+
+ // "priority" for a value
+ score := make([]uint8, f.NumValues())
+
+ // scheduling order. We queue values in this list in reverse order.
+ var order []*Value
+
+ // priority queue of legally schedulable (0 unscheduled uses) values
+ var priq [ScoreCount][]*Value
+
+ // maps mem values to the next live memory value
+ nextMem := make([]*Value, f.NumValues())
+ // additional pretend arguments for each Value. Used to enforce load/store ordering.
+ additionalArgs := make([][]*Value, f.NumValues())
+
+ for _, b := range f.Blocks {
+ // Find store chain for block.
+ // Store chains for different blocks overwrite each other, so
+ // the calculated store chain is good only for this block.
+ for _, v := range b.Values {
+ if v.Op != OpPhi && v.Type.IsMemory() {
+ for _, w := range v.Args {
+ if w.Type.IsMemory() {
+ nextMem[w.ID] = v
+ }
+ }
+ }
+ }
+
+ // Compute uses.
+ for _, v := range b.Values {
+ if v.Op == OpPhi {
+ // If a value is used by a phi, it does not induce
+ // a scheduling edge because that use is from the
+ // previous iteration.
+ continue
+ }
+ for _, w := range v.Args {
+ if w.Block == b {
+ uses[w.ID]++
+ }
+ // Any load must come before the following store.
+ if v.Type.IsMemory() || !w.Type.IsMemory() {
+ continue // not a load
+ }
+ s := nextMem[w.ID]
+ if s == nil || s.Block != b {
+ continue
+ }
+ additionalArgs[s.ID] = append(additionalArgs[s.ID], v)
+ uses[v.ID]++
+ }
+ }
+ // Compute score. Larger numbers are scheduled closer to the end of the block.
+ for _, v := range b.Values {
+ switch {
+ case v.Op == OpAMD64LoweredGetClosurePtr:
+			// Score LoweredGetClosurePtr as early as possible to ensure that the
+			// context register is not stomped. LoweredGetClosurePtr should only appear
+ // in the entry block where there are no phi functions, so there is no
+ // conflict or ambiguity here.
+ if b != f.Entry {
+ f.Fatalf("LoweredGetClosurePtr appeared outside of entry block, b=%s", b.String())
+ }
+ score[v.ID] = ScorePhi
+ case v.Op == OpPhi:
+ // We want all the phis first.
+ score[v.ID] = ScorePhi
+ case v.Op == OpVarDef:
+ // We want all the vardefs next.
+ score[v.ID] = ScoreVarDef
+ case v.Type.IsMemory():
+ // Schedule stores as early as possible. This tends to
+ // reduce register pressure. It also helps make sure
+ // VARDEF ops are scheduled before the corresponding LEA.
+ score[v.ID] = ScoreMemory
+ case v.Type.IsFlags():
+ // Schedule flag register generation as late as possible.
+ // This makes sure that we only have one live flags
+ // value at a time.
+ score[v.ID] = ScoreFlags
+ default:
+ score[v.ID] = ScoreDefault
+ }
+ }
+ if b.Control != nil && b.Control.Op != OpPhi {
+ // Force the control value to be scheduled at the end,
+ // unless it is a phi value (which must be first).
+ score[b.Control.ID] = ScoreControl
+
+ // Schedule values dependent on the control value at the end.
+ // This reduces the number of register spills. We don't find
+ // all values that depend on the control, just values with a
+ // direct dependency. This is cheaper and in testing there
+ // was no difference in the number of spills.
+ for _, v := range b.Values {
+ if v.Op != OpPhi {
+ for _, a := range v.Args {
+ if a == b.Control {
+ score[v.ID] = ScoreControl
+ }
+ }
+ }
+ }
+ }
+
+ // Initialize priority queue with schedulable values.
+ for i := range priq {
+ priq[i] = priq[i][:0]
+ }
+ for _, v := range b.Values {
+ if uses[v.ID] == 0 {
+ s := score[v.ID]
+ priq[s] = append(priq[s], v)
+ }
+ }
+
+ // Schedule highest priority value, update use counts, repeat.
+ order = order[:0]
+ for {
+ // Find highest priority schedulable value.
+ var v *Value
+ for i := len(priq) - 1; i >= 0; i-- {
+ n := len(priq[i])
+ if n == 0 {
+ continue
+ }
+ v = priq[i][n-1]
+ priq[i] = priq[i][:n-1]
+ break
+ }
+ if v == nil {
+ break
+ }
+
+ // Add it to the schedule.
+ order = append(order, v)
+
+ // Update use counts of arguments.
+ for _, w := range v.Args {
+ if w.Block != b {
+ continue
+ }
+ uses[w.ID]--
+ if uses[w.ID] == 0 {
+ // All uses scheduled, w is now schedulable.
+ s := score[w.ID]
+ priq[s] = append(priq[s], w)
+ }
+ }
+ for _, w := range additionalArgs[v.ID] {
+ uses[w.ID]--
+ if uses[w.ID] == 0 {
+ // All uses scheduled, w is now schedulable.
+ s := score[w.ID]
+ priq[s] = append(priq[s], w)
+ }
+ }
+ }
+ if len(order) != len(b.Values) {
+ f.Fatalf("schedule does not include all values")
+ }
+ for i := 0; i < len(b.Values); i++ {
+ b.Values[i] = order[len(b.Values)-1-i]
+ }
+ }
+
+ f.scheduled = true
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import "testing"
+
+func TestSchedule(t *testing.T) {
+ c := testConfig(t)
+ cases := []fun{
+ Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem0", OpInitMem, TypeMem, 0, nil),
+ Valu("ptr", OpConst64, TypeInt64, 0xABCD, nil),
+ Valu("v", OpConst64, TypeInt64, 12, nil),
+ Valu("mem1", OpStore, TypeMem, 8, nil, "ptr", "v", "mem0"),
+ Valu("mem2", OpStore, TypeMem, 8, nil, "ptr", "v", "mem1"),
+			Valu("mem3", OpStore, TypeMem, 8, nil, "ptr", "sum", "mem2"),
+ Valu("l1", OpLoad, TypeInt64, 0, nil, "ptr", "mem1"),
+ Valu("l2", OpLoad, TypeInt64, 0, nil, "ptr", "mem2"),
+ Valu("sum", OpAdd64, TypeInt64, 0, nil, "l1", "l2"),
+ Goto("exit")),
+ Bloc("exit",
+ Exit("mem3"))),
+ }
+ for _, c := range cases {
+ schedule(c.f)
+ if !isSingleLiveMem(c.f) {
+ t.Error("single-live-mem restriction not enforced by schedule for func:")
+ printFunc(c.f)
+ }
+ }
+}
+
+func isSingleLiveMem(f *Func) bool {
+ for _, b := range f.Blocks {
+ var liveMem *Value
+ for _, v := range b.Values {
+ for _, w := range v.Args {
+ if w.Type.IsMemory() {
+ if liveMem == nil {
+ liveMem = w
+ continue
+ }
+ if w != liveMem {
+ return false
+ }
+ }
+ }
+ if v.Type.IsMemory() {
+ liveMem = v
+ }
+ }
+ }
+ return true
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "testing"
+)
+
+func TestShiftConstAMD64(t *testing.T) {
+ c := testConfig(t)
+ fun := makeConstShiftFunc(c, 18, OpLsh64x64, TypeUInt64)
+ checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SHLQconst: 1, OpAMD64CMPQconst: 0, OpAMD64ANDQconst: 0})
+ fun.f.Free()
+ fun = makeConstShiftFunc(c, 66, OpLsh64x64, TypeUInt64)
+ checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SHLQconst: 0, OpAMD64CMPQconst: 0, OpAMD64ANDQconst: 0})
+ fun.f.Free()
+ fun = makeConstShiftFunc(c, 18, OpRsh64Ux64, TypeUInt64)
+ checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SHRQconst: 1, OpAMD64CMPQconst: 0, OpAMD64ANDQconst: 0})
+ fun.f.Free()
+ fun = makeConstShiftFunc(c, 66, OpRsh64Ux64, TypeUInt64)
+ checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SHRQconst: 0, OpAMD64CMPQconst: 0, OpAMD64ANDQconst: 0})
+ fun.f.Free()
+ fun = makeConstShiftFunc(c, 18, OpRsh64x64, TypeInt64)
+ checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SARQconst: 1, OpAMD64CMPQconst: 0})
+ fun.f.Free()
+ fun = makeConstShiftFunc(c, 66, OpRsh64x64, TypeInt64)
+ checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SARQconst: 1, OpAMD64CMPQconst: 0})
+ fun.f.Free()
+}
+
+func makeConstShiftFunc(c *Config, amount int64, op Op, typ Type) fun {
+ ptyp := &TypeImpl{Size_: 8, Ptr: true, Name: "ptr"}
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("SP", OpSP, TypeUInt64, 0, nil),
+ Valu("argptr", OpOffPtr, ptyp, 8, nil, "SP"),
+ Valu("resptr", OpOffPtr, ptyp, 16, nil, "SP"),
+ Valu("load", OpLoad, typ, 0, nil, "argptr", "mem"),
+ Valu("c", OpConst64, TypeUInt64, amount, nil),
+ Valu("shift", op, typ, 0, nil, "load", "c"),
+ Valu("store", OpStore, TypeMem, 8, nil, "resptr", "shift", "mem"),
+ Exit("store")))
+ Compile(fun.f)
+ return fun
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// Shortcircuit finds situations where branch directions
+// are always correlated and rewrites the CFG to take
+// advantage of that fact.
+// This optimization is useful for compiling && and || expressions.
+func shortcircuit(f *Func) {
+ // Step 1: Replace a phi arg with a constant if that arg
+ // is the control value of a preceding If block.
+ // b1:
+ // If a goto b2 else b3
+ // b2: <- b1 ...
+ // x = phi(a, ...)
+ //
+ // We can replace the "a" in the phi with the constant true.
+ ct := f.ConstBool(f.Entry.Line, f.Config.fe.TypeBool(), true)
+ cf := f.ConstBool(f.Entry.Line, f.Config.fe.TypeBool(), false)
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ if v.Op != OpPhi {
+ continue
+ }
+ if !v.Type.IsBoolean() {
+ continue
+ }
+ for i, a := range v.Args {
+ p := b.Preds[i]
+ if p.Kind != BlockIf {
+ continue
+ }
+ if p.Control != a {
+ continue
+ }
+ if p.Succs[0] == b {
+ v.Args[i] = ct
+ } else {
+ v.Args[i] = cf
+ }
+ }
+ }
+ }
+
+ // Step 2: Compute which values are live across blocks.
+ live := make([]bool, f.NumValues())
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ for _, a := range v.Args {
+ if a.Block != v.Block {
+ live[a.ID] = true
+ }
+ }
+ }
+ if b.Control != nil && b.Control.Block != b {
+ live[b.Control.ID] = true
+ }
+ }
+
+ // Step 3: Redirect control flow around known branches.
+ // p:
+ // ... goto b ...
+ // b: <- p ...
+ // v = phi(true, ...)
+ // if v goto t else u
+ // We can redirect p to go directly to t instead of b.
+ // (If v is not live after b).
+ for _, b := range f.Blocks {
+ if b.Kind != BlockIf {
+ continue
+ }
+ if len(b.Values) != 1 {
+ continue
+ }
+ v := b.Values[0]
+ if v.Op != OpPhi {
+ continue
+ }
+ if b.Control != v {
+ continue
+ }
+ if live[v.ID] {
+ continue
+ }
+ for i := 0; i < len(v.Args); i++ {
+ a := v.Args[i]
+ if a.Op != OpConstBool {
+ continue
+ }
+
+ // The predecessor we come in from.
+ p := b.Preds[i]
+ // The successor we always go to when coming in
+ // from that predecessor.
+ t := b.Succs[1-a.AuxInt]
+
+ // Change the edge p->b to p->t.
+ for j, x := range p.Succs {
+ if x == b {
+ p.Succs[j] = t
+ break
+ }
+ }
+
+ // Fix up t to have one more predecessor.
+ j := predIdx(t, b)
+ t.Preds = append(t.Preds, p)
+ for _, w := range t.Values {
+ if w.Op != OpPhi {
+ continue
+ }
+ w.Args = append(w.Args, w.Args[j])
+ }
+
+ // Fix up b to have one less predecessor.
+ n := len(b.Preds) - 1
+ b.Preds[i] = b.Preds[n]
+ b.Preds[n] = nil
+ b.Preds = b.Preds[:n]
+ v.Args[i] = v.Args[n]
+ v.Args[n] = nil
+ v.Args = v.Args[:n]
+ if n == 1 {
+ v.Op = OpCopy
+ // No longer a phi, stop optimizing here.
+ break
+ }
+ i--
+ }
+ }
+}
+
+// predIdx returns the index where p appears in the predecessor list of b.
+// p must be in the predecessor list of b.
+func predIdx(b, p *Block) int {
+ for i, x := range b.Preds {
+ if x == p {
+ return i
+ }
+ }
+ panic("predecessor not found")
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import "testing"
+
+func TestShortCircuit(t *testing.T) {
+ c := testConfig(t)
+
+ fun := Fun(c, "entry",
+ Bloc("entry",
+ Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("arg1", OpArg, TypeInt64, 0, nil),
+ Valu("arg2", OpArg, TypeInt64, 0, nil),
+ Valu("arg3", OpArg, TypeInt64, 0, nil),
+ Goto("b1")),
+ Bloc("b1",
+ Valu("cmp1", OpLess64, TypeBool, 0, nil, "arg1", "arg2"),
+ If("cmp1", "b2", "b3")),
+ Bloc("b2",
+ Valu("cmp2", OpLess64, TypeBool, 0, nil, "arg2", "arg3"),
+ Goto("b3")),
+ Bloc("b3",
+ Valu("phi2", OpPhi, TypeBool, 0, nil, "cmp1", "cmp2"),
+ If("phi2", "b4", "b5")),
+ Bloc("b4",
+ Valu("cmp3", OpLess64, TypeBool, 0, nil, "arg3", "arg1"),
+ Goto("b5")),
+ Bloc("b5",
+ Valu("phi3", OpPhi, TypeBool, 0, nil, "phi2", "cmp3"),
+ If("phi3", "b6", "b7")),
+ Bloc("b6",
+ Exit("mem")),
+ Bloc("b7",
+ Exit("mem")))
+
+ CheckFunc(fun.f)
+ shortcircuit(fun.f)
+ CheckFunc(fun.f)
+
+ for _, b := range fun.f.Blocks {
+ for _, v := range b.Values {
+ if v.Op == OpPhi {
+ t.Errorf("phi %s remains", v)
+ }
+ }
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// from http://research.swtch.com/sparse
+// in turn, from Briggs and Torczon
+
+type sparseEntry struct {
+ key ID
+ val int32
+}
+
+type sparseMap struct {
+ dense []sparseEntry
+ sparse []int
+}
+
+// newSparseMap returns a sparseMap that can map
+// integers between 0 and n-1 to int32s.
+func newSparseMap(n int) *sparseMap {
+ return &sparseMap{nil, make([]int, n)}
+}
+
+func (s *sparseMap) size() int {
+ return len(s.dense)
+}
+
+func (s *sparseMap) contains(k ID) bool {
+ i := s.sparse[k]
+ return i < len(s.dense) && s.dense[i].key == k
+}
+
+func (s *sparseMap) get(k ID) int32 {
+ i := s.sparse[k]
+ if i < len(s.dense) && s.dense[i].key == k {
+ return s.dense[i].val
+ }
+ return -1
+}
+
+func (s *sparseMap) set(k ID, v int32) {
+ i := s.sparse[k]
+ if i < len(s.dense) && s.dense[i].key == k {
+ s.dense[i].val = v
+ return
+ }
+ s.dense = append(s.dense, sparseEntry{k, v})
+ s.sparse[k] = len(s.dense) - 1
+}
+
+func (s *sparseMap) remove(k ID) {
+ i := s.sparse[k]
+ if i < len(s.dense) && s.dense[i].key == k {
+ y := s.dense[len(s.dense)-1]
+ s.dense[i] = y
+ s.sparse[y.key] = i
+ s.dense = s.dense[:len(s.dense)-1]
+ }
+}
+
+func (s *sparseMap) clear() {
+ s.dense = s.dense[:0]
+}
+
+func (s *sparseMap) contents() []sparseEntry {
+ return s.dense
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// from http://research.swtch.com/sparse
+// in turn, from Briggs and Torczon
+
+type sparseSet struct {
+ dense []ID
+ sparse []int
+}
+
+// newSparseSet returns a sparseSet that can represent
+// integers between 0 and n-1
+func newSparseSet(n int) *sparseSet {
+ return &sparseSet{nil, make([]int, n)}
+}
+
+func (s *sparseSet) cap() int {
+ return len(s.sparse)
+}
+
+func (s *sparseSet) size() int {
+ return len(s.dense)
+}
+
+func (s *sparseSet) contains(x ID) bool {
+ i := s.sparse[x]
+ return i < len(s.dense) && s.dense[i] == x
+}
+
+func (s *sparseSet) add(x ID) {
+ i := s.sparse[x]
+ if i < len(s.dense) && s.dense[i] == x {
+ return
+ }
+ s.dense = append(s.dense, x)
+ s.sparse[x] = len(s.dense) - 1
+}
+
+func (s *sparseSet) addAll(a []ID) {
+ for _, x := range a {
+ s.add(x)
+ }
+}
+
+func (s *sparseSet) addAllValues(a []*Value) {
+ for _, v := range a {
+ s.add(v.ID)
+ }
+}
+
+func (s *sparseSet) remove(x ID) {
+ i := s.sparse[x]
+ if i < len(s.dense) && s.dense[i] == x {
+ y := s.dense[len(s.dense)-1]
+ s.dense[i] = y
+ s.sparse[y] = i
+ s.dense = s.dense[:len(s.dense)-1]
+ }
+}
+
+// pop removes an arbitrary element from the set.
+// The set must be nonempty.
+func (s *sparseSet) pop() ID {
+ x := s.dense[len(s.dense)-1]
+ s.dense = s.dense[:len(s.dense)-1]
+ return x
+}
+
+func (s *sparseSet) clear() {
+ s.dense = s.dense[:0]
+}
+
+func (s *sparseSet) contents() []ID {
+ return s.dense
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+type sparseTreeNode struct {
+ child *Block
+ sibling *Block
+ parent *Block
+
+ // Every block has 6 numbers associated with it:
+	// entry-1, entry, entry+1, exit-1, exit, and exit+1.
+ // entry and exit are conceptually the top of the block (phi functions)
+ // entry+1 and exit-1 are conceptually the bottom of the block (ordinary defs)
+ // entry-1 and exit+1 are conceptually "just before" the block (conditions flowing in)
+ //
+ // This simplifies life if we wish to query information about x
+ // when x is both an input to and output of a block.
+ entry, exit int32
+}
+
+const (
+	// When used to look up definitions in a sparse tree,
+ // these adjustments to a block's entry (+adjust) and
+ // exit (-adjust) numbers allow a distinction to be made
+ // between assignments (typically branch-dependent
+ // conditionals) occurring "before" phi functions, the
+ // phi functions, and at the bottom of a block.
+ ADJUST_BEFORE = -1 // defined before phi
+ ADJUST_TOP = 0 // defined by phi
+ ADJUST_BOTTOM = 1 // defined within block
+)
+
+// A sparseTree is a tree of Blocks.
+// It allows rapid ancestor queries,
+// such as whether one block dominates another.
+type sparseTree []sparseTreeNode
+
+// newSparseTree creates a sparseTree from a block-to-parent map (array indexed by Block.ID)
+func newSparseTree(f *Func, parentOf []*Block) sparseTree {
+ t := make(sparseTree, f.NumBlocks())
+ for _, b := range f.Blocks {
+ n := &t[b.ID]
+ if p := parentOf[b.ID]; p != nil {
+ n.parent = p
+ n.sibling = t[p.ID].child
+ t[p.ID].child = b
+ }
+ }
+ t.numberBlock(f.Entry, 1)
+ return t
+}
+
+// numberBlock assigns entry and exit numbers for b and b's
+// children in an in-order walk from a gappy sequence, where n
+// is the first number not yet assigned or reserved. n should
+// be larger than zero. For each entry and exit number, the
+// values one larger and smaller are reserved to indicate
+// "strictly above" and "strictly below". numberBlock returns
+// the smallest number not yet assigned or reserved (i.e., the
+// exit number of the last block visited, plus two, because
+// last.exit+1 is a reserved value.)
+//
+// examples:
+//
+// single node tree Root, call with n=1
+// entry=2 Root exit=5; returns 7
+//
+// two node tree, Root->Child, call with n=1
+// entry=2 Root exit=11; returns 13
+// entry=5 Child exit=8
+//
+// three node tree, Root->(Left, Right), call with n=1
+// entry=2 Root exit=17; returns 19
+// entry=5 Left exit=8; entry=11 Right exit=14
+//
+// This is the in-order sequence of assigned and reserved numbers
+// for the last example:
+// root left left right right root
+// 1 2e 3 | 4 5e 6 | 7 8x 9 | 10 11e 12 | 13 14x 15 | 16 17x 18
+func (t sparseTree) numberBlock(b *Block, n int32) int32 {
+ // reserve n for entry-1, assign n+1 to entry
+ n++
+ t[b.ID].entry = n
+ // reserve n+1 for entry+1, n+2 is next free number
+ n += 2
+ for c := t[b.ID].child; c != nil; c = t[c.ID].sibling {
+ n = t.numberBlock(c, n) // preserves n = next free number
+ }
+ // reserve n for exit-1, assign n+1 to exit
+ n++
+ t[b.ID].exit = n
+ // reserve n+1 for exit+1, n+2 is next free number, returned.
+ return n + 2
+}
+
+// Sibling returns a sibling of x in the dominator tree (i.e.,
+// a node with the same immediate dominator) or nil if there
+// are no remaining siblings in the arbitrary but repeatable
+// order chosen. Because the Child-Sibling order is used
+// to assign entry and exit numbers in the treewalk, those
+// numbers are also consistent with this order (i.e.,
+// Sibling(x) has entry number larger than x's exit number).
+func (t sparseTree) Sibling(x *Block) *Block {
+ return t[x.ID].sibling
+}
+
+// Child returns a child of x in the dominator tree, or
+// nil if there are none. The choice of first child is
+// arbitrary but repeatable.
+func (t sparseTree) Child(x *Block) *Block {
+ return t[x.ID].child
+}
+
+// isAncestorEq reports whether x is an ancestor of or equal to y.
+func (t sparseTree) isAncestorEq(x, y *Block) bool {
+ xx := &t[x.ID]
+ yy := &t[y.ID]
+ return xx.entry <= yy.entry && yy.exit <= xx.exit
+}
+
+// isAncestor reports whether x is a strict ancestor of y.
+func (t sparseTree) isAncestor(x, y *Block) bool {
+ xx := &t[x.ID]
+ yy := &t[y.ID]
+ return xx.entry < yy.entry && yy.exit < xx.exit
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// TODO: live at start of block instead?
+
+package ssa
+
+import "fmt"
+
+const stackDebug = false // TODO: compiler flag
+
+type stackAllocState struct {
+ f *Func
+ values []stackValState
+ live [][]ID // live[b.id] = live values at the end of block b.
+ interfere [][]ID // interfere[v.id] = values that interfere with v.
+}
+
+type stackValState struct {
+ typ Type
+ spill *Value
+ needSlot bool
+}
+
+// stackalloc allocates storage in the stack frame for
+// all Values that did not get a register.
+// Returns a map from block ID to the stack values live at the end of that block.
+func stackalloc(f *Func, spillLive [][]ID) [][]ID {
+ if stackDebug {
+ fmt.Println("before stackalloc")
+ fmt.Println(f.String())
+ }
+ var s stackAllocState
+ s.init(f, spillLive)
+ s.stackalloc()
+ return s.live
+}
+
+func (s *stackAllocState) init(f *Func, spillLive [][]ID) {
+ s.f = f
+
+ // Initialize value information.
+ s.values = make([]stackValState, f.NumValues())
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ s.values[v.ID].typ = v.Type
+ s.values[v.ID].needSlot = !v.Type.IsMemory() && !v.Type.IsVoid() && !v.Type.IsFlags() && f.getHome(v.ID) == nil && !v.rematerializeable()
+ if stackDebug && s.values[v.ID].needSlot {
+ fmt.Printf("%s needs a stack slot\n", v)
+ }
+ if v.Op == OpStoreReg {
+ s.values[v.Args[0].ID].spill = v
+ }
+ }
+ }
+
+ // Compute liveness info for values needing a slot.
+ s.computeLive(spillLive)
+
+ // Build interference graph among values needing a slot.
+ s.buildInterferenceGraph()
+}
+
+func (s *stackAllocState) stackalloc() {
+ f := s.f
+
+ // Build map from values to their names, if any.
+ // A value may be associated with more than one name (e.g. after
+ // the assignment i=j). This step picks one name per value arbitrarily.
+ names := make([]LocalSlot, f.NumValues())
+ for _, name := range f.Names {
+ // Note: not "range f.NamedValues" above, because
+ // that would be nondeterministic.
+ for _, v := range f.NamedValues[name] {
+ names[v.ID] = name
+ }
+ }
+
+ // Allocate args to their assigned locations.
+ for _, v := range f.Entry.Values {
+ if v.Op != OpArg {
+ continue
+ }
+ loc := LocalSlot{v.Aux.(GCNode), v.Type, v.AuxInt}
+ if stackDebug {
+ fmt.Printf("stackalloc %s to %s\n", v, loc.Name())
+ }
+ f.setHome(v, loc)
+ }
+
+ // For each type, we keep track of all the stack slots we
+ // have allocated for that type.
+ // TODO: share slots among equivalent types. We would need to
+ // only share among types with the same GC signature. See the
+ // type.Equal calls below for where this matters.
+ locations := map[Type][]LocalSlot{}
+
+ // Each time we assign a stack slot to a value v, we remember
+ // the slot we used via an index into locations[v.Type].
+ slots := make([]int, f.NumValues())
+ for i := f.NumValues() - 1; i >= 0; i-- {
+ slots[i] = -1
+ }
+
+ // Pick a stack slot for each value needing one.
+ used := make([]bool, f.NumValues())
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ if !s.values[v.ID].needSlot {
+ continue
+ }
+ if v.Op == OpArg {
+ continue // already picked
+ }
+
+ // If this is a named value, try to use the name as
+ // the spill location.
+ var name LocalSlot
+ if v.Op == OpStoreReg {
+ name = names[v.Args[0].ID]
+ } else {
+ name = names[v.ID]
+ }
+ if name.N != nil && v.Type.Equal(name.Type) {
+ for _, id := range s.interfere[v.ID] {
+ h := f.getHome(id)
+ if h != nil && h.(LocalSlot) == name {
+ // A variable can interfere with itself.
+ // It is rare, but it can happen.
+ goto noname
+ }
+ }
+ if stackDebug {
+ fmt.Printf("stackalloc %s to %s\n", v, name.Name())
+ }
+ f.setHome(v, name)
+ continue
+ }
+
+ noname:
+ // Set of stack slots we could reuse.
+ locs := locations[v.Type]
+ // Mark all positions in locs used by interfering values.
+ for i := 0; i < len(locs); i++ {
+ used[i] = false
+ }
+ for _, xid := range s.interfere[v.ID] {
+ slot := slots[xid]
+ if slot >= 0 {
+ used[slot] = true
+ }
+ }
+ // Find an unused stack slot.
+ var i int
+ for i = 0; i < len(locs); i++ {
+ if !used[i] {
+ break
+ }
+ }
+ // If there is no unused stack slot, allocate a new one.
+ if i == len(locs) {
+ locs = append(locs, LocalSlot{N: f.Config.fe.Auto(v.Type), Type: v.Type, Off: 0})
+ locations[v.Type] = locs
+ }
+ // Use the stack variable at that index for v.
+ loc := locs[i]
+ if stackDebug {
+ fmt.Printf("stackalloc %s to %s\n", v, loc.Name())
+ }
+ f.setHome(v, loc)
+ slots[v.ID] = i
+ }
+ }
+}
+
+// computeLive computes a map from block ID to a list of
+// stack-slot-needing value IDs live at the end of that block.
+// TODO: this could be quadratic if lots of variables are live across lots of
+// basic blocks. Figure out a way to make this function (or, more precisely, the user
+// of this function) require only linear size & time.
+func (s *stackAllocState) computeLive(spillLive [][]ID) {
+ s.live = make([][]ID, s.f.NumBlocks())
+ var phis []*Value
+ live := s.f.newSparseSet(s.f.NumValues())
+ defer s.f.retSparseSet(live)
+ t := s.f.newSparseSet(s.f.NumValues())
+ defer s.f.retSparseSet(t)
+
+ // Instead of iterating over f.Blocks, iterate over their postordering.
+ // Liveness information flows backward, so starting at the end
+ // increases the probability that we will stabilize quickly.
+ po := postorder(s.f)
+ for {
+ changed := false
+ for _, b := range po {
+ // Start with known live values at the end of the block
+ live.clear()
+ live.addAll(s.live[b.ID])
+
+ // Propagate backwards to the start of the block
+ phis = phis[:0]
+ for i := len(b.Values) - 1; i >= 0; i-- {
+ v := b.Values[i]
+ live.remove(v.ID)
+ if v.Op == OpPhi {
+ // Save phi for later.
+ // Note: its args might need a stack slot even though
+ // the phi itself doesn't. So don't use needSlot.
+ if !v.Type.IsMemory() && !v.Type.IsVoid() {
+ phis = append(phis, v)
+ }
+ continue
+ }
+ for _, a := range v.Args {
+ if s.values[a.ID].needSlot {
+ live.add(a.ID)
+ }
+ }
+ }
+
+ // for each predecessor of b, expand its list of live-at-end values
+ // invariant: live contains the values live at the start of b (excluding phi inputs)
+ for i, p := range b.Preds {
+ t.clear()
+ t.addAll(s.live[p.ID])
+ t.addAll(live.contents())
+ t.addAll(spillLive[p.ID])
+ for _, v := range phis {
+ a := v.Args[i]
+ if s.values[a.ID].needSlot {
+ t.add(a.ID)
+ }
+ if spill := s.values[a.ID].spill; spill != nil {
+ //TODO: remove? Subsumed by SpillUse?
+ t.add(spill.ID)
+ }
+ }
+ if t.size() == len(s.live[p.ID]) {
+ continue
+ }
+ // grow p's live set
+ s.live[p.ID] = append(s.live[p.ID][:0], t.contents()...)
+ changed = true
+ }
+ }
+
+ if !changed {
+ break
+ }
+ }
+ if stackDebug {
+ for _, b := range s.f.Blocks {
+ fmt.Printf("stacklive %s %v\n", b, s.live[b.ID])
+ }
+ }
+}
+
+func (f *Func) getHome(vid ID) Location {
+ if int(vid) >= len(f.RegAlloc) {
+ return nil
+ }
+ return f.RegAlloc[vid]
+}
+
+func (f *Func) setHome(v *Value, loc Location) {
+ for v.ID >= ID(len(f.RegAlloc)) {
+ f.RegAlloc = append(f.RegAlloc, nil)
+ }
+ f.RegAlloc[v.ID] = loc
+}
+
+func (s *stackAllocState) buildInterferenceGraph() {
+ f := s.f
+ s.interfere = make([][]ID, f.NumValues())
+ live := f.newSparseSet(f.NumValues())
+ defer f.retSparseSet(live)
+ for _, b := range f.Blocks {
+ // Propagate liveness backwards to the start of the block.
+ // Two values interfere if one is defined while the other is live.
+ live.clear()
+ live.addAll(s.live[b.ID])
+ for i := len(b.Values) - 1; i >= 0; i-- {
+ v := b.Values[i]
+ if s.values[v.ID].needSlot {
+ live.remove(v.ID)
+ for _, id := range live.contents() {
+ if s.values[v.ID].typ.Equal(s.values[id].typ) {
+ s.interfere[v.ID] = append(s.interfere[v.ID], id)
+ s.interfere[id] = append(s.interfere[id], v.ID)
+ }
+ }
+ }
+ for _, a := range v.Args {
+ if s.values[a.ID].needSlot {
+ live.add(a.ID)
+ }
+ }
+ if v.Op == OpArg && s.values[v.ID].needSlot {
+ // OpArg is an input argument which is pre-spilled.
+ // We add back v.ID here because we want this value
+ // to appear live even before this point. Being live
+ // all the way to the start of the entry block prevents other
+ // values from being allocated to the same slot and clobbering
+ // the input value before we have a chance to load it.
+ live.add(v.ID)
+ }
+ }
+ }
+ if stackDebug {
+ for vid, i := range s.interfere {
+ if len(i) > 0 {
+ fmt.Printf("v%d interferes with", vid)
+ for _, x := range i {
+ fmt.Printf(" v%d", x)
+ }
+ fmt.Println()
+ }
+ }
+ }
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// tighten moves Values closer to the Blocks in which they are used.
+// This can reduce the amount of register spilling required,
+// if it doesn't also create more live values.
+// For now, it handles only the trivial case in which a
+// Value with one or fewer args is only used in a single Block,
+// and not in a phi value.
+// TODO: Do something smarter.
+// A Value can be moved to any block that
+// dominates all blocks in which it is used.
+// Figure out when that will be an improvement.
+func tighten(f *Func) {
+ // For each value, the number of blocks in which it is used.
+ uses := make([]int32, f.NumValues())
+
+ // For each value, whether that value is ever an arg to a phi value.
+ phi := make([]bool, f.NumValues())
+
+ // For each value, one block in which that value is used.
+ home := make([]*Block, f.NumValues())
+
+ changed := true
+ for changed {
+ changed = false
+
+ // Reset uses
+ for i := range uses {
+ uses[i] = 0
+ }
+ // No need to reset home; any relevant values will be written anew anyway.
+ // No need to reset phi; once used in a phi, always used in a phi.
+
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ for _, w := range v.Args {
+ if v.Op == OpPhi {
+ phi[w.ID] = true
+ }
+ uses[w.ID]++
+ home[w.ID] = b
+ }
+ }
+ if b.Control != nil {
+ uses[b.Control.ID]++
+ home[b.Control.ID] = b
+ }
+ }
+
+ for _, b := range f.Blocks {
+ for i := 0; i < len(b.Values); i++ {
+ v := b.Values[i]
+ if v.Op == OpPhi || v.Op == OpGetClosurePtr || v.Op == OpConvert || v.Op == OpArg {
+ // GetClosurePtr & Arg must stay in entry block.
+ // OpConvert must not float over call sites.
+ // TODO do we instead need a dependence edge of some sort for OpConvert?
+ // Would memory do the trick, or do we need something else that relates
+ // to safe point operations?
+ continue
+ }
+ if len(v.Args) > 0 && v.Args[len(v.Args)-1].Type.IsMemory() {
+ // We can't move values which have a memory arg - it might
+ // make two memory values live across a block boundary.
+ continue
+ }
+ if uses[v.ID] == 1 && !phi[v.ID] && home[v.ID] != b && len(v.Args) < 2 {
+ // v is used in exactly one block, and it is not b.
+ // Furthermore, it takes at most one input,
+ // so moving it will not increase the
+ // number of live values anywhere.
+ // Move v to that block.
+ c := home[v.ID]
+ c.Values = append(c.Values, v)
+ v.Block = c
+ last := len(b.Values) - 1
+ b.Values[i] = b.Values[last]
+ b.Values[last] = nil
+ b.Values = b.Values[:last]
+ changed = true
+ }
+ }
+ }
+ }
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// trim removes blocks with no code in them.
+// These blocks were inserted to remove critical edges.
+func trim(f *Func) {
+ i := 0
+ for _, b := range f.Blocks {
+ if b.Kind != BlockPlain || len(b.Values) != 0 || len(b.Preds) != 1 {
+ f.Blocks[i] = b
+ i++
+ continue
+ }
+ // TODO: handle len(b.Preds)>1 case.
+
+ // Splice b out of the graph.
+ pred := b.Preds[0]
+ succ := b.Succs[0]
+ for j, s := range pred.Succs {
+ if s == b {
+ pred.Succs[j] = succ
+ }
+ }
+ for j, p := range succ.Preds {
+ if p == b {
+ succ.Preds[j] = pred
+ }
+ }
+ }
+ for j := i; j < len(f.Blocks); j++ {
+ f.Blocks[j] = nil
+ }
+ f.Blocks = f.Blocks[:i]
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// TODO: use go/types instead?
+
+// Type is an interface used to import cmd/internal/gc:Type.
+// Type instances are not guaranteed to be canonical.
+type Type interface {
+ Size() int64 // return the size in bytes
+ Alignment() int64
+
+ IsBoolean() bool // is a named or unnamed boolean type
+ IsInteger() bool // ... ditto for the others
+ IsSigned() bool
+ IsFloat() bool
+ IsComplex() bool
+ IsPtr() bool
+ IsString() bool
+ IsSlice() bool
+ IsArray() bool
+ IsStruct() bool
+ IsInterface() bool
+
+ IsMemory() bool // special ssa-package-only types
+ IsFlags() bool
+ IsVoid() bool
+
+ Elem() Type // given []T or *T or [n]T, return T
+ PtrTo() Type // given T, return *T
+
+ NumFields() int64 // # of fields of a struct
+ FieldType(i int64) Type // type of ith field of the struct
+ FieldOff(i int64) int64 // offset of ith field of the struct
+
+ NumElem() int64 // # of elements of an array
+
+ String() string
+ SimpleString() string // a coarser generic description of T, e.g. T's underlying type
+ Equal(Type) bool
+ Compare(Type) Cmp // compare types, returning one of CMPlt, CMPeq, CMPgt.
+}
+
+// Special compiler-only types.
+type CompilerType struct {
+ Name string
+ Memory bool
+ Flags bool
+ Void bool
+ Int128 bool
+}
+
+func (t *CompilerType) Size() int64 { return 0 } // Size in bytes
+func (t *CompilerType) Alignment() int64 { return 0 }
+func (t *CompilerType) IsBoolean() bool { return false }
+func (t *CompilerType) IsInteger() bool { return false }
+func (t *CompilerType) IsSigned() bool { return false }
+func (t *CompilerType) IsFloat() bool { return false }
+func (t *CompilerType) IsComplex() bool { return false }
+func (t *CompilerType) IsPtr() bool { return false }
+func (t *CompilerType) IsString() bool { return false }
+func (t *CompilerType) IsSlice() bool { return false }
+func (t *CompilerType) IsArray() bool { return false }
+func (t *CompilerType) IsStruct() bool { return false }
+func (t *CompilerType) IsInterface() bool { return false }
+func (t *CompilerType) IsMemory() bool { return t.Memory }
+func (t *CompilerType) IsFlags() bool { return t.Flags }
+func (t *CompilerType) IsVoid() bool { return t.Void }
+func (t *CompilerType) String() string { return t.Name }
+func (t *CompilerType) SimpleString() string { return t.Name }
+func (t *CompilerType) Elem() Type { panic("not implemented") }
+func (t *CompilerType) PtrTo() Type { panic("not implemented") }
+func (t *CompilerType) NumFields() int64 { panic("not implemented") }
+func (t *CompilerType) FieldType(i int64) Type { panic("not implemented") }
+func (t *CompilerType) FieldOff(i int64) int64 { panic("not implemented") }
+func (t *CompilerType) NumElem() int64 { panic("not implemented") }
+
+// Cmp is a comparison between values a and b.
+// -1 if a < b
+// 0 if a == b
+// 1 if a > b
+type Cmp int8
+
+const (
+ CMPlt = Cmp(-1)
+ CMPeq = Cmp(0)
+ CMPgt = Cmp(1)
+)
+
+func (t *CompilerType) Compare(u Type) Cmp {
+ x, ok := u.(*CompilerType)
+ // ssa.CompilerType is smaller than any other type
+ if !ok {
+ return CMPlt
+ }
+ if t == x {
+ return CMPeq
+ }
+ // desire fast sorting, not pretty sorting.
+ if len(t.Name) == len(x.Name) {
+ if t.Name == x.Name {
+ return CMPeq
+ }
+ if t.Name < x.Name {
+ return CMPlt
+ }
+ return CMPgt
+ }
+ if len(t.Name) > len(x.Name) {
+ return CMPgt
+ }
+ return CMPlt
+}
+
+func (t *CompilerType) Equal(u Type) bool {
+ x, ok := u.(*CompilerType)
+ if !ok {
+ return false
+ }
+ return x == t
+}
+
+var (
+ TypeInvalid = &CompilerType{Name: "invalid"}
+ TypeMem = &CompilerType{Name: "mem", Memory: true}
+ TypeFlags = &CompilerType{Name: "flags", Flags: true}
+ TypeVoid = &CompilerType{Name: "void", Void: true}
+ TypeInt128 = &CompilerType{Name: "int128", Int128: true}
+)
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// Stub implementation used for testing.
+type TypeImpl struct {
+ Size_ int64
+ Align int64
+ Boolean bool
+ Integer bool
+ Signed bool
+ Float bool
+ Complex bool
+ Ptr bool
+ string bool
+ slice bool
+ array bool
+ struct_ bool
+ inter bool
+ Elem_ Type
+
+ Name string
+}
+
+func (t *TypeImpl) Size() int64 { return t.Size_ }
+func (t *TypeImpl) Alignment() int64 { return t.Align }
+func (t *TypeImpl) IsBoolean() bool { return t.Boolean }
+func (t *TypeImpl) IsInteger() bool { return t.Integer }
+func (t *TypeImpl) IsSigned() bool { return t.Signed }
+func (t *TypeImpl) IsFloat() bool { return t.Float }
+func (t *TypeImpl) IsComplex() bool { return t.Complex }
+func (t *TypeImpl) IsPtr() bool { return t.Ptr }
+func (t *TypeImpl) IsString() bool { return t.string }
+func (t *TypeImpl) IsSlice() bool { return t.slice }
+func (t *TypeImpl) IsArray() bool { return t.array }
+func (t *TypeImpl) IsStruct() bool { return t.struct_ }
+func (t *TypeImpl) IsInterface() bool { return t.inter }
+func (t *TypeImpl) IsMemory() bool { return false }
+func (t *TypeImpl) IsFlags() bool { return false }
+func (t *TypeImpl) IsVoid() bool { return false }
+func (t *TypeImpl) String() string { return t.Name }
+func (t *TypeImpl) SimpleString() string { return t.Name }
+func (t *TypeImpl) Elem() Type { return t.Elem_ }
+func (t *TypeImpl) PtrTo() Type { panic("not implemented") }
+func (t *TypeImpl) NumFields() int64 { panic("not implemented") }
+func (t *TypeImpl) FieldType(i int64) Type { panic("not implemented") }
+func (t *TypeImpl) FieldOff(i int64) int64 { panic("not implemented") }
+func (t *TypeImpl) NumElem() int64 { panic("not implemented") }
+
+func (t *TypeImpl) Equal(u Type) bool {
+ x, ok := u.(*TypeImpl)
+ if !ok {
+ return false
+ }
+ return x == t
+}
+
+func (t *TypeImpl) Compare(u Type) Cmp {
+ x, ok := u.(*TypeImpl)
+ // ssa.CompilerType < ssa.TypeImpl < gc.Type
+ if !ok {
+ _, ok := u.(*CompilerType)
+ if ok {
+ return CMPgt
+ }
+ return CMPlt
+ }
+ if t == x {
+ return CMPeq
+ }
+ if t.Name < x.Name {
+ return CMPlt
+ }
+ if t.Name > x.Name {
+ return CMPgt
+ }
+ return CMPeq
+}
+
+var (
+ // shortcuts for commonly used basic types
+ TypeInt8 = &TypeImpl{Size_: 1, Align: 1, Integer: true, Signed: true, Name: "int8"}
+ TypeInt16 = &TypeImpl{Size_: 2, Align: 2, Integer: true, Signed: true, Name: "int16"}
+ TypeInt32 = &TypeImpl{Size_: 4, Align: 4, Integer: true, Signed: true, Name: "int32"}
+ TypeInt64 = &TypeImpl{Size_: 8, Align: 8, Integer: true, Signed: true, Name: "int64"}
+ TypeFloat32 = &TypeImpl{Size_: 4, Align: 4, Float: true, Name: "float32"}
+ TypeFloat64 = &TypeImpl{Size_: 8, Align: 8, Float: true, Name: "float64"}
+ TypeComplex64 = &TypeImpl{Size_: 8, Align: 4, Complex: true, Name: "complex64"}
+ TypeComplex128 = &TypeImpl{Size_: 16, Align: 8, Complex: true, Name: "complex128"}
+ TypeUInt8 = &TypeImpl{Size_: 1, Align: 1, Integer: true, Name: "uint8"}
+ TypeUInt16 = &TypeImpl{Size_: 2, Align: 2, Integer: true, Name: "uint16"}
+ TypeUInt32 = &TypeImpl{Size_: 4, Align: 4, Integer: true, Name: "uint32"}
+ TypeUInt64 = &TypeImpl{Size_: 8, Align: 8, Integer: true, Name: "uint64"}
+ TypeBool = &TypeImpl{Size_: 1, Align: 1, Boolean: true, Name: "bool"}
+ TypeBytePtr = &TypeImpl{Size_: 8, Align: 8, Ptr: true, Name: "*byte"}
+ TypeInt64Ptr = &TypeImpl{Size_: 8, Align: 8, Ptr: true, Name: "*int64"}
+)
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+import (
+ "fmt"
+ "math"
+)
+
+// A Value represents a value in the SSA representation of the program.
+// The ID and Type fields must not be modified. The remainder may be modified
+// if they preserve the value of the Value (e.g. changing a (mul 2 x) to an (add x x)).
+type Value struct {
+ // A unique identifier for the value. For performance we allocate these IDs
+ // densely starting at 1. There is no guarantee that there won't be occasional holes, though.
+ ID ID
+
+ // The operation that computes this value. See op.go.
+ Op Op
+
+ // The type of this value. Normally this will be a Go type, but there
+ // are a few other pseudo-types, see type.go.
+ Type Type
+
+ // Auxiliary info for this value. The type of this information depends on the opcode and type.
+ // AuxInt is used for integer values, Aux is used for other values.
+ AuxInt int64
+ Aux interface{}
+
+ // Arguments of this value
+ Args []*Value
+
+ // Containing basic block
+ Block *Block
+
+ // Source line number
+ Line int32
+
+ // Storage for the first two args
+ argstorage [2]*Value
+}
+
+// Examples:
+// Opcode aux args
+// OpAdd nil 2
+// OpConst string 0 string constant
+// OpConst int64 0 int64 constant
+// OpAddcq int64 1 amd64 op: v = arg[0] + constant
+
+// short form print. Just v#.
+func (v *Value) String() string {
+ if v == nil {
+ return "nil" // should never happen, but not panicking helps with debugging
+ }
+ return fmt.Sprintf("v%d", v.ID)
+}
+
+func (v *Value) AuxInt8() int8 {
+ if opcodeTable[v.Op].auxType != auxInt8 {
+ v.Fatalf("op %s doesn't have an int8 aux field", v.Op)
+ }
+ return int8(v.AuxInt)
+}
+
+func (v *Value) AuxInt16() int16 {
+ if opcodeTable[v.Op].auxType != auxInt16 {
+ v.Fatalf("op %s doesn't have an int16 aux field", v.Op)
+ }
+ return int16(v.AuxInt)
+}
+
+func (v *Value) AuxInt32() int32 {
+ if opcodeTable[v.Op].auxType != auxInt32 {
+ v.Fatalf("op %s doesn't have an int32 aux field", v.Op)
+ }
+ return int32(v.AuxInt)
+}
+
+// AuxInt2Int64 is used to sign extend the lower bits of AuxInt according to
+// the size of AuxInt specified in the opcode table.
+func (v *Value) AuxInt2Int64() int64 {
+ switch opcodeTable[v.Op].auxType {
+ case auxInt64:
+ return v.AuxInt
+ case auxInt32:
+ return int64(int32(v.AuxInt))
+ case auxInt16:
+ return int64(int16(v.AuxInt))
+ case auxInt8:
+ return int64(int8(v.AuxInt))
+ default:
+ v.Fatalf("op %s doesn't have an aux int field", v.Op)
+ return -1
+ }
+}
+
+func (v *Value) AuxFloat() float64 {
+ if opcodeTable[v.Op].auxType != auxFloat {
+ v.Fatalf("op %s doesn't have a float aux field", v.Op)
+ }
+ return math.Float64frombits(uint64(v.AuxInt))
+}
+func (v *Value) AuxValAndOff() ValAndOff {
+ if opcodeTable[v.Op].auxType != auxSymValAndOff {
+ v.Fatalf("op %s doesn't have a ValAndOff aux field", v.Op)
+ }
+ return ValAndOff(v.AuxInt)
+}
+
+// long form print. v# = opcode <type> [aux] args [: reg]
+func (v *Value) LongString() string {
+ s := fmt.Sprintf("v%d = %s", v.ID, v.Op.String())
+ s += " <" + v.Type.String() + ">"
+ switch opcodeTable[v.Op].auxType {
+ case auxBool:
+ if v.AuxInt == 0 {
+ s += " [false]"
+ } else {
+ s += " [true]"
+ }
+ case auxInt8:
+ s += fmt.Sprintf(" [%d]", v.AuxInt8())
+ case auxInt16:
+ s += fmt.Sprintf(" [%d]", v.AuxInt16())
+ case auxInt32:
+ s += fmt.Sprintf(" [%d]", v.AuxInt32())
+ case auxInt64:
+ s += fmt.Sprintf(" [%d]", v.AuxInt)
+ case auxFloat:
+ s += fmt.Sprintf(" [%g]", v.AuxFloat())
+ case auxString:
+ s += fmt.Sprintf(" {%s}", v.Aux)
+ case auxSym:
+ if v.Aux != nil {
+ s += fmt.Sprintf(" {%s}", v.Aux)
+ }
+ case auxSymOff:
+ if v.Aux != nil {
+ s += fmt.Sprintf(" {%s}", v.Aux)
+ }
+ s += fmt.Sprintf(" [%d]", v.AuxInt)
+ case auxSymValAndOff:
+ if v.Aux != nil {
+ s += fmt.Sprintf(" {%s}", v.Aux)
+ }
+ s += fmt.Sprintf(" [%s]", v.AuxValAndOff())
+ }
+ for _, a := range v.Args {
+ s += fmt.Sprintf(" %v", a)
+ }
+ r := v.Block.Func.RegAlloc
+ if int(v.ID) < len(r) && r[v.ID] != nil {
+ s += " : " + r[v.ID].Name()
+ }
+ return s
+}
+
+func (v *Value) AddArg(w *Value) {
+ if v.Args == nil {
+ v.resetArgs() // use argstorage
+ }
+ v.Args = append(v.Args, w)
+}
+func (v *Value) AddArgs(a ...*Value) {
+ if v.Args == nil {
+ v.resetArgs() // use argstorage
+ }
+ v.Args = append(v.Args, a...)
+}
+func (v *Value) SetArg(i int, w *Value) {
+ v.Args[i] = w
+}
+func (v *Value) RemoveArg(i int) {
+ copy(v.Args[i:], v.Args[i+1:])
+ v.Args[len(v.Args)-1] = nil // aid GC
+ v.Args = v.Args[:len(v.Args)-1]
+}
+func (v *Value) SetArgs1(a *Value) {
+ v.resetArgs()
+ v.AddArg(a)
+}
+func (v *Value) SetArgs2(a *Value, b *Value) {
+ v.resetArgs()
+ v.AddArg(a)
+ v.AddArg(b)
+}
+
+func (v *Value) resetArgs() {
+ v.argstorage[0] = nil
+ v.argstorage[1] = nil
+ v.Args = v.argstorage[:0]
+}
+
+func (v *Value) reset(op Op) {
+ v.Op = op
+ v.resetArgs()
+ v.AuxInt = 0
+ v.Aux = nil
+}
+
+// copyInto makes a new value identical to v and adds it to the end of b.
+func (v *Value) copyInto(b *Block) *Value {
+ c := b.NewValue0(v.Line, v.Op, v.Type)
+ c.Aux = v.Aux
+ c.AuxInt = v.AuxInt
+ c.AddArgs(v.Args...)
+ for _, a := range v.Args {
+ if a.Type.IsMemory() {
+ v.Fatalf("can't move a value with a memory arg %s", v.LongString())
+ }
+ }
+ return c
+}
+
+func (v *Value) Logf(msg string, args ...interface{}) { v.Block.Logf(msg, args...) }
+func (v *Value) Log() bool { return v.Block.Log() }
+func (v *Value) Fatalf(msg string, args ...interface{}) {
+ v.Block.Func.Config.Fatalf(v.Line, msg, args...)
+}
+func (v *Value) Unimplementedf(msg string, args ...interface{}) {
+ v.Block.Func.Config.Unimplementedf(v.Line, msg, args...)
+}
+
+// ExternSymbol is an aux value that encodes a variable's
+// constant offset from the static base pointer.
+type ExternSymbol struct {
+ Typ Type // Go type
+ Sym fmt.Stringer // A *gc.Sym referring to a global variable
+ // Note: the offset for an external symbol is not
+ // calculated until link time.
+}
+
+// ArgSymbol is an aux value that encodes an argument or result
+// variable's constant offset from FP (FP = SP + framesize).
+type ArgSymbol struct {
+ Typ Type // Go type
+ Node GCNode // A *gc.Node referring to the argument/result variable.
+}
+
+// AutoSymbol is an aux value that encodes a local variable's
+// constant offset from SP.
+type AutoSymbol struct {
+ Typ Type // Go type
+ Node GCNode // A *gc.Node referring to a local (auto) variable.
+}
+
+func (s *ExternSymbol) String() string {
+ return s.Sym.String()
+}
+
+func (s *ArgSymbol) String() string {
+ return s.Node.String()
+}
+
+func (s *AutoSymbol) String() string {
+ return s.Node.String()
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// zcse does an initial pass of common-subexpression elimination on the
+// function for values with zero arguments to allow the more expensive cse
+// to begin with a reduced number of values. Values are just relinked,
+// nothing is deleted. A subsequent deadcode pass is required to actually
+// remove duplicate expressions.
+func zcse(f *Func) {
+ vals := make(map[vkey]*Value)
+
+ for _, b := range f.Blocks {
+ for i := 0; i < len(b.Values); {
+ v := b.Values[i]
+ next := true
+ if opcodeTable[v.Op].argLen == 0 {
+ key := vkey{v.Op, keyFor(v), v.Aux, typeStr(v)}
+ if vals[key] == nil {
+ vals[key] = v
+ if b != f.Entry {
+ // Move v to the entry block so it will dominate every block
+ // where we might use it. This prevents the need for any dominator
+ // calculations in this pass.
+ v.Block = f.Entry
+ f.Entry.Values = append(f.Entry.Values, v)
+ last := len(b.Values) - 1
+ b.Values[i] = b.Values[last]
+ b.Values[last] = nil
+ b.Values = b.Values[:last]
+
+ // process b.Values[i] again
+ next = false
+ }
+ }
+ }
+ if next {
+ i++
+ }
+ }
+ }
+
+ for _, b := range f.Blocks {
+ for _, v := range b.Values {
+ for i, a := range v.Args {
+ if opcodeTable[a.Op].argLen == 0 {
+ key := vkey{a.Op, keyFor(a), a.Aux, typeStr(a)}
+ if rv, ok := vals[key]; ok {
+ v.Args[i] = rv
+ }
+ }
+ }
+ }
+ }
+}
+
+// vkey is a type used to uniquely identify a zero arg value.
+type vkey struct {
+ op Op
+ ai int64 // aux int
+ ax interface{} // aux
+ t string // type
+}
+
+// typeStr returns a string version of the type of v.
+func typeStr(v *Value) string {
+ if v.Type == nil {
+ return ""
+ }
+ return v.Type.String()
+}
+
+// keyFor returns the AuxInt portion of a key structure uniquely identifying a
+// zero arg value for the supported ops.
+func keyFor(v *Value) int64 {
+ switch v.Op {
+ case OpConst64, OpConst64F, OpConst32F:
+ return v.AuxInt
+ case OpConst32:
+ return int64(int32(v.AuxInt))
+ case OpConst16:
+ return int64(int16(v.AuxInt))
+ case OpConst8, OpConstBool:
+ return int64(int8(v.AuxInt))
+ default:
+ return v.AuxInt
+ }
+}
split64(r, &lo2, &hi2)
}
- // Do op. Leave result in DX:AX.
+ // Do op. Leave result in DX:AX.
switch n.Op {
// TODO: Constants
case gc.OADD:
)
func defframe(ptxt *obj.Prog) {
- var n *gc.Node
-
// fill in argument size, stack size
ptxt.To.Type = obj.TYPE_TEXTSIZE
hi := int64(0)
lo := hi
ax := uint32(0)
- for l := gc.Curfn.Func.Dcl; l != nil; l = l.Next {
- n = l.N
+ for _, n := range gc.Curfn.Func.Dcl {
if !n.Name.Needzero {
continue
}
// The way the code generator uses floating-point
// registers, a move from F0 to F0 is intended as a no-op.
// On the x86, it's not: it pushes a second copy of F0
- // on the floating point stack. So toss it away here.
+ // on the floating point stack. So toss it away here.
// Also, F0 is the *only* register we ever evaluate
// into, so we should only see register/register as F0/F0.
/*
// MOVSD removal.
// We never use packed registers, so a MOVSD between registers
// can be replaced by MOVAPD, which moves the pair of float64s
- // instead of just the lower one. We only use the lower one, but
+ // instead of just the lower one. We only use the lower one, but
// the processor can do better if we do moves using both.
for r := g.Start; r != nil; r = r.Link {
p = r.Prog
}
if regtyp(&p.From) || p.From.Type == obj.TYPE_CONST {
- // move or artihmetic into partial register.
+ // move or arithmetic into partial register.
// from another register or constant can be movl.
// we don't switch to 32-bit arithmetic if it can
// change how the carry bit is set (and the carry bit is needed).
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
x86.AJPL: {Flags: gc.Cjmp | gc.UseCarry},
x86.AJPS: {Flags: gc.Cjmp | gc.UseCarry},
obj.AJMP: {Flags: gc.Jump | gc.Break | gc.KillCarry},
+ x86.ALEAW: {Flags: gc.LeftAddr | gc.RightWrite},
x86.ALEAL: {Flags: gc.LeftAddr | gc.RightWrite},
x86.AMOVBLSX: {Flags: gc.SizeL | gc.LeftRead | gc.RightWrite | gc.Conv},
x86.AMOVBLZX: {Flags: gc.SizeL | gc.LeftRead | gc.RightWrite | gc.Conv},
x86.AORW: {Flags: gc.SizeW | gc.LeftRead | RightRdwr | gc.SetCarry},
x86.APOPL: {Flags: gc.SizeL | gc.RightWrite},
x86.APUSHL: {Flags: gc.SizeL | gc.LeftRead},
+ x86.APXOR: {Flags: gc.SizeD | gc.LeftRead | RightRdwr},
x86.ARCLB: {Flags: gc.SizeB | gc.LeftRead | RightRdwr | gc.ShiftCX | gc.SetCarry | gc.UseCarry},
x86.ARCLL: {Flags: gc.SizeL | gc.LeftRead | RightRdwr | gc.ShiftCX | gc.SetCarry | gc.UseCarry},
x86.ARCLW: {Flags: gc.SizeW | gc.LeftRead | RightRdwr | gc.ShiftCX | gc.SetCarry | gc.UseCarry},
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
n.List = f.addCounters(n.Lbrace, n.Rbrace+1, n.List, true) // +1 to step past closing brace.
case *ast.IfStmt:
+ if n.Init != nil {
+ ast.Walk(f, n.Init)
+ }
+ ast.Walk(f, n.Cond)
ast.Walk(f, n.Body)
if n.Else == nil {
return nil
case *ast.SwitchStmt:
// Don't annotate an empty switch - creates a syntax error.
if n.Body == nil || len(n.Body.List) == 0 {
+ if n.Init != nil {
+ ast.Walk(f, n.Init)
+ }
+ if n.Tag != nil {
+ ast.Walk(f, n.Tag)
+ }
return nil
}
case *ast.TypeSwitchStmt:
// Don't annotate an empty type switch - creates a syntax error.
if n.Body == nil || len(n.Body.List) == 0 {
+ if n.Init != nil {
+ ast.Walk(f, n.Init)
+ }
+ ast.Walk(f, n.Assign)
return nil
}
}
var slashslash = []byte("//")
// initialComments returns the prefix of content containing only
-// whitespace and line comments. Any +build directives must appear
-// within this region. This approach is more reliable than using
+// whitespace and line comments. Any +build directives must appear
+// within this region. This approach is more reliable than using
// go/printer to print a modified AST containing comments.
//
func initialComments(content []byte) []byte {
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
testSelect2()
testPanic()
testEmptySwitches()
+ testFunctionLiteral()
}
// The indexes of the counters in testPanic are known to main.go
<-c
check(LINE, 1)
}
+
+func testFunctionLiteral() {
+ a := func(f func()) error {
+ f()
+ f()
+ return nil
+ }
+
+ b := func(f func()) bool {
+ f()
+ f()
+ return true
+ }
+
+ check(LINE, 1)
+ a(func() {
+ check(LINE, 2)
+ })
+
+ if err := a(func() {
+ check(LINE, 2)
+ }); err != nil {
+ }
+
+ switch b(func() {
+ check(LINE, 2)
+ }) {
+ }
+}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
import (
"bytes"
+ "encoding/json"
"flag"
"fmt"
"os"
}
// The $GOROOT/VERSION.cache file is a cache to avoid invoking
- // git every time we run this command. Unlike VERSION, it gets
+ // git every time we run this command. Unlike VERSION, it gets
// deleted by the clean command.
path = pathf("%s/VERSION.cache", goroot)
if isfile(path) {
// Create object directory.
// We keep it in pkg/ so that all the generated binaries
- // are in one tree. If pkg/obj/libgc.a exists, it is a dreg from
- // before we used subdirectories of obj. Delete all of obj
+ // are in one tree. If pkg/obj/libgc.a exists, it is a dreg from
+ // before we used subdirectories of obj. Delete all of obj
// to clean up.
if p := pathf("%s/pkg/obj/libgc.a", goroot); isfile(p) {
xremoveall(pathf("%s/pkg/obj", goroot))
{"runtime/internal/sys", []string{
"zversion.go",
}},
+ {"go/build", []string{
+ "zcgo.go",
+ }},
}
// depsuffix records the allowed suffixes for source files.
}{
{"zdefaultcc.go", mkzdefaultcc},
{"zversion.go", mkzversion},
+ {"zcgo.go", mkzcgo},
// not generated anymore, but delete the file if we see it
{"enam.c", nil},
}
return !matchtag(tag[1:])
}
- return tag == goos || tag == goarch || tag == "cmd_go_bootstrap" || tag == "go1.1" || (goos == "android" && tag == "linux")
+ return tag == "gc" || tag == goos || tag == goarch || tag == "cmd_go_bootstrap" || tag == "go1.1" || (goos == "android" && tag == "linux")
}
// shouldbuild reports whether we should build this file.
if p == "" {
continue
}
- if strings.Contains(p, "package documentation") {
+ code := p
+ i := strings.Index(code, "//")
+ if i > 0 {
+ code = strings.TrimSpace(code[:i])
+ }
+ if code == "package documentation" {
return false
}
- if strings.Contains(p, "package main") && dir != "cmd/go" && dir != "cmd/cgo" {
+ if code == "package main" && dir != "cmd/go" && dir != "cmd/cgo" {
return false
}
if !strings.HasPrefix(p, "//") {
if !strings.Contains(p, "+build") {
continue
}
- fields := splitfields(p)
- if len(fields) < 2 || fields[1] != "+build" {
+ fields := splitfields(p[2:])
+ if len(fields) < 1 || fields[0] != "+build" {
continue
}
- for _, p := range fields[2:] {
+ for _, p := range fields[1:] {
if matchfield(p) {
goto fieldmatch
}
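
The hunk above changes shouldbuild to strip the leading `//` before splitting, so a `+build` line is recognized even without a space after the comment marker. A minimal sketch of that parsing logic, assuming `strings.Fields` as a stand-in for dist's `splitfields` helper (the name `parseBuildTags` is illustrative, not from the patch):

```go
package main

import (
	"fmt"
	"strings"
)

// parseBuildTags returns the build tags on a "// +build" comment line,
// or nil if the line is not a build constraint. Stripping the "//"
// prefix first means fields[0] must be "+build", which also accepts
// the no-space form "//+build".
func parseBuildTags(line string) []string {
	if !strings.HasPrefix(line, "//") {
		return nil
	}
	fields := strings.Fields(line[2:])
	if len(fields) < 1 || fields[0] != "+build" {
		return nil
	}
	return fields[1:]
}

func main() {
	fmt.Println(parseBuildTags("// +build linux darwin"))
	fmt.Println(parseBuildTags("//+build gc"))
	fmt.Println(parseBuildTags("// just a comment"))
}
```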
"clean deletes all built files\n" +
"env [-p] print environment (-p: include $PATH)\n" +
"install [dir] install individual directory\n" +
+ "list [-json] list all supported platforms\n" +
"test [-h] run Go test(s)\n" +
"version print Go version\n" +
"\n" +
}
}
-// Copied from go/build/build.go.
// Cannot use go/build directly because cmd/dist for a new release
// builds against an old release's go/build, which may be out of sync.
+// To reduce duplication, we generate the list for go/build from this.
+//
+// We list all supported platforms in this list, so that this is the
+// single point of truth for supported platforms. This list is used
+// by 'go tool dist list'.
var cgoEnabled = map[string]bool{
"darwin/386": true,
"darwin/amd64": true,
"dragonfly/amd64": true,
"freebsd/386": true,
"freebsd/amd64": true,
+ "freebsd/arm": false,
"linux/386": true,
"linux/amd64": true,
"linux/arm": true,
"linux/arm64": true,
+ "linux/ppc64": false,
"linux/ppc64le": true,
+ "linux/mips64": false,
+ "linux/mips64le": false,
"android/386": true,
"android/amd64": true,
"android/arm": true,
+ "android/arm64": true,
+ "nacl/386": false,
+ "nacl/amd64p32": false,
+ "nacl/arm": false,
"netbsd/386": true,
"netbsd/amd64": true,
"netbsd/arm": true,
"openbsd/386": true,
"openbsd/amd64": true,
+ "openbsd/arm": false,
+ "plan9/386": false,
+ "plan9/amd64": false,
+ "plan9/arm": false,
"solaris/amd64": true,
"windows/386": true,
"windows/amd64": true,
fatal("current directory %s is not under %s", pwd, real_src)
}
pwd = pwd[len(real_src):]
- // guard againt xrealwd return the directory without the trailing /
+ // guard against xrealwd returning the directory without the trailing /
pwd = strings.TrimPrefix(pwd, "/")
return pwd
xflagparse(0)
xprintf("%s\n", findgoversion())
}
+
+// cmdlist lists all supported platforms.
+func cmdlist() {
+ jsonFlag := flag.Bool("json", false, "produce JSON output")
+ xflagparse(0)
+
+ var plats []string
+ for p := range cgoEnabled {
+ plats = append(plats, p)
+ }
+ sort.Strings(plats)
+
+ if !*jsonFlag {
+ for _, p := range plats {
+ xprintf("%s\n", p)
+ }
+ return
+ }
+
+ type jsonResult struct {
+ GOOS string
+ GOARCH string
+ CgoSupported bool
+ }
+ var results []jsonResult
+ for _, p := range plats {
+ fields := strings.Split(p, "/")
+ results = append(results, jsonResult{
+ GOOS: fields[0],
+ GOARCH: fields[1],
+ CgoSupported: cgoEnabled[p]})
+ }
+ out, err := json.MarshalIndent(results, "", "\t")
+ if err != nil {
+ fatal("json marshal error: %v", err)
+ }
+ if _, err := os.Stdout.Write(out); err != nil {
+ fatal("write failed: %v", err)
+ }
+}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
-import "fmt"
+import (
+ "bytes"
+ "fmt"
+)
/*
* Helpers for building cmd/go and cmd/cgo.
file = file[:i] + "c" + file[i:]
writefile(out, file, writeSkipSame)
}
+
+// mkzcgo writes zcgo.go for go/build package:
+//
+// package build
+// var cgoEnabled = map[string]bool{}
+//
+// It is invoked to write go/build/zcgo.go.
+func mkzcgo(dir, file string) {
+ var buf bytes.Buffer
+
+ fmt.Fprintf(&buf,
+ "// auto generated by go tool dist\n"+
+ "\n"+
+ "package build\n"+
+ "\n"+
+ "var cgoEnabled = map[string]bool{\n")
+ for plat, hasCgo := range cgoEnabled {
+ if hasCgo {
+ fmt.Fprintf(&buf, "\t%q: true,\n", plat)
+ }
+ }
+ fmt.Fprintf(&buf, "}")
+
+ writefile(buf.String(), file, writeSkipSame)
+}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// stackGuardMultiplier returns a multiplier to apply to the default
-// stack guard size. Larger multipliers are used for non-optimized
+// stack guard size. Larger multipliers are used for non-optimized
// builds that have larger stack frames.
func stackGuardMultiplier() int {
for _, s := range strings.Split(os.Getenv("GO_GCFLAGS"), " ") {
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
"compile/internal/gc",
"compile/internal/mips64",
"compile/internal/ppc64",
+ "compile/internal/ssa",
"compile/internal/x86",
"internal/gcprog",
"internal/obj",
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
{"clean", cmdclean},
{"env", cmdenv},
{"install", cmdinstall},
+ {"list", cmdlist},
{"test", cmdtest},
{"version", cmdversion},
}
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
})
// Test that internal linking of standard packages does not
- // require libgcc. This ensures that we can install a Go
+ // require libgcc. This ensures that we can install a Go
// release on a system that does not have a C compiler
// installed and still build Go programs (that don't use cgo).
for _, pkg := range cgoPackages {
return nil
},
})
+ fortran := os.Getenv("FC")
+ if fortran == "" {
+ fortran, _ = exec.LookPath("gfortran")
+ }
+ if fortran != "" {
+ t.tests = append(t.tests, distTest{
+ name: "cgo_fortran",
+ heading: "../misc/cgo/fortran",
+ fn: func(dt *distTest) error {
+ t.addCmd(dt, "misc/cgo/fortran", "./test.bash", fortran)
+ return nil
+ },
+ })
+ }
}
if t.cgoEnabled && t.goos != "android" && !t.iOS() {
// TODO(crawshaw): reenable on android and iOS
t.addCmd(dt, "src", "go", "test", "-race", "-i", "runtime/race", "flag", "os/exec")
t.addCmd(dt, "src", "go", "test", "-race", "-run=Output", "runtime/race")
t.addCmd(dt, "src", "go", "test", "-race", "-short", "-run=TestParse|TestEcho", "flag", "os/exec")
+ // We don't want the following line, because it
+ // slows down all.bash (by 10 seconds on my laptop).
+ // The race builder should catch any error here, but doesn't.
+ // TODO(iant): Figure out how to catch this.
+ // t.addCmd(dt, "src", "go", "test", "-race", "-run=TestParallelTest", "cmd/go")
if t.cgoEnabled {
env := mergeEnvLists([]string{"GOTRACEBACK=2"}, os.Environ())
cmd := t.addCmd(dt, "misc/cgo/test", "go", "test", "-race", "-short")
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
`type ExportedType struct`, // Type definition.
`Comment before exported field.*\n.*ExportedField +int` +
`.*Comment on line with exported field.`,
+ `ExportedEmbeddedType.*Comment on line with exported embedded field.`,
`Has unexported fields`,
`func \(ExportedType\) ExportedMethod\(a int\) bool`,
`const ExportedTypedConstant ExportedType = iota`, // Must include associated constant.
},
[]string{
`unexportedField`, // No unexported field.
+ `int.*embedded`, // No unexported embedded field.
`Comment about exported method.`, // No comment about exported method.
`unexportedMethod`, // No unexported method.
`unexportedTypedConstant`, // No unexported constant.
`Comment about exported type`, // Include comment.
`type ExportedType struct`, // Type definition.
`Comment before exported field.*\n.*ExportedField +int`,
- `unexportedField int.*Comment on line with unexported field.`,
+ `unexportedField.*int.*Comment on line with unexported field.`,
+ `ExportedEmbeddedType.*Comment on line with exported embedded field.`,
+ `\*ExportedEmbeddedType.*Comment on line with exported embedded \*field.`,
+ `unexportedType.*Comment on line with unexported embedded field.`,
+ `\*unexportedType.*Comment on line with unexported embedded \*field.`,
`func \(ExportedType\) unexportedMethod\(a int\) bool`,
`unexportedTypedConstant`,
},
{"", "", "", true},
{"/usr/gopher", "/usr/gopher", "/usr/gopher", true},
{"/usr/gopher/bar", "/usr/gopher", "bar", true},
- {"/usr/gopher", "/usr/gopher", "/usr/gopher", true},
{"/usr/gopherflakes", "/usr/gopher", "/usr/gopherflakes", false},
{"/usr/gopher/bar", "/usr/zot", "/usr/gopher/bar", false},
}
trimmed := false
list := make([]*ast.Field, 0, len(fields.List))
for _, field := range fields.List {
+ names := field.Names
+ if len(names) == 0 {
+ // Embedded type. Use the name of the type. It must be of type ident or *ident.
+ // Nothing else is allowed.
+ switch ident := field.Type.(type) {
+ case *ast.Ident:
+ names = []*ast.Ident{ident}
+ case *ast.StarExpr:
+ // Must have the form *identifier.
+ if ident, ok := ident.X.(*ast.Ident); ok {
+ names = []*ast.Ident{ident}
+ }
+ }
+ if names == nil {
+ // Can only happen if AST is incorrect. Safe to continue with a nil list.
+ log.Print("invalid program: unexpected type for embedded field")
+ }
+ }
// Trims if any is unexported. Good enough in practice.
ok := true
- for _, name := range field.Names {
+ for _, name := range names {
if !isExported(name.Name) {
trimmed = true
ok = false
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Comment about exported type.
type ExportedType struct {
// Comment before exported field.
- ExportedField int // Comment on line with exported field.
- unexportedField int // Comment on line with unexported field.
+ ExportedField int // Comment on line with exported field.
+ unexportedField int // Comment on line with unexported field.
+ ExportedEmbeddedType // Comment on line with exported embedded field.
+ *ExportedEmbeddedType // Comment on line with exported embedded *field.
+ unexportedType // Comment on line with unexported embedded field.
+ *unexportedType // Comment on line with unexported embedded *field.
}
// Comment about exported method.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Print AST. We did that after each fix, so this appears
// redundant, but it is necessary to generate gofmt-compatible
- // source code in a few cases. The official gofmt style is the
+ // source code in a few cases. The official gofmt style is the
// output of the printer run on a standard AST generated by the parser,
// but the source we generated inside the loop above is the
// output of the printer run on a mangled AST generated by a fixer.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The fact that it is partial is very important: the input is
// an AST and a description of some type information to
// assume about one or more packages, but not all the
-// packages that the program imports. The checker is
+// packages that the program imports. The checker is
// expected to do as much as it can with what it has been
-// given. There is not enough information supplied to do
+// given. There is not enough information supplied to do
// a full type check, but the type checker is expected to
// apply information that can be derived from variable
// declarations, function and method returns, and type switches
// TODO(rsc,gri): Replace with go/typechecker.
// Doing that could be an interesting test case for go/typechecker:
// the constraints about working with partial information will
-// likely exercise it in interesting ways. The ideal interface would
+// likely exercise it in interesting ways. The ideal interface would
// be to pass typecheck a map from importpath to package API text
// (Go source code), but for now we use data structures (TypeConfig, Type).
//
// The strings mostly use gofmt form.
//
// A Field or FieldList has as its type a comma-separated list
-// of the types of the fields. For example, the field list
+// of the types of the fields. For example, the field list
// x, y, z int
// has type "int, int, int".
// propagate the type to all the uses.
// The !isDecl case is a cheat here, but it makes
// up in some cases for not paying attention to
- // struct fields. The real type checker will be
+ // struct fields. The real type checker will be
// more accurate so we won't need the cheat.
if id, ok := n.(*ast.Ident); ok && id.Obj != nil && (isDecl || typeof[id.Obj] == "") {
typeof[id.Obj] = typ
typeof[n] = all
case *ast.ValueSpec:
- // var declaration. Use type if present.
+ // var declaration. Use type if present.
if n.Type != nil {
t := typeof[n.Type]
if !isType(t) {
// Convert between function type strings and lists of types.
// Using strings makes this a little harder, but it makes
-// a lot of the rest of the code easier. This will all go away
+// a lot of the rest of the code easier. This will all go away
// when we can use go/typechecker directly.
// splitFunc splits "func(x,y,z) (a,b,c)" into ["x", "y", "z"] and ["a", "b", "c"].
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
being checked out for the first time by 'go get': those are always
placed in the main GOPATH, never in a vendor subtree.
-In Go 1.5, as an experiment, setting the environment variable
-GO15VENDOREXPERIMENT=1 enabled these features.
-As of Go 1.6 they are on by default. To turn them off, set
-GO15VENDOREXPERIMENT=0. In Go 1.7, the environment
-variable will stop having any effect.
-
See https://golang.org/s/go15vendor for details.
installed in a location other than where it is built.
File names in stack traces are rewritten from GOROOT to
GOROOT_FINAL.
- GO15VENDOREXPERIMENT
- Set to 0 to disable vendoring semantics.
GO_EXTLINK_ENABLED
Whether the linker should use external linking mode
when using -linkmode=auto with code that uses cgo.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// libname returns the filename to use for the shared library when using
-// -buildmode=shared. The rules we use are:
+// -buildmode=shared. The rules we use are:
// Use arguments for special 'meta' packages:
// std --> libstd.so
// std cmd --> libstd,cmd.so
goarch string
goos string
exeSuffix string
+ gopath []string
)
func init() {
if goos == "windows" {
exeSuffix = ".exe"
}
+ gopath = filepath.SplitList(buildContext.GOPATH)
}
// A builder holds global state about a build.
work string // the temporary work directory (ends in filepath.Separator)
actionCache map[cacheKey]*action // a cache of already-constructed actions
mkdirCache map[string]bool // a cache of created directories
+ flagCache map[string]bool // a cache of supported compiler flags
print func(args ...interface{}) (int, error)
output sync.Mutex
// Synthesize fake "directory" that only shows the named files,
// to make it look like this is a standard package or
- // command directory. So that local imports resolve
+ // command directory. So that local imports resolve
// consistently, the files must all be in the same directory.
var dirent []os.FileInfo
var dir string
// If we are not doing a cross-build, then record the binary we'll
// generate for cgo as a dependency of the build of any package
// using cgo, to make sure we do not overwrite the binary while
- // a package is using it. If this is a cross-build, then the cgo we
+ // a package is using it. If this is a cross-build, then the cgo we
// are writing is not the cgo we need to use.
if goos == runtime.GOOS && goarch == runtime.GOARCH && !buildRace && !buildMSan {
if (len(p.CgoFiles) > 0 || p.Standard && p.ImportPath == "runtime/cgo") && !buildLinkshared && buildBuildmode != "shared" {
}
if p.local && p.target == "" {
- // Imported via local path. No permanent target.
+ // Imported via local path. No permanent target.
mode = modeBuild
}
work := p.pkgdir
// the name will show up in ps listings. If the caller has specified
// a name, use that instead of a.out. The binary is generated
// in an otherwise empty subdirectory named exe to avoid
- // naming conflicts. The only possible conflict is if we were
+ // naming conflicts. The only possible conflict is if we were
// to create a top-level package named exe.
name := "a.out"
if p.exeName != "" {
// The original implementation here was a true queue
// (using a channel) but it had the effect of getting
// distracted by low-level leaf actions to the detriment
- // of completing higher-level actions. The order of
+ // of completing higher-level actions. The order of
// work does not matter much to overall execution time,
// but when running "go test std" it is nice to see each test
- // results as soon as possible. The priorities assigned
+ // results as soon as possible. The priorities assigned
// ensure that, all else being equal, the execution prefers
// to do what it would have done first in a simple depth-first
// dependency order traversal.
return fmt.Errorf("can't build package %s because it contains Objective-C files (%s) but it's not using cgo nor SWIG",
a.p.ImportPath, strings.Join(a.p.MFiles, ","))
}
+ // Same as above for Fortran files
+ if len(a.p.FFiles) > 0 && !a.p.usesCgo() && !a.p.usesSwig() {
+ return fmt.Errorf("can't build package %s because it contains Fortran files (%s) but it's not using cgo nor SWIG",
+ a.p.ImportPath, strings.Join(a.p.FFiles, ","))
+ }
defer func() {
if err != nil && err != errPrintedOutput {
err = fmt.Errorf("go build %s: %v", a.p.ImportPath, err)
if a.cgo != nil && a.cgo.target != "" {
cgoExe = a.cgo.target
}
- outGo, outObj, err := b.cgo(a.p, cgoExe, obj, pcCFLAGS, pcLDFLAGS, cgofiles, gccfiles, cxxfiles, a.p.MFiles)
+ outGo, outObj, err := b.cgo(a.p, cgoExe, obj, pcCFLAGS, pcLDFLAGS, cgofiles, gccfiles, cxxfiles, a.p.MFiles, a.p.FFiles)
if err != nil {
return err
}
// NOTE(rsc): On Windows, it is critically important that the
// gcc-compiled objects (cgoObjects) be listed after the ordinary
- // objects in the archive. I do not know why this is.
+ // objects in the archive. I do not know why this is.
// https://golang.org/issue/2601
objects = append(objects, cgoObjects...)
}
// remove object dir to keep the amount of
- // garbage down in a large build. On an operating system
+ // garbage down in a large build. On an operating system
// with aggressive buffering, cleaning incrementally like
// this keeps the intermediate objects from hitting the disk.
if !buildWork {
inc = append(inc, flag, b.work)
// Finally, look in the installed package directories for each action.
+ // First add the package dirs corresponding to GOPATH entries
+ // in the original GOPATH order.
+ need := map[string]*build.Package{}
+ for _, a1 := range all {
+ if a1.p != nil && a1.pkgdir == a1.p.build.PkgRoot {
+ need[a1.p.build.Root] = a1.p.build
+ }
+ }
+ for _, root := range gopath {
+ if p := need[root]; p != nil && !incMap[p.PkgRoot] {
+ incMap[p.PkgRoot] = true
+ inc = append(inc, flag, p.PkgTargetRoot)
+ }
+ }
+
+ // Then add anything that's left.
for _, a1 := range all {
if a1.p == nil {
continue
df, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
if err != nil && toolIsWindows {
// Windows does not allow deletion of a binary file
- // while it is executing. Try to move it out of the way.
+ // while it is executing. Try to move it out of the way.
// If the move fails, which is likely, we'll try again the
// next time we do an install of this binary.
if err := os.Rename(dst, dst+"~"); err == nil {
// The output is expected to contain references to 'dir', usually
// the source directory for the package that has failed to build.
// showOutput rewrites mentions of dir with a relative path to dir
-// when the relative path is shorter. This is usually more pleasant.
+// when the relative path is shorter. This is usually more pleasant.
// For example, if fmt doesn't compile and we are in src/html,
// the output is
//
// errPrintedOutput is a special error indicating that a command failed
// but that it generated output as well, and that output has already
// been printed, so there's no point showing 'exit status 1' or whatever
-// the wait status was. The main executor, builder.do, knows not to
+// the wait status was. The main executor, builder.do, knows not to
// print this error.
var errPrintedOutput = errors.New("already printed output - no need to show error")
err := cmd.Run()
// cmd.Run will fail on Unix if some other process has the binary
- // we want to run open for writing. This can happen here because
+ // we want to run open for writing. This can happen here because
// we build and install the cgo command and then run it.
// If another command was kicked off while we were writing the
// cgo binary, the child process for that command may be holding
// The answer is that running a command is fork and exec.
// A child forked while the cgo fd is open inherits that fd.
// Until the child has called exec, it holds the fd open and the
- // kernel will not let us run cgo. Even if the child were to close
+ // kernel will not let us run cgo. Even if the child were to close
// the fd explicitly, it would still be open from the time of the fork
// until the time of the explicit close, and the race would remain.
//
// On Unix systems, this results in ETXTBSY, which formats
// as "text file busy". Rather than hard-code specific error cases,
- // we just look for that string. If this happens, sleep a little
- // and try again. We let this happen three times, with increasing
+ // we just look for that string. If this happens, sleep a little
+ // and try again. We let this happen three times, with increasing
// sleep lengths: 100+200+400 ms = 0.7 seconds.
//
// An alternate solution might be to split the cmd.Run into
// separate cmd.Start and cmd.Wait, and then use an RWLock
// to make sure that copyFile only executes when no cmd.Start
- // call is in progress. However, cmd.Start (really syscall.forkExec)
+ // call is in progress. However, cmd.Start (really syscall.forkExec)
// only guarantees that when it returns, the exec is committed to
- // happen and succeed. It uses a close-on-exec file descriptor
+ // happen and succeed. It uses a close-on-exec file descriptor
// itself to determine this, so we know that when cmd.Start returns,
// at least one close-on-exec file descriptor has been closed.
// However, we cannot be sure that all of them have been closed,
// so the program might still encounter ETXTBSY even with such
- // an RWLock. The race window would be smaller, perhaps, but not
+ // an RWLock. The race window would be smaller, perhaps, but not
// guaranteed to be gone.
//
// Sleeping when we observe the race seems to be the most reliable
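The retry strategy the comment describes can be sketched as a standalone helper. This is an illustrative simplification, not the builder's actual code path: the "text file busy" match and the three tries with doubling sleeps (100+200+400 ms) come from the comment above, while the helper name and command handling are assumptions.

```go
package main

import (
	"os/exec"
	"strings"
	"time"
)

// runWithRetry runs a command, retrying up to three times with
// increasing sleeps when the transient ETXTBSY race shows up as
// "text file busy" in the error or combined output.
func runWithRetry(name string, args ...string) error {
	delay := 100 * time.Millisecond
	var err error
	for tries := 0; tries < 3; tries++ {
		var out []byte
		out, err = exec.Command(name, args...).CombinedOutput()
		if err == nil || !strings.Contains(string(out)+err.Error(), "text file busy") {
			return err
		}
		// A child forked while our binary was open for writing
		// still holds the fd; back off and try again.
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	_ = runWithRetry("true")
}
```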
b.exec.Lock()
defer b.exec.Unlock()
// We can be a little aggressive about being
- // sure directories exist. Skip repeated calls.
+ // sure directories exist. Skip repeated calls.
if b.mkdirCache[dir] {
return nil
}
// so that it can give good error messages about forward declarations.
// Exceptions: a few standard packages have forward declarations for
// pieces supplied behind-the-scenes by package runtime.
- extFiles := len(p.CgoFiles) + len(p.CFiles) + len(p.CXXFiles) + len(p.MFiles) + len(p.SFiles) + len(p.SysoFiles) + len(p.SwigFiles) + len(p.SwigCXXFiles)
+ extFiles := len(p.CgoFiles) + len(p.CFiles) + len(p.CXXFiles) + len(p.MFiles) + len(p.FFiles) + len(p.SFiles) + len(p.SysoFiles) + len(p.SwigFiles) + len(p.SwigCXXFiles)
if p.Standard {
switch p.ImportPath {
case "bytes", "net", "os", "runtime/pprof", "sync", "time":
usesCgo := false
cxx := len(root.p.CXXFiles) > 0 || len(root.p.SwigCXXFiles) > 0
objc := len(root.p.MFiles) > 0
+ fortran := len(root.p.FFiles) > 0
actionsSeen := make(map[*action]bool)
// Make a pre-order depth-first traversal of the action graph, taking note of
if len(a.p.MFiles) > 0 {
objc = true
}
+ if len(a.p.FFiles) > 0 {
+ fortran = true
+ }
}
ldflags = append(ldflags, "-Wl,--whole-archive")
// initialization code.
//
// The user remains responsible for linking against
- // -lgo -lpthread -lm in the final link. We can't use
+ // -lgo -lpthread -lm in the final link. We can't use
// -r to pick them up because we can't combine
// split-stack and non-split-stack code in a single -r
// link, and libgo picks up non-split-stack code from
if objc {
ldflags = append(ldflags, "-lobjc")
}
+ if fortran {
+ fc := os.Getenv("FC")
+ if fc == "" {
+ fc = "gfortran"
+ }
+ // Support gfortran out of the box and let others pass the correct link options
+ // via CGO_LDFLAGS
+ if strings.Contains(fc, "gfortran") {
+ ldflags = append(ldflags, "-lgfortran")
+ }
+ }
}
if err := b.run(".", root.p.ImportPath, nil, tools.linker(), "-o", out, ofiles, ldflags, buildGccgoflags); err != nil {
return b.ccompile(p, out, flags, cxxfile, b.gxxCmd(p.Dir))
}
+// gfortran runs the gfortran Fortran compiler to create an object from a single Fortran file.
+func (b *builder) gfortran(p *Package, out string, flags []string, ffile string) error {
+ return b.ccompile(p, out, flags, ffile, b.gfortranCmd(p.Dir))
+}
+
// ccompile runs the given C or C++ compiler and creates an object from a single source file.
func (b *builder) ccompile(p *Package, out string, flags []string, file string, compiler []string) error {
file = mkAbs(p.Dir, file)
return b.ccompilerCmd("CXX", defaultCXX, objdir)
}
+// gfortranCmd returns a gfortran command line prefix.
+func (b *builder) gfortranCmd(objdir string) []string {
+ return b.ccompilerCmd("FC", "gfortran", objdir)
+}
+
// ccompilerCmd returns a command line prefix for the given environment
// variable and using the default command when the variable is empty.
func (b *builder) ccompilerCmd(envvar, defcmd, objdir string) []string {
// disable word wrapping in error messages
a = append(a, "-fmessage-length=0")
+ // Tell gcc not to include the work directory in object files.
+ if b.gccSupportsFlag("-fdebug-prefix-map=a=b") {
+ a = append(a, "-fdebug-prefix-map="+b.work+"=/tmp/go-build")
+ }
+
+ // Tell gcc not to include flags in object files, which defeats the
+ // point of -fdebug-prefix-map above.
+ if b.gccSupportsFlag("-gno-record-gcc-switches") {
+ a = append(a, "-gno-record-gcc-switches")
+ }
+
// On OS X, some of the compilers behave as if -fno-common
// is always set, and the Mach-O linker in 6l/8l assumes this.
// See https://golang.org/issue/3253.
// -no-pie must be passed when doing a partial link with -Wl,-r. But -no-pie is
// not supported by all compilers.
func (b *builder) gccSupportsNoPie() bool {
- if goos != "linux" {
- // On some BSD platforms, error messages from the
- // compiler make it to the console despite cmd.Std*
- // all being nil. As -no-pie is only required on linux
- // systems so far, we only test there.
- return false
+ return b.gccSupportsFlag("-no-pie")
+}
+
+// gccSupportsFlag checks to see if the compiler supports a flag.
+func (b *builder) gccSupportsFlag(flag string) bool {
+ b.exec.Lock()
+ defer b.exec.Unlock()
+ if b, ok := b.flagCache[flag]; ok {
+ return b
}
- src := filepath.Join(b.work, "trivial.c")
- if err := ioutil.WriteFile(src, []byte{}, 0666); err != nil {
- return false
+ if b.flagCache == nil {
+ src := filepath.Join(b.work, "trivial.c")
+ if err := ioutil.WriteFile(src, []byte{}, 0666); err != nil {
+ return false
+ }
+ b.flagCache = make(map[string]bool)
}
- cmdArgs := b.gccCmd(b.work)
- cmdArgs = append(cmdArgs, "-no-pie", "-c", "trivial.c")
+ cmdArgs := append(envList("CC", defaultCC), flag, "-c", "trivial.c")
if buildN || buildX {
b.showcmd(b.work, "%s", joinUnambiguously(cmdArgs))
if buildN {
cmd.Dir = b.work
cmd.Env = envForDir(cmd.Dir, os.Environ())
out, err := cmd.CombinedOutput()
- return err == nil && !bytes.Contains(out, []byte("unrecognized"))
+ supported := err == nil && !bytes.Contains(out, []byte("unrecognized"))
+ b.flagCache[flag] = supported
+ return supported
}
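The probe-and-cache approach above can be reproduced outside the builder. A hedged sketch: the real code reuses the build work directory and serializes access with a lock, whereas here the temporary directory, the cache variable, and the function name are assumptions.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

var flagCache = map[string]bool{}

// ccSupportsFlag reports whether compiler cc accepts flag, by
// compiling an empty C file and checking for "unrecognized" in
// the diagnostics. Results are memoized per flag.
func ccSupportsFlag(cc, flag string) bool {
	if v, ok := flagCache[flag]; ok {
		return v
	}
	dir, err := ioutil.TempDir("", "flagprobe")
	if err != nil {
		return false
	}
	defer os.RemoveAll(dir)
	src := filepath.Join(dir, "trivial.c")
	if err := ioutil.WriteFile(src, []byte{}, 0666); err != nil {
		return false
	}
	cmd := exec.Command(cc, flag, "-c", "trivial.c")
	cmd.Dir = dir
	out, err := cmd.CombinedOutput()
	ok := err == nil && !strings.Contains(string(out), "unrecognized")
	flagCache[flag] = ok
	return ok
}

func main() {
	fmt.Println(ccSupportsFlag("gcc", "-fmessage-length=0"))
}
```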
// gccArchArgs returns arguments to pass to gcc based on the architecture.
return strings.Fields(v)
}
-// Return the flags to use when invoking the C or C++ compilers, or cgo.
-func (b *builder) cflags(p *Package, def bool) (cppflags, cflags, cxxflags, ldflags []string) {
+// Return the flags to use when invoking the C, C++ or Fortran compilers, or cgo.
+func (b *builder) cflags(p *Package, def bool) (cppflags, cflags, cxxflags, fflags, ldflags []string) {
var defaults string
if def {
defaults = "-g -O2"
cppflags = stringList(envList("CGO_CPPFLAGS", ""), p.CgoCPPFLAGS)
cflags = stringList(envList("CGO_CFLAGS", defaults), p.CgoCFLAGS)
cxxflags = stringList(envList("CGO_CXXFLAGS", defaults), p.CgoCXXFLAGS)
+ fflags = stringList(envList("CGO_FFLAGS", defaults), p.CgoFFLAGS)
ldflags = stringList(envList("CGO_LDFLAGS", defaults), p.CgoLDFLAGS)
return
}
var cgoRe = regexp.MustCompile(`[/\\:]`)
-func (b *builder) cgo(p *Package, cgoExe, obj string, pcCFLAGS, pcLDFLAGS, cgofiles, gccfiles, gxxfiles, mfiles []string) (outGo, outObj []string, err error) {
- cgoCPPFLAGS, cgoCFLAGS, cgoCXXFLAGS, cgoLDFLAGS := b.cflags(p, true)
- _, cgoexeCFLAGS, _, _ := b.cflags(p, false)
+func (b *builder) cgo(p *Package, cgoExe, obj string, pcCFLAGS, pcLDFLAGS, cgofiles, gccfiles, gxxfiles, mfiles, ffiles []string) (outGo, outObj []string, err error) {
+ cgoCPPFLAGS, cgoCFLAGS, cgoCXXFLAGS, cgoFFLAGS, cgoLDFLAGS := b.cflags(p, true)
+ _, cgoexeCFLAGS, _, _, _ := b.cflags(p, false)
cgoCPPFLAGS = append(cgoCPPFLAGS, pcCFLAGS...)
cgoLDFLAGS = append(cgoLDFLAGS, pcLDFLAGS...)
// If we are compiling Objective-C code, then we need to link against libobjc
cgoLDFLAGS = append(cgoLDFLAGS, "-lobjc")
}
+ // Likewise for Fortran, except there are many Fortran compilers.
+ // Support gfortran out of the box and let others pass the correct link options
+ // via CGO_LDFLAGS
+ if len(ffiles) > 0 {
+ fc := os.Getenv("FC")
+ if fc == "" {
+ fc = "gfortran"
+ }
+ if strings.Contains(fc, "gfortran") {
+ cgoLDFLAGS = append(cgoLDFLAGS, "-lgfortran")
+ }
+ }
+
if buildMSan && p.ImportPath != "runtime/cgo" {
cgoCFLAGS = append([]string{"-fsanitize=memory"}, cgoCFLAGS...)
cgoLDFLAGS = append([]string{"-fsanitize=memory"}, cgoLDFLAGS...)
case strings.HasPrefix(f, "-fsanitize="):
continue
// runpath flags not applicable unless building a shared
- // object or executable; see issue 12115 for details. This
+ // object or executable; see issue 12115 for details. This
// is necessary as Go currently does not offer a way to
// specify the set of LDFLAGS that only apply to shared
// objects.
outObj = append(outObj, ofile)
}
+ fflags := stringList(cgoCPPFLAGS, cgoFFLAGS)
+ for _, file := range ffiles {
+ // Append .o to the file, just in case the pkg has file.c and file.f
+ ofile := obj + cgoRe.ReplaceAllString(file, "_") + ".o"
+ if err := b.gfortran(p, ofile, fflags, file); err != nil {
+ return nil, nil, err
+ }
+ linkobj = append(linkobj, ofile)
+ outObj = append(outObj, ofile)
+ }
+
linkobj = append(linkobj, p.SysoFiles...)
dynobj := obj + "_cgo_.o"
pie := (goarch == "arm" && goos == "linux") || goos == "android"
return swigCheck
}
+// Find the value to pass for the -intgosize option to swig.
+var (
+ swigIntSizeOnce sync.Once
+ swigIntSize string
+ swigIntSizeError error
+)
+
// This code fails to build if sizeof(int) <= 32
const swigIntSizeCode = `
package main
`
// Determine the size of int on the target system for the -intgosize option
-// of swig >= 2.0.9
-func (b *builder) swigIntSize(obj string) (intsize string, err error) {
+// of swig >= 2.0.9. Run only once.
+func (b *builder) swigDoIntSize(obj string) (intsize string, err error) {
if buildN {
return "$INTBITS", nil
}
return "64", nil
}
+// Determine the size of int on the target system for the -intgosize option
+// of swig >= 2.0.9.
+func (b *builder) swigIntSize(obj string) (intsize string, err error) {
+ swigIntSizeOnce.Do(func() {
+ swigIntSize, swigIntSizeError = b.swigDoIntSize(obj)
+ })
+ return swigIntSize, swigIntSizeError
+}
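The run-once memoization used by swigIntSize above is a general pattern: cache both the value and the error under a sync.Once so an expensive probe runs at most once. A minimal sketch with hypothetical names:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	confOnce sync.Once
	confVal  string
	confErr  error
	calls    int
)

// loadConfig computes its result at most once and caches both the
// value and the error, as swigIntSize does with swigDoIntSize.
func loadConfig() (string, error) {
	confOnce.Do(func() {
		calls++ // counts actual computations, for illustration
		confVal, confErr = "64", nil
	})
	return confVal, confErr
}

func main() {
	v, _ := loadConfig()
	loadConfig() // second call returns the cached result
	fmt.Println(v, calls)
}
```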
+
// Run SWIG on one SWIG input file.
func (b *builder) swigOne(p *Package, file, obj string, pcCFLAGS []string, cxx bool, intgosize string) (outGo, outC string, err error) {
- cgoCPPFLAGS, cgoCFLAGS, cgoCXXFLAGS, _ := b.cflags(p, true)
+ cgoCPPFLAGS, cgoCFLAGS, cgoCXXFLAGS, _, _ := b.cflags(p, true)
var cflags []string
if cxx {
cflags = stringList(cgoCPPFLAGS, pcCFLAGS, cgoCXXFLAGS)
// disableBuildID adjusts a linker command line to avoid creating a
// build ID when creating an object file rather than an executable or
-// shared library. Some systems, such as Ubuntu, always add
+// shared library. Some systems, such as Ubuntu, always add
// --build-id to every link, but we don't want a build ID when we are
-// producing an object file. On some of those system a plain -r (not
+// producing an object file. On some of those systems a plain -r (not
// -Wl,-r) will turn off --build-id, but clang 3.0 doesn't support a
-// plain -r. I don't know how to turn off --build-id when using clang
-// other than passing a trailing --build-id=none. So that is what we
+// plain -r. I don't know how to turn off --build-id when using clang
+// other than passing a trailing --build-id=none. So that is what we
// do, but only on systems likely to support it, which is to say,
// systems that normally use gold or the GNU linker.
func (b *builder) disableBuildID(ldflags []string) []string {
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
var b builder
b.init()
- vendorExpValue := "0"
- if go15VendorExperiment {
- vendorExpValue = "1"
- }
-
env := []envVar{
{"GOARCH", goarch},
{"GOBIN", gobin},
{"GORACE", os.Getenv("GORACE")},
{"GOROOT", goroot},
{"GOTOOLDIR", toolDir},
- {"GO15VENDOREXPERIMENT", vendorExpValue},
// disable escape codes in clang errors
{"TERM", "dumb"},
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Code we downloaded and all code that depends on it
// needs to be evicted from the package cache so that
- // the information will be recomputed. Instead of keeping
+ // the information will be recomputed. Instead of keeping
// track of the reverse dependency information, evict
// everything.
for name := range packageCache {
delete(packageCache, name)
}
+ // In order to rebuild packages information completely,
+ // we need to clear commands cache. Command packages are
+ // referring to evicted packages from the package cache.
+ // This leads to duplicated loads of the standard packages.
+ for name := range cmdCache {
+ delete(cmdCache, name)
+ }
+
args = importPaths(args)
packagesForBuild(args)
}
// downloadPaths prepares the list of paths to pass to download.
-// It expands ... patterns that can be expanded. If there is no match
+// It expands ... patterns that can be expanded. If there is no match
// for a particular pattern, downloadPaths leaves it in the result list,
// in the hope that we can figure out the repository from the
// initial ...-free prefix.
if strings.Contains(a, "...") {
var expand []string
// Use matchPackagesInFS to avoid printing
- // warnings. They will be printed by the
+ // warnings. They will be printed by the
// eventual call to importPaths instead.
if build.IsLocalImport(a) {
expand = matchPackagesInFS(a)
return
}
- // Warn that code.google.com is shutting down. We
+ // Warn that code.google.com is shutting down. We
// issue the warning here because this is where we
// have the import stack.
if strings.HasPrefix(p.ImportPath, "code.google.com") {
}
if p.build.SrcRoot != "" {
- // Directory exists. Look for checkout along path to src.
+ // Directory exists. Look for checkout along path to src.
vcs, rootPath, err = vcsForDir(p)
if err != nil {
return err
}
if p.build.SrcRoot == "" {
- // Package not found. Put in first directory of $GOPATH.
+ // Package not found. Put in first directory of $GOPATH.
list := filepath.SplitList(buildContext.GOPATH)
if len(list) == 0 {
return fmt.Errorf("cannot download, $GOPATH not set. For more details see: go help gopath")
return fmt.Errorf("%s exists but is not a directory", meta)
}
if err != nil {
- // Metadata directory does not exist. Prepare to checkout new copy.
+ // Metadata directory does not exist. Prepare to checkout new copy.
// Some version control tools require the target directory not to exist.
// We require that too, just to avoid stepping on existing work.
if _, err := os.Stat(root); err == nil {
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
"fmt"
"go/build"
"go/format"
+ "internal/race"
"internal/testenv"
"io"
"io/ioutil"
flag.Parse()
if canRun {
- out, err := exec.Command("go", "build", "-tags", "testgo", "-o", "testgo"+exeSuffix).CombinedOutput()
+ args := []string{"build", "-tags", "testgo", "-o", "testgo" + exeSuffix}
+ if race.Enabled {
+ args = append(args, "-race")
+ }
+ out, err := exec.Command("go", args...).CombinedOutput()
if err != nil {
fmt.Fprintf(os.Stderr, "building testgo failed: %v\n%s", err, out)
os.Exit(2)
os.Exit(r)
}
-// The length of an mtime tick on this system. This is an estimate of
+// The length of an mtime tick on this system. This is an estimate of
// how long we need to sleep to ensure that the mtime of two files is
// different.
// We used to try to be clever but that didn't always work (see golang.org/issue/12205).
return wd
}
-// cd changes the current directory to the named directory. Note that
+// cd changes the current directory to the named directory. Note that
// using this means that the test must not be run in parallel with any
// other tests.
func (tg *testgoData) cd(dir string) {
}
// doGrepMatch looks for a regular expression in a buffer, and returns
-// whether it is found. The regular expression is matched against
+// whether it is found. The regular expression is matched against
// each line separately, as with the grep command.
func (tg *testgoData) doGrepMatch(match string, b *bytes.Buffer) bool {
if !tg.ran {
}
// doGrep looks for a regular expression in a buffer and fails if it
-// is not found. The name argument is the name of the output we are
+// is not found. The name argument is the name of the output we are
// searching, "output" or "error". The msg argument is logged on
// failure.
func (tg *testgoData) doGrep(match string, b *bytes.Buffer, name, msg string) {
}
// doGrepNot looks for a regular expression in a buffer and fails if
-// it is found. The name and msg arguments are as for doGrep.
+// it is found. The name and msg arguments are as for doGrep.
func (tg *testgoData) doGrepNot(match string, b *bytes.Buffer, name, msg string) {
if tg.doGrepMatch(match, b) {
tg.t.Log(msg)
}
// creatingTemp records that the test plans to create a temporary file
-// or directory. If the file or directory exists already, it will be
-// removed. When the test completes, the file or directory will be
+// or directory. If the file or directory exists already, it will be
+// removed. When the test completes, the file or directory will be
// removed if it exists.
func (tg *testgoData) creatingTemp(path string) {
if filepath.IsAbs(path) && !strings.HasPrefix(path, tg.tempdir) {
tg.temps = append(tg.temps, path)
}
-// makeTempdir makes a temporary directory for a run of testgo. If

+// makeTempdir makes a temporary directory for a run of testgo. If
// the temporary directory was already created, this does nothing.
func (tg *testgoData) makeTempdir() {
if tg.tempdir == "" {
}
if vcs == "git" {
// git will ask for a username and password when we
- // run go get -d -f -u. An empty username and
- // password will work. Prevent asking by setting
+ // run go get -d -f -u. An empty username and
+ // password will work. Prevent asking by setting
// GIT_ASKPASS.
tg.creatingTemp("sink" + exeSuffix)
tg.tempFile("src/sink/sink.go", `package main; func main() {}`)
func main() {
println(extern)
}`)
- tg.run("run", "-ldflags", `-X main.extern "hello world"`, tg.path("main.go"))
- tg.grepStderr("^hello world", `ldflags -X main.extern 'hello world' failed`)
+ tg.run("run", "-ldflags", `-X "main.extern=hello world"`, tg.path("main.go"))
+ tg.grepStderr("^hello world", `ldflags -X "main.extern=hello world" failed`)
}
func TestGoTestCpuprofileLeavesBinaryBehind(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.tempDir("gopath/src/dir1/vendor/v")
tg.tempFile("gopath/src/dir1/p.go", "package main\nimport _ `v`\nfunc main(){}")
tg.tempFile("gopath/src/dir1/vendor/v/v.go", "package v")
// cmd/go: go test -a foo does not rebuild regexp.
func TestIssue6844(t *testing.T) {
if testing.Short() {
- t.Skip("don't rebuild the standard libary in short mode")
+ t.Skip("don't rebuild the standard library in short mode")
}
tg := testgo(t)
tg.run("get", "bazil.org/fuse/fs/fstestutil")
}
-// Test that you can not import a main package.
+// Test that you cannot import a main package.
func TestIssue4210(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.grepStderr("no install location for.*gopath2.src.test: hidden by .*gopath1.src.test", "missing error")
}
+func TestGoBuildGOPATHOrder(t *testing.T) {
+ // golang.org/issue/14176#issuecomment-179895769
+ // golang.org/issue/14192
+ // -I arguments to compiler could end up not in GOPATH order,
+ // leading to unexpected import resolution in the compiler.
+ // This is still not a complete fix (see golang.org/issue/14271 and next test)
+ // but it is clearly OK and enough to fix both reported
+ // instances of the underlying problem. It will have to do for now.
+
+ tg := testgo(t)
+ defer tg.cleanup()
+ tg.makeTempdir()
+ tg.setenv("GOPATH", tg.path("p1")+string(filepath.ListSeparator)+tg.path("p2"))
+
+ tg.tempFile("p1/src/foo/foo.go", "package foo\n")
+ tg.tempFile("p2/src/baz/baz.go", "package baz\n")
+ tg.tempFile("p2/pkg/"+runtime.GOOS+"_"+runtime.GOARCH+"/foo.a", "bad\n")
+ tg.tempFile("p1/src/bar/bar.go", `
+ package bar
+ import _ "baz"
+ import _ "foo"
+ `)
+
+ tg.run("install", "-x", "bar")
+}
+
+func TestGoBuildGOPATHOrderBroken(t *testing.T) {
+ // This test is known not to work.
+ // See golang.org/issue/14271.
+ t.Skip("golang.org/issue/14271")
+
+ tg := testgo(t)
+ defer tg.cleanup()
+ tg.makeTempdir()
+
+ tg.tempFile("p1/src/foo/foo.go", "package foo\n")
+ tg.tempFile("p2/src/baz/baz.go", "package baz\n")
+ tg.tempFile("p1/pkg/"+runtime.GOOS+"_"+runtime.GOARCH+"/baz.a", "bad\n")
+ tg.tempFile("p2/pkg/"+runtime.GOOS+"_"+runtime.GOARCH+"/foo.a", "bad\n")
+ tg.tempFile("p1/src/bar/bar.go", `
+ package bar
+ import _ "baz"
+ import _ "foo"
+ `)
+
+ colon := string(filepath.ListSeparator)
+ tg.setenv("GOPATH", tg.path("p1")+colon+tg.path("p2"))
+ tg.run("install", "-x", "bar")
+
+ tg.setenv("GOPATH", tg.path("p2")+colon+tg.path("p1"))
+ tg.run("install", "-x", "bar")
+}
+
func TestIssue11709(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.grepStdout("runtime/internal/sys", "did not find required dependency of "+pkg+" on runtime/internal/sys")
}
}
+
+// For issue 14337.
+func TestParallelTest(t *testing.T) {
+ tg := testgo(t)
+ defer tg.cleanup()
+ tg.makeTempdir()
+ const testSrc = `package package_test
+ import (
+ "testing"
+ )
+ func TestTest(t *testing.T) {
+ }`
+ tg.tempFile("src/p1/p1_test.go", strings.Replace(testSrc, "package_test", "p1_test", 1))
+ tg.tempFile("src/p2/p2_test.go", strings.Replace(testSrc, "package_test", "p2_test", 1))
+ tg.tempFile("src/p3/p3_test.go", strings.Replace(testSrc, "package_test", "p3_test", 1))
+ tg.tempFile("src/p4/p4_test.go", strings.Replace(testSrc, "package_test", "p4_test", 1))
+ tg.setenv("GOPATH", tg.path("."))
+ tg.run("test", "-p=4", "p1", "p2", "p3", "p4")
+}
+
+func TestCgoConsistentResults(t *testing.T) {
+ if !canCgo {
+ t.Skip("skipping because cgo not enabled")
+ }
+ if runtime.GOOS == "solaris" {
+ // See https://golang.org/issue/13247
+ t.Skip("skipping because Solaris builds are known to be inconsistent; see #13247")
+ }
+
+ tg := testgo(t)
+ defer tg.cleanup()
+ tg.parallel()
+ tg.makeTempdir()
+ tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
+ exe1 := tg.path("cgotest1" + exeSuffix)
+ exe2 := tg.path("cgotest2" + exeSuffix)
+ tg.run("build", "-o", exe1, "cgotest")
+ tg.run("build", "-x", "-o", exe2, "cgotest")
+ b1, err := ioutil.ReadFile(exe1)
+ tg.must(err)
+ b2, err := ioutil.ReadFile(exe2)
+ tg.must(err)
+
+ if !tg.doGrepMatch(`-fdebug-prefix-map=\$WORK`, &tg.stderr) {
+ t.Skip("skipping because C compiler does not support -fdebug-prefix-map")
+ }
+ if !bytes.Equal(b1, b2) {
+ t.Error("building cgotest twice did not produce the same output")
+ }
+}
+
+// Issue 14444: go get -u .../ reported duplicate load errors.
+func TestGoGetUpdateAllDoesNotTryToLoadDuplicates(t *testing.T) {
+ testenv.MustHaveExternalNetwork(t)
+
+ tg := testgo(t)
+ defer tg.cleanup()
+ tg.makeTempdir()
+ tg.setenv("GOPATH", tg.path("."))
+ tg.run("get", "-u", ".../")
+ tg.grepStderrNot("duplicate loads of", "did not remove old packages from cache")
+}
+
+func TestFatalInBenchmarkCauseNonZeroExitStatus(t *testing.T) {
+ tg := testgo(t)
+ defer tg.cleanup()
+ tg.runFail("test", "-bench", ".", "./testdata/src/benchfatal")
+ tg.grepBothNot("^ok", "test passed unexpectedly")
+ tg.grepBoth("FAIL.*benchfatal", "test did not run everything")
+}
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
being checked out for the first time by 'go get': those are always
placed in the main GOPATH, never in a vendor subtree.
-In Go 1.5, as an experiment, setting the environment variable
-GO15VENDOREXPERIMENT=1 enabled these features.
-As of Go 1.6 they are on by default. To turn them off, set
-GO15VENDOREXPERIMENT=0. In Go 1.7, the environment
-variable will stop having any effect.
-
See https://golang.org/s/go15vendor for details.
`,
}
installed in a location other than where it is built.
File names in stack traces are rewritten from GOROOT to
GOROOT_FINAL.
- GO15VENDOREXPERIMENT
- Set to 0 to disable vendoring semantics.
GO_EXTLINK_ENABLED
Whether the linker should use external linking mode
when using -linkmode=auto with code that uses cgo.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
CXXFiles []string // .cc, .cxx and .cpp source files
MFiles []string // .m source files
HFiles []string // .h, .hh, .hpp and .hxx source files
+ FFiles []string // .f, .F, .for and .f90 Fortran source files
SFiles []string // .s source files
SwigFiles []string // .swig files
SwigCXXFiles []string // .swigcxx files
CgoCFLAGS []string // cgo: flags for C compiler
CgoCPPFLAGS []string // cgo: flags for C preprocessor
CgoCXXFLAGS []string // cgo: flags for C++ compiler
+ CgoFFLAGS []string // cgo: flags for Fortran compiler
CgoLDFLAGS []string // cgo: flags for linker
CgoPkgConfig []string // cgo: pkg-config names
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
for _, a := range args {
// Arguments are supposed to be import paths, but
// as a courtesy to Windows developers, rewrite \ to /
- // in command-line arguments. Handles .\... and so on.
+ // in command-line arguments. Handles .\... and so on.
if filepath.Separator == '\\' {
a = strings.Replace(a, `\`, `/`, -1)
}
// mergeEnvLists merges the two environment lists such that
// variables with the same name in "in" replace those in "out".
+// This always returns a newly allocated slice.
func mergeEnvLists(in, out []string) []string {
+ out = append([]string(nil), out...)
NextVar:
for _, inkv := range in {
k := strings.SplitAfterN(inkv, "=", 2)[0]
}
// matchPattern(pattern)(name) reports whether
-// name matches pattern. Pattern is a limited glob
+// name matches pattern. Pattern is a limited glob
// pattern in which '...' means 'any string' and there
// is no other special syntax.
func matchPattern(pattern string) func(name string) bool {
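A minimal sketch of such a limited glob: quote the pattern as a regular expression, then turn each '...' into '.*' and anchor the match. (The go command's real matchPattern additionally special-cases a trailing "/..." so that "foo/..." matches "foo" itself; that refinement is omitted here.)

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// matchPattern compiles the limited glob described above:
// '...' matches any string; everything else is literal.
func matchPattern(pattern string) func(name string) bool {
	re := regexp.QuoteMeta(pattern)            // escape regexp metacharacters
	re = strings.Replace(re, `\.\.\.`, `.*`, -1) // '...' -> any string
	reg := regexp.MustCompile(`^` + re + `$`)  // anchor the whole name
	return reg.MatchString
}

func main() {
	m := matchPattern("net/...")
	fmt.Println(m("net/http"), m("fmt"))
}
```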
// allPackagesInFS is like allPackages but is passed a pattern
// beginning ./ or ../, meaning it should scan the tree rooted
-// at the given directory. There are ... in the pattern too.
+// at the given directory. There are ... in the pattern too.
func allPackagesInFS(pattern string) []string {
pkgs := matchPackagesInFS(pattern)
if len(pkgs) == 0 {
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
var elfGoNote = []byte("Go\x00\x00")
// The Go build ID is stored in a note described by an ELF PT_NOTE prog
-// header. The caller has already opened filename, to get f, and read
+// header. The caller has already opened filename, to get f, and read
// at least 4 kB out, in data.
func readELFGoBuildID(filename string, f *os.File, data []byte) (buildid string, err error) {
// Assume the note content is in the data, already read.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// A Package describes a single package found in a directory.
type Package struct {
// Note: These fields are part of the go command's public API.
- // See list.go. It is okay to add fields, but not to change or
- // remove existing ones. Keep in sync with list.go
+ // See list.go. It is okay to add fields, but not to change or
+ // remove existing ones. Keep in sync with list.go
Dir string `json:",omitempty"` // directory containing package sources
ImportPath string `json:",omitempty"` // import path of package in dir
ImportComment string `json:",omitempty"` // path in import comment on package statement
CXXFiles []string `json:",omitempty"` // .cc, .cpp and .cxx source files
MFiles []string `json:",omitempty"` // .m source files
HFiles []string `json:",omitempty"` // .h, .hh, .hpp and .hxx source files
+ FFiles []string `json:",omitempty"` // .f, .F, .for and .f90 Fortran source files
SFiles []string `json:",omitempty"` // .s source files
SwigFiles []string `json:",omitempty"` // .swig files
SwigCXXFiles []string `json:",omitempty"` // .swigcxx files
CgoCFLAGS []string `json:",omitempty"` // cgo: flags for C compiler
CgoCPPFLAGS []string `json:",omitempty"` // cgo: flags for C preprocessor
CgoCXXFLAGS []string `json:",omitempty"` // cgo: flags for C++ compiler
+ CgoFFLAGS []string `json:",omitempty"` // cgo: flags for Fortran compiler
CgoLDFLAGS []string `json:",omitempty"` // cgo: flags for linker
CgoPkgConfig []string `json:",omitempty"` // cgo: pkg-config names
p.CXXFiles = pp.CXXFiles
p.MFiles = pp.MFiles
p.HFiles = pp.HFiles
+ p.FFiles = pp.FFiles
p.SFiles = pp.SFiles
p.SwigFiles = pp.SwigFiles
p.SwigCXXFiles = pp.SwigCXXFiles
return fmt.Sprintf("%s\npackage %s\n", p.Err, strings.Join(p.ImportStack, "\n\timports "))
}
if p.Pos != "" {
- // Omit import stack. The full path to the file where the error
+ // Omit import stack. The full path to the file where the error
// is the most important thing.
return p.Pos + ": " + p.Err
}
return loadPackage(arg, stk)
}
-// The Go 1.5 vendoring experiment was enabled by setting GO15VENDOREXPERIMENT=1.
-// In Go 1.6 this is on by default and is disabled by setting GO15VENDOREXPERIMENT=0.
-// In Go 1.7 the variable will stop having any effect.
-// The variable is obnoxiously long so that years from now when people find it in
-// their profiles and wonder what it does, there is some chance that a web search
-// might answer the question.
-// There is a copy of this variable in src/go/build/build.go. Delete that one when this one goes away.
-var go15VendorExperiment = os.Getenv("GO15VENDOREXPERIMENT") != "0"
-
// dirToImportPath returns the pseudo-import path we use for a package
-// outside the Go path. It begins with _/ and then contains the full path
-// to the directory. If the package lives in c:\home\gopher\my\pkg then
+// outside the Go path. It begins with _/ and then contains the full path
+// to the directory. If the package lives in c:\home\gopher\my\pkg then
// the pseudo-import path is _/c_/home/gopher/my/pkg.
// Using a pseudo-import path like this makes the ./ imports no longer
// a special case, so that all the code to deal with ordinary imports works
// TODO: After Go 1, decide when to pass build.AllowBinary here.
// See issue 3268 for mistakes to avoid.
buildMode := build.ImportComment
- if !go15VendorExperiment || mode&useVendor == 0 || path != origPath {
+ if mode&useVendor == 0 || path != origPath {
// Not vendoring, or we already found the vendored path.
buildMode |= build.IgnoreVendor
}
bp.BinDir = gobin
}
if err == nil && !isLocal && bp.ImportComment != "" && bp.ImportComment != path &&
- (!go15VendorExperiment || (!strings.Contains(path, "/vendor/") && !strings.HasPrefix(path, "vendor/"))) {
+ !strings.Contains(path, "/vendor/") && !strings.HasPrefix(path, "vendor/") {
err = fmt.Errorf("code in directory %s expects import %q", bp.Dir, bp.ImportComment)
}
p.load(stk, bp, err)
// x/vendor/path, vendor/path, or else stay path if none of those exist.
// vendoredImportPath returns the expanded path or, if no expansion is found, the original.
func vendoredImportPath(parent *Package, path string) (found string) {
- if parent == nil || parent.Root == "" || !go15VendorExperiment {
+ if parent == nil || parent.Root == "" {
return path
}
}
// reusePackage reuses package p to satisfy the import at the top
-// of the import stack stk. If this use causes an import loop,
+// of the import stack stk. If this use causes an import loop,
// reusePackage updates p's error information to record the loop.
func reusePackage(p *Package, stk *importStack) *Package {
// We use p.imports==nil to detect a package that
// If the import is allowed, disallowVendor returns the original package p.
// If not, it returns a new package containing just an appropriate error.
func disallowVendor(srcDir, path string, p *Package, stk *importStack) *Package {
- if !go15VendorExperiment {
- return p
- }
-
// The stack includes p.ImportPath.
// If that's the only thing on the stack, we started
// with a name given on the command line, not an
// Prepare error with \n before each message.
// When printed in something like context: %v
// this will put the leading file positions each on
- // its own line. It will also show all the errors
+ // its own line. It will also show all the errors
// instead of just the first, as err.Error does.
var buf bytes.Buffer
for _, e := range err {
p.CXXFiles,
p.MFiles,
p.HFiles,
+ p.FFiles,
p.SFiles,
p.SysoFiles,
p.SwigFiles,
}
}
}
- if p.Standard && !p1.Standard && p.Error == nil {
+ if p.Standard && p.Error == nil && !p1.Standard && p1.Error == nil {
p.Error = &PackageError{
ImportStack: stk.copy(),
Err: fmt.Sprintf("non-standard import %q in standard package %q", path, p.ImportPath),
// an explicit data comparison. Specifically, we build a list of the
// inputs to the build, compute its SHA1 hash, and record that as the
// ``build ID'' in the generated object. At the next build, we can
-// recompute the buid ID and compare it to the one in the generated
+// recompute the build ID and compare it to the one in the generated
// object. If they differ, the list of inputs has changed, so the object
// is out of date and must be rebuilt.
//
}
// A package without Go sources means we only found
- // the installed .a file. Since we don't know how to rebuild
- // it, it can't be stale, even if -a is set. This enables binary-only
+ // the installed .a file. Since we don't know how to rebuild
+ // it, it can't be stale, even if -a is set. This enables binary-only
// distributions of Go packages, although such binaries are
// only useful with the specific version of the toolchain that
// created them.
// As a courtesy to developers installing new versions of the compiler
// frequently, define that packages are stale if they are
// older than the compiler, and commands if they are older than
- // the linker. This heuristic will not work if the binaries are
+ // the linker. This heuristic will not work if the binaries are
// back-dated, as some binary distributions may do, but it does handle
// a very common case.
// See issue 3036.
// to test for write access, and then skip GOPATH roots we don't have write
// access to. But hopefully we can just use the mtimes always.
- srcs := stringList(p.GoFiles, p.CFiles, p.CXXFiles, p.MFiles, p.HFiles, p.SFiles, p.CgoFiles, p.SysoFiles, p.SwigFiles, p.SwigCXXFiles)
+ srcs := stringList(p.GoFiles, p.CFiles, p.CXXFiles, p.MFiles, p.HFiles, p.FFiles, p.SFiles, p.CgoFiles, p.SysoFiles, p.SwigFiles, p.SwigCXXFiles)
for _, src := range srcs {
if olderThan(filepath.Join(p.Dir, src)) {
return true
var cmdCache = map[string]*Package{}
// loadPackage is like loadImport but is used for command-line arguments,
-// not for paths found in import statements. In addition to ordinary import paths,
+// not for paths found in import statements. In addition to ordinary import paths,
// loadPackage accepts pseudo-paths beginning with cmd/ to denote commands
// in the Go command directory, as well as paths to those directories.
func loadPackage(arg string, stk *importStack) *Package {
// command line arguments 'args'. If a named package
// cannot be loaded at all (for example, if the directory does not exist),
// then packages prints an error and does not include that
-// package in the results. However, if errors occur trying
+// package in the results. However, if errors occur trying
// to load dependencies of a named package, the named
// package is still returned, with p.Incomplete = true
// and details in p.DepsErrors.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// runProgram is the action for running a binary that has already
-// been compiled. We ignore exit status.
+// been compiled. We ignore exit status.
func (b *builder) runProgram(a *action) error {
cmdline := stringList(findExecCmd(), a.deps[0].target, a.args)
if buildN || buildX {
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// If a test timeout was given and is parseable, set our kill timeout
- // to that timeout plus one minute. This is a backup alarm in case
+ // to that timeout plus one minute. This is a backup alarm in case
// the test wedges with a goroutine spinning and its background
// timer does not get a chance to fire.
if dt, err := time.ParseDuration(testTimeout); err == nil && dt > 0 {
// the usual place in the temporary tree, because then
// other tests will see it as the real package.
// Instead we make a _test directory under the import path
- // and then repeat the import path there. We tell the
+ // and then repeat the import path there. We tell the
// compiler and linker to look in that _test directory first.
//
// That is, if the package under test is unicode/utf8,
return nil
}
-// isTestMain tells whether fn is a TestMain(m *testing.M) function.
-func isTestMain(fn *ast.FuncDecl) bool {
- if fn.Name.String() != "TestMain" ||
- fn.Type.Results != nil && len(fn.Type.Results.List) > 0 ||
- fn.Type.Params == nil ||
+// isTestFunc tells whether fn has the type of a testing function. arg
+// specifies the parameter type we look for: B, M or T.
+func isTestFunc(fn *ast.FuncDecl, arg string) bool {
+ if fn.Type.Results != nil && len(fn.Type.Results.List) > 0 ||
+ fn.Type.Params.List == nil ||
len(fn.Type.Params.List) != 1 ||
len(fn.Type.Params.List[0].Names) > 1 {
return false
// We can't easily check that the type is *testing.M
// because we don't know how testing has been imported,
// but at least check that it's *M or *something.M.
- if name, ok := ptr.X.(*ast.Ident); ok && name.Name == "M" {
+ // Same applies for B and T.
+ if name, ok := ptr.X.(*ast.Ident); ok && name.Name == arg {
return true
}
- if sel, ok := ptr.X.(*ast.SelectorExpr); ok && sel.Sel.Name == "M" {
+ if sel, ok := ptr.X.(*ast.SelectorExpr); ok && sel.Sel.Name == arg {
return true
}
return false
}
name := n.Name.String()
switch {
- case isTestMain(n):
+ case name == "TestMain" && isTestFunc(n, "M"):
if t.TestMain != nil {
return errors.New("multiple definitions of TestMain")
}
t.TestMain = &testFunc{pkg, name, ""}
*doImport, *seen = true, true
case isTest(name, "Test"):
+ err := checkTestFunc(n, "T")
+ if err != nil {
+ return err
+ }
t.Tests = append(t.Tests, testFunc{pkg, name, ""})
*doImport, *seen = true, true
case isTest(name, "Benchmark"):
+ err := checkTestFunc(n, "B")
+ if err != nil {
+ return err
+ }
t.Benchmarks = append(t.Benchmarks, testFunc{pkg, name, ""})
*doImport, *seen = true, true
}
return nil
}
+func checkTestFunc(fn *ast.FuncDecl, arg string) error {
+ if !isTestFunc(fn, arg) {
+ name := fn.Name.String()
+ pos := testFileSet.Position(fn.Pos())
+ return fmt.Errorf("%s: wrong signature for %s, must be: func %s(%s *testing.%s)", pos, name, name, strings.ToLower(arg), arg)
+ }
+ return nil
+}
+
type byOrder []*doc.Example
func (x byOrder) Len() int { return len(x) }
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
--- /dev/null
+package benchfatal
+
+import "testing"
+
+func BenchmarkThatCallsFatal(b *testing.B) {
+ b.Fatal("called by benchmark")
+}
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
downloadCmd: []string{"pull"},
// We allow both tag and branch names as 'tags'
- // for selecting a version. This lets people have
+ // for selecting a version. This lets people have
// a go.release.r60 branch and a go1 branch
// and make changes in both, without constantly
// editing .hgtags.
// The parent of dir must exist; dir must not.
func (v *vcsCmd) create(dir, repo string) error {
for _, cmd := range v.createCmd {
- if !go15VendorExperiment && strings.Contains(cmd, "submodule") {
+ if strings.Contains(cmd, "submodule") {
continue
}
if err := v.run(".", cmd, "dir", dir, "repo", repo); err != nil {
// download downloads any new changes for the repo in dir.
func (v *vcsCmd) download(dir string) error {
for _, cmd := range v.downloadCmd {
- if !go15VendorExperiment && strings.Contains(cmd, "submodule") {
+ if strings.Contains(cmd, "submodule") {
continue
}
if err := v.run(dir, cmd); err != nil {
if tag == "" && v.tagSyncDefault != nil {
for _, cmd := range v.tagSyncDefault {
- if !go15VendorExperiment && strings.Contains(cmd, "submodule") {
+ if strings.Contains(cmd, "submodule") {
continue
}
if err := v.run(dir, cmd); err != nil {
}
for _, cmd := range v.tagSyncCmd {
- if !go15VendorExperiment && strings.Contains(cmd, "submodule") {
+ if strings.Contains(cmd, "submodule") {
continue
}
if err := v.run(dir, cmd, "tag", tag); err != nil {
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("list", "-f", "{{.ImportPath}} {{.Imports}}", "vend/...")
want := `
vend [vend/vendor/p r]
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("build", "vend/x")
}
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.cd(filepath.Join(tg.pwd(), "testdata/src/vend/hello"))
tg.run("run", "hello.go")
tg.grepStdout("hello, world", "missing hello world output")
}
gopath := changeVolume(filepath.Join(tg.pwd(), "testdata"), strings.ToLower)
tg.setenv("GOPATH", gopath)
- tg.setenv("GO15VENDOREXPERIMENT", "1")
cd := changeVolume(filepath.Join(tg.pwd(), "testdata/src/vend/hello"), strings.ToUpper)
tg.cd(cd)
tg.run("run", "hello.go")
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.cd(filepath.Join(tg.pwd(), "testdata/src/vend/hello"))
tg.run("test", "-v")
tg.grepStdout("TestMsgInternal", "missing use in internal test")
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.runFail("build", "vend/x/invalid")
tg.grepStderr("must be imported as foo", "missing vendor import error")
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.runFail("build", "vend/x/vendor/p/p")
package p
const C = 1`)
tg.setenv("GOPATH", tg.path("."))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.cd(tg.path("src/v"))
tg.run("run", "m.go")
tg.run("test")
defer tg.cleanup()
tg.makeTempdir()
tg.setenv("GOPATH", tg.path("."))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("get", "github.com/rsc/go-get-issue-11864")
tg.run("get", "-u", "github.com/rsc/go-get-issue-11864")
}
defer tg.cleanup()
tg.makeTempdir()
tg.setenv("GOPATH", tg.path("."))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("get", "-d", "github.com/rsc/go-get-issue-12612")
tg.run("get", "-u", "-d", "github.com/rsc/go-get-issue-12612")
}
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata/testvendor"))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.runFail("build", "p")
tg.grepStderr("must be imported as x", "did not fail to build p")
}
defer tg.cleanup()
tg.makeTempdir()
tg.setenv("GOPATH", tg.path("."))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("get", "github.com/rsc/go-get-issue-11864")
// build -i should work
defer tg.cleanup()
tg.makeTempdir()
tg.setenv("GOPATH", tg.path("."))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("get", "github.com/rsc/go-get-issue-11864")
tg.run("list", "-f", `{{join .TestImports "\n"}}`, "github.com/rsc/go-get-issue-11864/t")
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata/testvendor2"))
- tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.cd(filepath.Join(tg.pwd(), "testdata/testvendor2/src/p"))
tg.runFail("build", "p.go")
tg.grepStderrNot("panic", "panicked")
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
if err == nil && isGoFile(f) {
err = processFile(path, nil, os.Stdout, false)
}
- if err != nil {
+ // Don't complain if a file was deleted in the meantime (i.e.
+ // the directory changed concurrently while running gofmt).
+ if err != nil && !os.IsNotExist(err) {
report(err)
}
return nil
) {
// Try as whole source file.
file, err = parser.ParseFile(fset, filename, src, parserMode)
- // If there's no error, return. If the error is that the source file didn't begin with a
+ // If there's no error, return. If the error is that the source file didn't begin with a
// package line and source fragments are ok, fall through to
- // try as a source fragment. Stop and return on any other error.
+ // try as a source fragment. Stop and return on any other error.
if err == nil || !fragmentOk || !strings.Contains(err.Error(), "expected 'package'") {
return
}
// If this is a statement list, make it a source file
// by inserting a package clause and turning the list
- // into a function body. This handles expressions too.
+ // into a function body. This handles expressions too.
// Insert using a ;, not a newline, so that the line numbers
// in fsrc match the ones in src. Add an extra '\n' before the '}'
// to make sure comments are flushed before the '}'.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// recording wildcard submatches in m.
// If m == nil, match checks whether pattern == val.
func match(m map[string]reflect.Value, pattern, val reflect.Value) bool {
- // Wildcard matches any expression. If it appears multiple
+ // Wildcard matches any expression. If it appears multiple
// times in the pattern, it must match the same expression
// each time.
if m != nil && pattern.IsValid() && pattern.Type() == identType {
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
return errCorruptArchive
}
switch name {
- case "__.SYMDEF", "__.GOSYMDEF", "__.PKGDEF":
+ case "__.PKGDEF":
r.skip(size)
default:
oldLimit := r.limit
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// This is supposed to be something that stops execution.
// It's not supposed to be reached, ever, but if it is, we'd
- // like to be able to tell how we got there. Assemble as
+ // like to be able to tell how we got there. Assemble as
// 0xf7fabcfd which is guaranteed to raise undefined instruction
// exception.
case 96: /* UNDEF */
}
}
- // Replace TLS register fetches on older ARM procesors.
+ // Replace TLS register fetches on older ARM processors.
switch p.As {
// Treat MRC 15, 0, <reg>, C13, C0, 3 specially.
case AMRC:
if p.To.Offset&0xffff0fff == 0xee1d0f70 {
- // Because the instruction might be rewriten to a BL which returns in R0
+ // Because the instruction might be rewritten to a BL which returns in R0
// the register must be zero.
if p.To.Offset&0xf000 != 0 {
ctxt.Diag("%v: TLS MRC instruction must write to R0 as it might get translated into a BL instruction", p.Line())
// cmd/7c/7.out.h from Vita Nuova.
// https://code.google.com/p/ken-cc/source/browse/src/cmd/7c/7.out.h
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Portions Copyright © 1995-1997 C H Forsyth (forsyth@terzarima.net)
// Portions Copyright © 1997-1999 Vita Nuova Limited
// Portions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com)
// Portions Copyright © 2004,2006 Bruce Ellis
// Portions Copyright © 2005-2007 C H Forsyth (forsyth@terzarima.net)
// Revisions Copyright © 2000-2007 Lucent Technologies Inc. and others
-// Portions Copyright © 2009 The Go Authors. All rights reserved.
+// Portions Copyright © 2009 The Go Authors. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
)
// Not registers, but flags that can be combined with regular register
-// constants to indicate extended register conversion. When checking,
+// constants to indicate extended register conversion. When checking,
// you should subtract obj.RBaseARM64 first. From this difference, bit 11
// indicates extended register, bits 8-10 select the conversion mode.
const REG_EXT = obj.RBaseARM64 + 1<<11
// cmd/7l/asm.c, cmd/7l/asmout.c, cmd/7l/optab.c, cmd/7l/span.c, cmd/ld/sub.c, cmd/ld/mod.c, from Vita Nuova.
// https://code.google.com/p/ken-cc/source/browse/
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Portions Copyright © 1995-1997 C H Forsyth (forsyth@terzarima.net)
// Portions Copyright © 1997-1999 Vita Nuova Limited
// Portions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com)
// Portions Copyright © 2004,2006 Bruce Ellis
// Portions Copyright © 2005-2007 C H Forsyth (forsyth@terzarima.net)
// Revisions Copyright © 2000-2007 Lucent Technologies Inc. and others
-// Portions Copyright © 2009 The Go Authors. All rights reserved.
+// Portions Copyright © 2009 The Go Authors. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// This is supposed to be something that stops execution.
// It's not supposed to be reached, ever, but if it is, we'd
- // like to be able to tell how we got there. Assemble as
+ // like to be able to tell how we got there. Assemble as
// 0xbea71700 which is guaranteed to raise undefined instruction
// exception.
case 90:
// cmd/7l/list.c and cmd/7l/sub.c from Vita Nuova.
// https://code.google.com/p/ken-cc/source/browse/
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Portions Copyright © 1995-1997 C H Forsyth (forsyth@terzarima.net)
// Portions Copyright © 1997-1999 Vita Nuova Limited
// Portions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com)
// Portions Copyright © 2004,2006 Bruce Ellis
// Portions Copyright © 2005-2007 C H Forsyth (forsyth@terzarima.net)
// Revisions Copyright © 2000-2007 Lucent Technologies Inc. and others
-// Portions Copyright © 2009 The Go Authors. All rights reserved.
+// Portions Copyright © 2009 The Go Authors. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// cmd/7l/noop.c, cmd/7l/obj.c, cmd/ld/pass.c from Vita Nuova.
// https://code.google.com/p/ken-cc/source/browse/
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Portions Copyright © 1995-1997 C H Forsyth (forsyth@terzarima.net)
// Portions Copyright © 1997-1999 Vita Nuova Limited
// Portions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com)
// Portions Copyright © 2004,2006 Bruce Ellis
// Portions Copyright © 2005-2007 C H Forsyth (forsyth@terzarima.net)
// Revisions Copyright © 2000-2007 Lucent Technologies Inc. and others
-// Portions Copyright © 2009 The Go Authors. All rights reserved.
+// Portions Copyright © 2009 The Go Authors. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Spadj int32
As int16
Reg int16
- RegTo2 int16 // 2nd register output operand
- Mark uint16
+ RegTo2 int16 // 2nd register output operand
+ Mark uint16 // bitmask of arch-specific items
Optab uint16
Scond uint8
Back uint8
Ft uint8
Tt uint8
- Isize uint8
+ Isize uint8 // size of the instruction in bytes (x86 only)
Mode int8
Info ProgInfo
R_PLT1
R_PLT2
R_USEFIELD
+ // R_USETYPE resolves to an *rtype, but no relocation is created. The
+ // linker uses this as a signal that the pointed-to type information
+ // should be linked into the final binary, even if there are no other
+ // direct references. (This is used for types reachable by reflection.)
+ R_USETYPE
R_POWER_TOC
R_GOTPCREL
// R_JMPMIPS (only used on mips64) resolves to non-PC-relative target address
// have the inherent issue that a 32-bit (or 64-bit!) displacement cannot be
// stuffed into a 32-bit instruction, so an address needs to be spread across
// several instructions, and in turn this requires a sequence of relocations, each
- // updating a part of an instruction. This leads to relocation codes that are
+ // updating a part of an instruction. This leads to relocation codes that are
// inherently processor specific.
// Arm64.
Debugpcln int32
Flag_shared int32
Flag_dynlink bool
+ Flag_optimize bool
Bso *Biobuf
Pathname string
Windows int32
Data *LSym
Etext *LSym
Edata *LSym
+
+ // Cache of Progs
+ allocIdx int
+ progs [10000]Prog
}
func (ctxt *Link) Diag(format string, args ...interface{}) {
}
/*
- * test to see if 2 instrictions can be
+ * test to see if two instructions can be
* interchanged without changing semantics
*/
func depend(ctxt *obj.Link, sa, sb *Sch) bool {
// together, so that given (only) calls Push(10, "x.go", 1) and Pop(15),
// virtual line 12 corresponds to x.go line 3.
type LineHist struct {
- Top *LineStack // current top of stack
- Ranges []LineRange // ranges for lookup
- Dir string // directory to qualify relative paths
- TrimPathPrefix string // remove leading TrimPath from recorded file names
- GOROOT string // current GOROOT
- GOROOT_FINAL string // target GOROOT
+ Top *LineStack // current top of stack
+ Ranges []LineRange // ranges for lookup
+ Dir string // directory to qualify relative paths
+ TrimPathPrefix string // remove leading TrimPath from recorded file names
+ PrintFilenameOnly bool // ignore path when pretty-printing a line; internal use only
+ GOROOT string // current GOROOT
+ GOROOT_FINAL string // target GOROOT
}
// A LineStack is an entry in the recorded line history.
return "<unknown line number>"
}
- text := fmt.Sprintf("%s:%d", stk.File, stk.fileLineAt(lineno))
+ filename := stk.File
+ if h.PrintFilenameOnly {
+ filename = filepath.Base(filename)
+ }
+ text := fmt.Sprintf("%s:%d", filename, stk.fileLineAt(lineno))
if stk.Directive && stk.Parent != nil {
stk = stk.Parent
- text += fmt.Sprintf("[%s:%d]", stk.File, stk.fileLineAt(lineno))
+ filename = stk.File
+ if h.PrintFilenameOnly {
+ filename = filepath.Base(filename)
+ }
+ text += fmt.Sprintf("[%s:%d]", filename, stk.fileLineAt(lineno))
}
const showFullStack = false // was used by old C compilers
if showFullStack {
for stk.Parent != nil {
lineno = stk.Lineno - 1
stk = stk.Parent
- text += fmt.Sprintf(" %s:%d", stk.File, stk.fileLineAt(lineno))
+ filename = stk.File
+ if h.PrintFilenameOnly {
+ filename = filepath.Base(filename)
+ }
+ text += fmt.Sprintf(" %s:%d", filename, stk.fileLineAt(lineno))
if stk.Directive && stk.Parent != nil {
stk = stk.Parent
- text += fmt.Sprintf("[%s:%d]", stk.File, stk.fileLineAt(lineno))
+ filename = stk.File
+ if h.PrintFilenameOnly {
+ filename = filepath.Base(filename)
+ }
+ text += fmt.Sprintf("[%s:%d]", filename, stk.fileLineAt(lineno))
}
}
}
func Linkprfile(ctxt *Link, line int) {
fmt.Printf("%s ", ctxt.LineHist.LineString(line))
}
+
+func fieldtrack(ctxt *Link, cursym *LSym) {
+ p := cursym.Text
+ if p == nil || p.Link == nil { // handle external functions and ELF section symbols
+ return
+ }
+ ctxt.Cursym = cursym
+
+ for ; p != nil; p = p.Link {
+ if p.As == AUSEFIELD {
+ r := Addrel(ctxt.Cursym)
+ r.Off = 0
+ r.Siz = 0
+ r.Sym = p.From.Sym
+ r.Type = R_USEFIELD
+ }
+ }
+}
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
)
// The Go and C compilers, and the assembler, call writeobj to write
-// out a Go object file. The linker does not call this; the linker
+// out a Go object file. The linker does not call this; the linker
// does not write out object files.
func Writeobjdirect(ctxt *Link, b *Biobuf) {
Flushplist(ctxt)
}
func Flushplist(ctxt *Link) {
+ flushplist(ctxt, ctxt.Debugasm == 0)
+}
+func FlushplistNoFree(ctxt *Link) {
+ flushplist(ctxt, false)
+}
+func flushplist(ctxt *Link, freeProgs bool) {
var flag int
var s *LSym
var p *Prog
for s := text; s != nil; s = s.Next {
mkfwd(s)
linkpatch(ctxt, s)
- ctxt.Arch.Follow(ctxt, s)
+ if ctxt.Flag_optimize {
+ ctxt.Arch.Follow(ctxt, s)
+ }
ctxt.Arch.Preprocess(ctxt, s)
ctxt.Arch.Assemble(ctxt, s)
+ fieldtrack(ctxt, s)
linkpcln(ctxt, s)
+ if freeProgs {
+ s.Text = nil
+ s.Etext = nil
+ }
}
// Add to running list in ctxt.
- if ctxt.Etext == nil {
- ctxt.Text = text
- } else {
- ctxt.Etext.Next = text
+ if text != nil {
+ if ctxt.Text == nil {
+ ctxt.Text = text
+ } else {
+ ctxt.Etext.Next = text
+ }
+ ctxt.Etext = etext
}
- ctxt.Etext = etext
ctxt.Plist = nil
+ ctxt.Plast = nil
+ ctxt.Curp = nil
+ if freeProgs {
+ ctxt.freeProgs()
+ }
}
func Writeobjfile(ctxt *Link, b *Biobuf) {
p.Pcond = q
}
- for p := sym.Text; p != nil; p = p.Link {
- p.Mark = 0 /* initialization for follow */
- if p.Pcond != nil {
- p.Pcond = brloop(ctxt, p.Pcond)
+ if ctxt.Flag_optimize {
+ for p := sym.Text; p != nil; p = p.Link {
if p.Pcond != nil {
- if p.To.Type == TYPE_BRANCH {
- p.To.Offset = p.Pcond.Pc
+ p.Pcond = brloop(ctxt, p.Pcond)
+ if p.Pcond != nil {
+ if p.To.Type == TYPE_BRANCH {
+ p.To.Offset = p.Pcond.Pc
+ }
}
}
}
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
if ctxt.Flag_dynlink {
- // Avoid calling morestack via a PLT when dynamically linking. The
+ // Avoid calling morestack via a PLT when dynamically linking. The
// PLT stubs generated by the system linker on ppc64le when "std r2,
// 24(r1)" to save the TOC pointer in their callers stack
// frame. Unfortunately (and necessarily) morestack is called before
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
ctxt.Goarm = Getgoarm()
}
+ ctxt.Flag_optimize = true
return ctxt
}
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// This file defines flags attached to various functions
-// and data objects. The compilers, assemblers, and linker must
+// and data objects. The compilers, assemblers, and linker must
// all agree on these values.
package obj
// Deprecated: Not implemented, do not use.
NOPROF = 1
- // It is ok for the linker to get multiple of these symbols. It will
+ // It is ok for the linker to get multiple of these symbols. It will
// pick one of the duplicates to use.
DUPOK = 2
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
func (p *Prog) String() string {
+ if p == nil {
+ return "<nil Prog>"
+ }
+
if p.Ctxt == nil {
return "<Prog without ctxt>"
}
}
func (ctxt *Link) NewProg() *Prog {
- p := new(Prog) // should be the only call to this; all others should use ctxt.NewProg
+ var p *Prog
+ if i := ctxt.allocIdx; i < len(ctxt.progs) {
+ p = &ctxt.progs[i]
+ ctxt.allocIdx = i + 1
+ } else {
+ p = new(Prog) // should be the only call to this; all others should use ctxt.NewProg
+ }
p.Ctxt = ctxt
return p
}
+func (ctxt *Link) freeProgs() {
+ s := ctxt.progs[:ctxt.allocIdx]
+ for i := range s {
+ s[i] = Prog{}
+ }
+ ctxt.allocIdx = 0
+}
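The `NewProg`/`freeProgs` pair above amounts to a bump allocator over a preallocated slice, with a heap fallback and a zeroing reset. A minimal standalone sketch of the same pattern (`node` and `pool` are illustrative names, not the real `obj` types):

```go
package main

import "fmt"

// node stands in for obj.Prog: a small struct allocated in bulk.
type node struct{ val int }

// pool is a bump allocator over a fixed backing slice, mirroring the
// Link.NewProg/freeProgs pattern in the change above.
type pool struct {
	backing  []node
	allocIdx int
}

// get hands out the next slot of the backing slice when one is
// available, falling back to a plain heap allocation otherwise.
func (p *pool) get() *node {
	if i := p.allocIdx; i < len(p.backing) {
		p.allocIdx = i + 1
		return &p.backing[i]
	}
	return new(node)
}

// free zeroes the used prefix (so stale pointers become collectable)
// and rewinds the allocator for reuse.
func (p *pool) free() {
	s := p.backing[:p.allocIdx]
	for i := range s {
		s[i] = node{}
	}
	p.allocIdx = 0
}

func main() {
	p := &pool{backing: make([]node, 2)}
	a, b, c := p.get(), p.get(), p.get() // c spills to the heap
	a.val, b.val, c.val = 1, 2, 3
	p.free()
	fmt.Println(p.allocIdx, p.backing[0].val) // 0 0
}
```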
func (ctxt *Link) Line(n int) string {
return ctxt.LineHist.LineString(n)
if a.Index != REG_NONE {
str += fmt.Sprintf("(%v*%d)", Rconv(int(a.Index)), int(a.Scale))
}
+ if p.As == ATYPE && a.Gotype != nil {
+ str += fmt.Sprintf("%s", a.Gotype.Name)
+ }
case TYPE_CONST:
if a.Reg != 0 {
)
// RegisterRegister binds a pretty-printer (Rconv) for register
-// numbers to a given register number range. Lo is inclusive,
+// numbers to a given register number range. Lo is inclusive,
// hi exclusive (valid registers are lo through hi-1).
func RegisterRegister(lo, hi int, Rconv func(int) string) {
regSpace = append(regSpace, regSet{lo, hi, Rconv})
//go:generate go run ../stringer.go -i $GOFILE -o anames.go -p x86
+const (
+ /* mark flags */
+ DONE = 1 << iota
+ PRESERVEFLAGS // not allowed to clobber flags
+)
+
/*
* amd64
*/
AINTO
AIRETL
AIRETW
- AJCC
- AJCS
+ AJCC // >= unsigned
+ AJCS // < unsigned
AJCXZL
- AJEQ
- AJGE
- AJGT
- AJHI
- AJLE
- AJLS
- AJLT
- AJMI
- AJNE
- AJOC
- AJOS
- AJPC
- AJPL
- AJPS
+ AJEQ // == (zero)
+ AJGE // >= signed
+ AJGT // > signed
+ AJHI // > unsigned
+ AJLE // <= signed
+ AJLS // <= unsigned
+ AJLT // < signed
+ AJMI // sign bit set (negative)
+ AJNE // != (nonzero)
+ AJOC // overflow clear
+ AJOS // overflow set
+ AJPC // parity clear
+ AJPL // sign bit clear (positive)
+ AJPS // parity set
ALAHF
ALARL
ALARW
AFMOVX
AFMOVXP
- AFCOMB
- AFCOMBP
AFCOMD
AFCOMDP
AFCOMDPP
APADDUSW
APADDW
APAND
- APANDB
- APANDL
APANDN
- APANDSB
- APANDSW
- APANDUSB
- APANDUSW
- APANDW
APAVGB
APAVGW
APCMPEQB
APEXTRD
APEXTRQ
APEXTRW
- APFACC
- APFADD
- APFCMPEQ
- APFCMPGE
- APFCMPGT
- APFMAX
- APFMIN
- APFMUL
- APFNACC
- APFPNACC
- APFRCP
- APFRCPI2T
- APFRCPIT1
- APFRSQIT1
- APFRSQRT
- APFSUB
- APFSUBR
APHADDD
APHADDSW
APHADDW
APMOVZXWD
APMOVZXWQ
APMULDQ
- APMULHRW
APMULHUW
APMULHW
APMULLD
APSUBUSB
APSUBUSW
APSUBW
- APSWAPL
APUNPCKHBW
APUNPCKHLQ
APUNPCKHQDQ
AUNPCKLPS
AXORPD
AXORPS
-
- APF2IW
- APF2IL
- API2FW
- API2FL
ARETFW
ARETFL
ARETFQ
"FMOVWP",
"FMOVX",
"FMOVXP",
- "FCOMB",
- "FCOMBP",
"FCOMD",
"FCOMDP",
"FCOMDPP",
"PADDUSW",
"PADDW",
"PAND",
- "PANDB",
- "PANDL",
"PANDN",
- "PANDSB",
- "PANDSW",
- "PANDUSB",
- "PANDUSW",
- "PANDW",
"PAVGB",
"PAVGW",
"PCMPEQB",
"PEXTRD",
"PEXTRQ",
"PEXTRW",
- "PFACC",
- "PFADD",
- "PFCMPEQ",
- "PFCMPGE",
- "PFCMPGT",
- "PFMAX",
- "PFMIN",
- "PFMUL",
- "PFNACC",
- "PFPNACC",
- "PFRCP",
- "PFRCPI2T",
- "PFRCPIT1",
- "PFRSQIT1",
- "PFRSQRT",
- "PFSUB",
- "PFSUBR",
"PHADDD",
"PHADDSW",
"PHADDW",
"PMOVZXWD",
"PMOVZXWQ",
"PMULDQ",
- "PMULHRW",
"PMULHUW",
"PMULHW",
"PMULLD",
"PSUBUSB",
"PSUBUSW",
"PSUBW",
- "PSWAPL",
"PUNPCKHBW",
"PUNPCKHLQ",
"PUNPCKHQDQ",
"UNPCKLPS",
"XORPD",
"XORPS",
- "PF2IW",
- "PF2IL",
- "PI2FW",
- "PI2FL",
"RETFW",
"RETFL",
"RETFQ",
Zm2_r
Zm_r_xm
Zm_r_i_xm
- Zm_r_3d
Zm_r_xm_nr
Zr_m_xm_nr
Zibm_r /* mmx1,mmx2/mem64,imm8 */
{Yxr, Ynone, Yrl, Zm_r, 1},
}
-var ymfp = []ytab{
- {Ymm, Ynone, Ymr, Zm_r_3d, 1},
-}
-
var ymrxr = []ytab{
{Ymr, Ynone, Yxr, Zm_r, 1},
{Yxm, Ynone, Yxr, Zm_r_xm, 1},
{ACVTPD2PS, yxm, Pe, [23]uint8{0x5a}},
{ACVTPS2PL, yxcvm1, Px, [23]uint8{Pe, 0x5b, Pm, 0x2d}},
{ACVTPS2PD, yxm, Pm, [23]uint8{0x5a}},
- {API2FW, ymfp, Px, [23]uint8{0x0c}},
{ACVTSD2SL, yxcvfl, Pf2, [23]uint8{0x2d}},
{ACVTSD2SQ, yxcvfq, Pw, [23]uint8{Pf2, 0x2d}},
{ACVTSD2SS, yxm, Pf2, [23]uint8{0x5a}},
{APEXTRB, yextr, Pq, [23]uint8{0x3a, 0x14, 00}},
{APEXTRD, yextr, Pq, [23]uint8{0x3a, 0x16, 00}},
{APEXTRQ, yextr, Pq3, [23]uint8{0x3a, 0x16, 00}},
- {APF2IL, ymfp, Px, [23]uint8{0x1d}},
- {APF2IW, ymfp, Px, [23]uint8{0x1c}},
- {API2FL, ymfp, Px, [23]uint8{0x0d}},
- {APFACC, ymfp, Px, [23]uint8{0xae}},
- {APFADD, ymfp, Px, [23]uint8{0x9e}},
- {APFCMPEQ, ymfp, Px, [23]uint8{0xb0}},
- {APFCMPGE, ymfp, Px, [23]uint8{0x90}},
- {APFCMPGT, ymfp, Px, [23]uint8{0xa0}},
- {APFMAX, ymfp, Px, [23]uint8{0xa4}},
- {APFMIN, ymfp, Px, [23]uint8{0x94}},
- {APFMUL, ymfp, Px, [23]uint8{0xb4}},
- {APFNACC, ymfp, Px, [23]uint8{0x8a}},
- {APFPNACC, ymfp, Px, [23]uint8{0x8e}},
- {APFRCP, ymfp, Px, [23]uint8{0x96}},
- {APFRCPIT1, ymfp, Px, [23]uint8{0xa6}},
- {APFRCPI2T, ymfp, Px, [23]uint8{0xb6}},
- {APFRSQIT1, ymfp, Px, [23]uint8{0xa7}},
- {APFRSQRT, ymfp, Px, [23]uint8{0x97}},
- {APFSUB, ymfp, Px, [23]uint8{0x9a}},
- {APFSUBR, ymfp, Px, [23]uint8{0xaa}},
{APHADDD, ymmxmm0f38, Px, [23]uint8{0x0F, 0x38, 0x02, 0, 0x66, 0x0F, 0x38, 0x02, 0}},
{APHADDSW, yxm_q4, Pq4, [23]uint8{0x03}},
{APHADDW, yxm_q4, Pq4, [23]uint8{0x01}},
{APMOVZXWD, yxm_q4, Pq4, [23]uint8{0x33}},
{APMOVZXWQ, yxm_q4, Pq4, [23]uint8{0x34}},
{APMULDQ, yxm_q4, Pq4, [23]uint8{0x28}},
- {APMULHRW, ymfp, Px, [23]uint8{0xb7}},
{APMULHUW, ymm, Py1, [23]uint8{0xe4, Pe, 0xe4}},
{APMULHW, ymm, Py1, [23]uint8{0xe5, Pe, 0xe5}},
{APMULLD, yxm_q4, Pq4, [23]uint8{0x40}},
{APSUBUSB, yxm, Pe, [23]uint8{0xd8}},
{APSUBUSW, yxm, Pe, [23]uint8{0xd9}},
{APSUBW, yxm, Pe, [23]uint8{0xf9}},
- {APSWAPL, ymfp, Px, [23]uint8{0xbb}},
{APUNPCKHBW, ymm, Py1, [23]uint8{0x68, Pe, 0x68}},
{APUNPCKHLQ, ymm, Py1, [23]uint8{0x6a, Pe, 0x6a}},
{APUNPCKHQDQ, yxm, Pe, [23]uint8{0x6d}},
{AFCMOVNE, yfcmv, Px, [23]uint8{0xdb, 01}},
{AFCMOVNU, yfcmv, Px, [23]uint8{0xdb, 03}},
{AFCMOVUN, yfcmv, Px, [23]uint8{0xda, 03}},
- {AFCOMB, nil, 0, [23]uint8{}},
- {AFCOMBP, nil, 0, [23]uint8{}},
{AFCOMD, yfadd, Px, [23]uint8{0xdc, 02, 0xd8, 02, 0xdc, 02}}, /* botch */
{AFCOMDP, yfadd, Px, [23]uint8{0xdc, 03, 0xd8, 03, 0xdc, 03}}, /* botch */
{AFCOMDPP, ycompp, Px, [23]uint8{0xde, 03}},
// process forward jumps to p
for q = p.Rel; q != nil; q = q.Forwd {
- v = int32(p.Pc - (q.Pc + int64(q.Mark)))
+ v = int32(p.Pc - (q.Pc + int64(q.Isize)))
if q.Back&2 != 0 { // short
if v > 127 {
loop++
s.P[q.Pc+1] = byte(v)
}
} else {
- bp = s.P[q.Pc+int64(q.Mark)-4:]
+ bp = s.P[q.Pc+int64(q.Isize)-4:]
bp[0] = byte(v)
bp = bp[1:]
bp[0] = byte(v >> 8)
obj.Symgrow(ctxt, s, p.Pc+int64(m))
copy(s.P[p.Pc:][:m], ctxt.And[:m])
- p.Mark = uint16(m)
c += int32(m)
}
return Yxxx
case obj.TYPE_MEM:
- if a.Name != obj.NAME_NONE {
- if ctxt.Asmode == 64 && (a.Reg != REG_NONE || a.Index != REG_NONE || a.Scale != 0) {
+ if a.Index == REG_SP {
+ // Can't use SP as the index register
+ return Yxxx
+ }
+ if ctxt.Asmode == 64 {
+ switch a.Name {
+ case obj.NAME_EXTERN, obj.NAME_STATIC, obj.NAME_GOTREF:
+ // Global variables can't use index registers and their
+ // base register is %rip (%rip is encoded as REG_NONE).
+ if a.Reg != REG_NONE || a.Index != REG_NONE || a.Scale != 0 {
+ return Yxxx
+ }
+ case obj.NAME_AUTO, obj.NAME_PARAM:
+ // These names must have a base of SP. The old compiler
+ // uses 0 for the base register. SSA uses REG_SP.
+ if a.Reg != REG_SP && a.Reg != 0 {
+ return Yxxx
+ }
+ case obj.NAME_NONE:
+ // everything is ok
+ default:
+ // unknown name
return Yxxx
}
}
v = int64(int32(v))
}
if v == 0 {
+ if p.Mark&PRESERVEFLAGS != 0 {
+ // If PRESERVEFLAGS is set, avoid MOV $0, AX turning into XOR AX, AX.
+ return Yu7
+ }
return Yi0
}
if v == 1 {
ctxt.Andptr[0] = byte(p.To.Offset)
ctxt.Andptr = ctxt.Andptr[1:]
- case Zm_r_3d:
- ctxt.Andptr[0] = 0x0f
- ctxt.Andptr = ctxt.Andptr[1:]
- ctxt.Andptr[0] = 0x0f
- ctxt.Andptr = ctxt.Andptr[1:]
- asmand(ctxt, p, &p.From, &p.To)
- ctxt.Andptr[0] = byte(op)
- ctxt.Andptr = ctxt.Andptr[1:]
-
case Zibm_r, Zibr_m:
for {
tmp1 := z
ctxt.Andptr = ctxt.And[:]
ctxt.Asmode = int(p.Mode)
- if p.As == obj.AUSEFIELD {
- r := obj.Addrel(ctxt.Cursym)
- r.Off = 0
- r.Siz = 0
- r.Sym = p.From.Sym
- r.Type = obj.R_USEFIELD
- return
- }
-
if ctxt.Headtype == obj.Hnacl && p.Mode == 32 {
switch p.As {
case obj.ARET:
"math"
)
-func canuse1insntls(ctxt *obj.Link) bool {
+func CanUse1InsnTLS(ctxt *obj.Link) bool {
if isAndroid {
// For android, we use a disgusting hack that assumes
// the thread-local storage slot for g is allocated
// rewriting the instructions more comprehensively, and it only does because
// we only support a single TLS variable (g).
- if canuse1insntls(ctxt) {
+ if CanUse1InsnTLS(ctxt) {
// Reduce 2-instruction sequence to 1-instruction sequence.
// Sequences like
// MOVQ TLS, BX
}
}
- // Rewrite 0 to $0 in 3rd argment to CMPPS etc.
+ // Rewrite 0 to $0 in 3rd argument to CMPPS etc.
// That's what the tables expect.
switch p.As {
case ACMPPD, ACMPPS, ACMPSD, ACMPSS:
// Convert AMOVSS $(0), Xx to AXORPS Xx, Xx
case AMOVSS:
if p.From.Type == obj.TYPE_FCONST {
- if p.From.Val.(float64) == 0 {
+ // f == 0 can't be used here due to -0, so use Float64bits
+ if f := p.From.Val.(float64); math.Float64bits(f) == 0 {
if p.To.Type == obj.TYPE_REG && REG_X0 <= p.To.Reg && p.To.Reg <= REG_X15 {
p.As = AXORPS
p.From = p.To
case AMOVSD:
// Convert AMOVSD $(0), Xx to AXORPS Xx, Xx
if p.From.Type == obj.TYPE_FCONST {
- if p.From.Val.(float64) == 0 {
+ // f == 0 can't be used here due to -0, so use Float64bits
+ if f := p.From.Val.(float64); math.Float64bits(f) == 0 {
if p.To.Type == obj.TYPE_REG && REG_X0 <= p.To.Reg && p.To.Reg <= REG_X15 {
p.As = AXORPS
p.From = p.To
}
if p.From.Type == obj.TYPE_ADDR && p.From.Name == obj.NAME_EXTERN && !p.From.Sym.Local {
// $MOV $sym, Rx becomes $MOV sym@GOT, Rx
- // $MOV $sym+<off>, Rx becomes $MOV sym@GOT, Rx; $ADD <off>, Rx
+ // $MOV $sym+<off>, Rx becomes $MOV sym@GOT, Rx; $LEA <off>(Rx), Rx
// On 386 only, more complicated things like PUSHL $sym become $MOV sym@GOT, CX; PUSHL CX
cmplxdest := false
pAs := p.As
q := p
if p.From.Offset != 0 {
q = obj.Appendp(ctxt, p)
- q.As = add
- q.From.Type = obj.TYPE_CONST
+ q.As = lea
+ q.From.Type = obj.TYPE_MEM
+ q.From.Reg = p.To.Reg
q.From.Offset = p.From.Offset
q.To = p.To
p.From.Offset = 0
var bpsize int
if p.Mode == 64 && obj.Framepointer_enabled != 0 && autoffset > 0 {
- // Make room for to save a base pointer. If autoffset == 0,
+ // Make room to save a base pointer. If autoffset == 0,
// this might do something special like a tail jump to
// another function, so in that case we omit this.
bpsize = ctxt.Arch.Ptrsize
q = p.Pcond
if q != nil && q.As != obj.ATEXT {
/* mark instruction as done and continue layout at target of jump */
- p.Mark = 1
+ p.Mark |= DONE
p = q
- if p.Mark == 0 {
+ if p.Mark&DONE == 0 {
goto loop
}
}
}
- if p.Mark != 0 {
+ if p.Mark&DONE != 0 {
/*
* p goes here, but already used it elsewhere.
* copy up to 4 instructions or else branch to other copy.
if nofollow(a) || pushpop(a) {
break // NOTE(rsc): arm does goto copy
}
- if q.Pcond == nil || q.Pcond.Mark != 0 {
+ if q.Pcond == nil || q.Pcond.Mark&DONE != 0 {
continue
}
if a == obj.ACALL || a == ALOOP {
q = obj.Copyp(ctxt, p)
p = p.Link
- q.Mark = 1
+ q.Mark |= DONE
(*last).Link = q
*last = q
- if int(q.As) != a || q.Pcond == nil || q.Pcond.Mark != 0 {
+ if int(q.As) != a || q.Pcond == nil || q.Pcond.Mark&DONE != 0 {
continue
}
q.Link = p
xfol(ctxt, q.Link, last)
p = q.Link
- if p.Mark != 0 {
+ if p.Mark&DONE != 0 {
return
}
goto loop
}
/* emit p */
- p.Mark = 1
+ p.Mark |= DONE
(*last).Link = p
*last = p
}
} else {
q = p.Link
- if q.Mark != 0 {
+ if q.Mark&DONE != 0 {
if a != ALOOP {
p.As = relinv(int16(a))
p.Link = p.Pcond
}
xfol(ctxt, p.Link, last)
- if p.Pcond.Mark != 0 {
+ if p.Pcond.Mark&DONE != 0 {
return
}
p = p.Pcond
MOVQ AX, AX -> MOVQ AX, AX
LEAQ name(SB), AX -> MOVQ name@GOT(SB), AX
-LEAQ name+10(SB), AX -> MOVQ name@GOT(SB), AX; ADDQ $10, AX
+LEAQ name+10(SB), AX -> MOVQ name@GOT(SB), AX; LEAQ 10(AX), AX
MOVQ $name(SB), AX -> MOVQ name@GOT(SB), AX
-MOVQ $name+10(SB), AX -> MOVQ name@GOT(SB), AX; ADDQ $10, AX
+MOVQ $name+10(SB), AX -> MOVQ name@GOT(SB), AX; LEAQ 10(AX), AX
MOVQ name(SB), AX -> NOP; MOVQ name@GOT(SB), R15; MOVQ (R15), AX
MOVQ name+10(SB), AX -> NOP; MOVQ name@GOT(SB), R15; MOVQ 10(R15), AX
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
"strings"
"text/tabwriter"
- "golang.org/x/arch/arm/armasm"
- "golang.org/x/arch/x86/x86asm"
+ "cmd/internal/unvendor/golang.org/x/arch/arm/armasm"
+ "cmd/internal/unvendor/golang.org/x/arch/x86/x86asm"
)
// Disasm is a disassembler for a given File.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
return
}
// The code is asking for the address of an external
- // function. We provide it with the address of the
+ // function. We provide it with the address of the
// correspondent GOT symbol.
addgotsym(targ)
// Instead, interpret the C declaration
// void *_Cvar_stderr = &stderr;
// as making _Cvar_stderr the name of a GOT entry
- // for stderr. This is separate from the usual GOT entry,
+ // for stderr. This is separate from the usual GOT entry,
// just in case the C code assigns to the variable,
// and of course it only works for single pointers,
// but we only need to support cgo and that's all it needs.
// To do lazy symbol lookup right, we're supposed
// to tell the dynamic loader which library each
// symbol comes from and format the link info
- // section just so. I'm too lazy (ha!) to do that
+ // section just so. I'm too lazy (ha!) to do that
// so for now we'll just use non-lazy pointers,
// which don't need to be told which library to use.
//
ld.Thearch.Lput = ld.Lputl
ld.Thearch.Wput = ld.Wputl
ld.Thearch.Vput = ld.Vputl
+ ld.Thearch.Append16 = ld.Append16l
+ ld.Thearch.Append32 = ld.Append32l
+ ld.Thearch.Append64 = ld.Append64l
ld.Thearch.Linuxdynld = "/lib64/ld-linux-x86-64.so.2"
ld.Thearch.Freebsddynld = "/libexec/ld-elf.so.1"
}
// Preserve highest 8 bits of a, and do addition to lower 24-bit
-// of a and b; used to adjust ARM branch intruction's target
+// of a and b; used to adjust ARM branch instruction's target
func braddoff(a int32, b int32) int32 {
return int32((uint32(a))&0xff000000 | 0x00ffffff&uint32(a+b))
}
ld.Thearch.Lput = ld.Lputl
ld.Thearch.Wput = ld.Wputl
ld.Thearch.Vput = ld.Vputl
+ ld.Thearch.Append16 = ld.Append16l
+ ld.Thearch.Append32 = ld.Append32l
+ ld.Thearch.Append64 = ld.Append64l
ld.Thearch.Linuxdynld = "/lib/ld-linux.so.3" // 2 for OABI, 3 for EABI
ld.Thearch.Freebsddynld = "/usr/libexec/ld-elf.so.1"
}
case obj.Hdarwin: /* apple MACH */
- ld.Debug['w'] = 1 // disable DWARF generataion
+ ld.Debug['w'] = 1 // disable DWARF generation
ld.Machoinit()
ld.HEADR = ld.INITIAL_MACHO_HEADR
if ld.INITTEXT == -1 {
rs := r.Xsym
// ld64 has a bug handling MACHO_ARM64_RELOC_UNSIGNED with !extern relocation.
- // see cmd/internal/ld/data.go for details. The workarond is that don't use !extern
+ // see cmd/internal/ld/data.go for details. The workaround is to not use !extern
// UNSIGNED relocation at all.
if rs.Type == obj.SHOSTOBJ || r.Type == obj.R_CALLARM64 || r.Type == obj.R_ADDRARM64 || r.Type == obj.R_ADDR {
if rs.Dynid < 0 {
ld.Thearch.Lput = ld.Lputl
ld.Thearch.Wput = ld.Wputl
ld.Thearch.Vput = ld.Vputl
+ ld.Thearch.Append16 = ld.Append16l
+ ld.Thearch.Append32 = ld.Append32l
+ ld.Thearch.Append64 = ld.Append64l
ld.Thearch.Linuxdynld = "/lib/ld-linux-aarch64.so.1"
}
// hostArchive reads an archive file holding host objects and links in
-// required objects. The general format is the same as a Go archive
+// required objects. The general format is the same as a Go archive
// file, but it has an armap listing symbols and the objects that
-// define them. This is used for the compiler support library
+// define them. This is used for the compiler support library
// libgcc.a.
func hostArchive(name string) {
f, err := obj.Bopenr(name)
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// For ppc64, we want to interleave the .got and .toc sections
- // from input files. Both are type SELFGOT, so in that case
+ // from input files. Both are type SELFGOT, so in that case
// fall through to the name comparison (conveniently, .got
// sorts before .toc).
if s1.Type != obj.SELFGOT && s1.Size != s2.Size {
}
eaddr := addr + size
- var ep []byte
var p []byte
for ; sym != nil; sym = sym.Next {
if sym.Type&obj.SSUB != 0 {
errorexit()
}
- for ; addr < sym.Value; addr++ {
- Cput(0)
+ if addr < sym.Value {
+ strnput("", int(sym.Value-addr))
+ addr = sym.Value
}
p = sym.P
- ep = p[len(sym.P):]
- for -cap(p) < -cap(ep) {
- Cput(uint8(p[0]))
- p = p[1:]
- }
+ Cwrite(p)
addr += int64(len(sym.P))
- for ; addr < sym.Value+sym.Size; addr++ {
- Cput(0)
+ if addr < sym.Value+sym.Size {
+ strnput("", int(sym.Value+sym.Size-addr))
+ addr = sym.Value + sym.Size
}
if addr != sym.Value+sym.Size {
Diag("phase error: addr=%#x value+size=%#x", int64(addr), int64(sym.Value)+sym.Size)
}
}
- for ; addr < eaddr; addr++ {
- Cput(0)
+ if addr < eaddr {
+ strnput("", int(eaddr-addr))
}
Cflush()
}
fmt.Fprintf(&Bso, "\t%.8x|\n", uint(eaddr))
}
-func strnput(s string, n int) {
- for ; n > 0 && s != ""; s = s[1:] {
- Cput(uint8(s[0]))
- n--
- }
+var zeros [512]byte
- for n > 0 {
- Cput(0)
- n--
+// strnput writes the first n bytes of s.
+// If n is larger than len(s),
+// it is padded with NUL bytes.
+func strnput(s string, n int) {
+ if len(s) >= n {
+ Cwritestring(s[:n])
+ } else {
+ Cwritestring(s)
+ n -= len(s)
+ for n > 0 {
+ if len(zeros) >= n {
+ Cwrite(zeros[:n])
+ return
+ } else {
+ Cwrite(zeros[:])
+ n -= len(zeros)
+ }
+ }
}
}
}
}
-func uleb128enc(v uint64, dst []byte) int {
- var c uint8
-
- length := uint8(0)
+func appendUleb128(b []byte, v uint64) []byte {
for {
- c = uint8(v & 0x7f)
+ c := uint8(v & 0x7f)
v >>= 7
if v != 0 {
c |= 0x80
}
- if dst != nil {
- dst[0] = byte(c)
- dst = dst[1:]
- }
- length++
+ b = append(b, c)
if c&0x80 == 0 {
break
}
}
-
- return int(length)
+ return b
}
-func sleb128enc(v int64, dst []byte) int {
- var c uint8
- var s uint8
-
- length := uint8(0)
+func appendSleb128(b []byte, v int64) []byte {
for {
- c = uint8(v & 0x7f)
- s = uint8(v & 0x40)
+ c := uint8(v & 0x7f)
+ s := uint8(v & 0x40)
v >>= 7
if (v != -1 || s == 0) && (v != 0 || s != 0) {
c |= 0x80
}
- if dst != nil {
- dst[0] = byte(c)
- dst = dst[1:]
- }
- length++
+ b = append(b, c)
if c&0x80 == 0 {
break
}
}
-
- return int(length)
+ return b
}
var encbuf [10]byte
func uleb128put(v int64) {
- n := uleb128enc(uint64(v), encbuf[:])
- Cwrite(encbuf[:n])
+ b := appendUleb128(encbuf[:0], uint64(v))
+ Cwrite(b)
}
func sleb128put(v int64) {
- n := sleb128enc(v, encbuf[:])
- Cwrite(encbuf[:n])
+ b := appendSleb128(encbuf[:0], v)
+ Cwrite(b)
}
/*
func newmemberoffsetattr(die *DWDie, offs int32) {
var block [20]byte
-
- i := 0
- block[i] = DW_OP_plus_uconst
- i++
- i += uleb128enc(uint64(offs), block[i:])
- newattr(die, DW_AT_data_member_location, DW_CLS_BLOCK, int64(i), block[:i])
+ b := append(block[:0], DW_OP_plus_uconst)
+ b = appendUleb128(b, uint64(offs))
+ newattr(die, DW_AT_data_member_location, DW_CLS_BLOCK, int64(len(b)), b)
}
// GDB doesn't like DW_FORM_addr for DW_AT_location, so emit a
}
if !strings.HasPrefix(gotype.Name, "type.") {
- Diag("dwarf: type name doesn't start with \".type\": %s", gotype.Name)
+ Diag("dwarf: type name doesn't start with \"type.\": %s", gotype.Name)
return mustFind(&dwtypes, "<unspecified>")
}
}
// Copies src's children into dst. Copies attributes by value.
-// DWAttr.data is copied as pointer only. If except is one of
+// DWAttr.data is copied as pointer only. If except is one of
// the top-level children, it will not be copied.
func copychildrenexcept(dst *DWDie, src *DWDie, except *DWDie) {
for src = src.child; src != nil; src = src.link {
func newcfaoffsetattr(die *DWDie, offs int32) {
var block [20]byte
+ b := append(block[:0], DW_OP_call_frame_cfa)
- i := 0
-
- block[i] = DW_OP_call_frame_cfa
- i++
if offs != 0 {
- block[i] = DW_OP_consts
- i++
- i += sleb128enc(int64(offs), block[i:])
- block[i] = DW_OP_plus
- i++
+ b = append(b, DW_OP_consts)
+ b = appendSleb128(b, int64(offs))
+ b = append(b, DW_OP_plus)
}
- newattr(die, DW_AT_location, DW_CLS_BLOCK, int64(i), block[:i])
+ newattr(die, DW_AT_location, DW_CLS_BLOCK, int64(len(b)), b)
}
func mkvarname(name string, da int) string {
DATAALIGNMENTFACTOR = -4
)
-func putpccfadelta(deltapc int64, cfa int64) {
- Cput(DW_CFA_def_cfa_offset_sf)
- sleb128put(cfa / DATAALIGNMENTFACTOR)
-
- if deltapc < 0x40 {
- Cput(uint8(DW_CFA_advance_loc + deltapc))
- } else if deltapc < 0x100 {
- Cput(DW_CFA_advance_loc1)
- Cput(uint8(deltapc))
- } else if deltapc < 0x10000 {
- Cput(DW_CFA_advance_loc2)
- Thearch.Wput(uint16(deltapc))
- } else {
- Cput(DW_CFA_advance_loc4)
- Thearch.Lput(uint32(deltapc))
+// appendPCDeltaCFA appends per-PC CFA deltas to b and returns the final slice.
+func appendPCDeltaCFA(b []byte, deltapc, cfa int64) []byte {
+ b = append(b, DW_CFA_def_cfa_offset_sf)
+ b = appendSleb128(b, cfa/DATAALIGNMENTFACTOR)
+
+ switch {
+ case deltapc < 0x40:
+ b = append(b, uint8(DW_CFA_advance_loc+deltapc))
+ case deltapc < 0x100:
+ b = append(b, DW_CFA_advance_loc1)
+ b = append(b, uint8(deltapc))
+ case deltapc < 0x10000:
+ b = append(b, DW_CFA_advance_loc2)
+ b = Thearch.Append16(b, uint16(deltapc))
+ default:
+ b = append(b, DW_CFA_advance_loc4)
+ b = Thearch.Append32(b, uint32(deltapc))
}
+ return b
}
func writeframes() {
strnput("", int(pad))
+ var deltaBuf []byte
var pcsp Pciter
for Ctxt.Cursym = Ctxt.Textp; Ctxt.Cursym != nil; Ctxt.Cursym = Ctxt.Cursym.Next {
s := Ctxt.Cursym
continue
}
- fdeo := Cpos()
-
- // Emit a FDE, Section 6.4.1, starting wit a placeholder.
- Thearch.Lput(0) // length, must be multiple of thearch.ptrsize
- Thearch.Lput(0) // Pointer to the CIE above, at offset 0
- addrput(0) // initial location
- addrput(0) // address range
-
+ // Emit a FDE, Section 6.4.1.
+ // First build the section contents into a byte buffer.
+ deltaBuf = deltaBuf[:0]
for pciterinit(Ctxt, &pcsp, &s.Pcln.Pcsp); pcsp.done == 0; pciternext(&pcsp) {
nextpc := pcsp.nextpc
}
if haslinkregister() {
- putpccfadelta(int64(nextpc)-int64(pcsp.pc), int64(pcsp.value))
+ deltaBuf = appendPCDeltaCFA(deltaBuf, int64(nextpc)-int64(pcsp.pc), int64(pcsp.value))
} else {
- putpccfadelta(int64(nextpc)-int64(pcsp.pc), int64(Thearch.Ptrsize)+int64(pcsp.value))
+ deltaBuf = appendPCDeltaCFA(deltaBuf, int64(nextpc)-int64(pcsp.pc), int64(Thearch.Ptrsize)+int64(pcsp.value))
}
}
+ pad := int(Rnd(int64(len(deltaBuf)), int64(Thearch.Ptrsize))) - len(deltaBuf)
+ deltaBuf = append(deltaBuf, zeros[:pad]...)
- fdesize := Cpos() - fdeo - 4 // exclude the length field.
- pad = Rnd(fdesize, int64(Thearch.Ptrsize)) - fdesize
- strnput("", int(pad))
- fdesize += pad
-
- // Emit the FDE header for real, Section 6.4.1.
- Cseek(fdeo)
-
- Thearch.Lput(uint32(fdesize))
+ // Emit the FDE header, Section 6.4.1.
+ // 4 bytes: length, must be multiple of thearch.ptrsize
+ // 4 bytes: Pointer to the CIE above, at offset 0
+ // ptrsize: initial location
+ // ptrsize: address range
+ Thearch.Lput(uint32(4 + 2*Thearch.Ptrsize + len(deltaBuf))) // length (excludes itself)
if Linkmode == LinkExternal {
- adddwarfrel(framesec, framesym, frameo, 4, 0)
- adddwarfrel(framesec, s, frameo, Thearch.Ptrsize, 0)
+ adddwarfrel(framesec, framesym, frameo, 4, 0) // CIE offset
+ adddwarfrel(framesec, s, frameo, Thearch.Ptrsize, 0) // initial location
} else {
- Thearch.Lput(0)
- addrput(s.Value)
+ Thearch.Lput(0) // CIE offset
+ addrput(s.Value) // initial location
}
+ addrput(s.Size) // address range
- addrput(s.Size)
- Cseek(fdeo + 4 + fdesize)
+ Cwrite(deltaBuf)
}
Cflush()
*/
func writearanges() int64 {
sectionstart := Cpos()
- headersize := int(Rnd(4+2+4+1+1, int64(Thearch.Ptrsize))) // don't count unit_length field itself
+ // The first tuple is aligned to a multiple of the size of a single tuple
+ // (twice the size of an address)
+ headersize := int(Rnd(4+2+4+1+1, int64(Thearch.Ptrsize*2))) // don't count unit_length field itself
for compunit := dwroot.child; compunit != nil; compunit = compunit.link {
b := getattr(compunit, DW_AT_low_pc)
sect = addmachodwarfsect(sect, ".debug_info")
infosym = Linklookup(Ctxt, ".debug_info", 0)
- infosym.Hide = 1
+ infosym.Hidden = true
abbrevsym = Linklookup(Ctxt, ".debug_abbrev", 0)
- abbrevsym.Hide = 1
+ abbrevsym.Hidden = true
linesym = Linklookup(Ctxt, ".debug_line", 0)
- linesym.Hide = 1
+ linesym.Hidden = true
framesym = Linklookup(Ctxt, ".debug_frame", 0)
- framesym.Hide = 1
+ framesym.Hidden = true
}
}
}
infosym = Linklookup(Ctxt, ".debug_info", 0)
- infosym.Hide = 1
+ infosym.Hidden = true
abbrevsym = Linklookup(Ctxt, ".debug_abbrev", 0)
- abbrevsym.Hide = 1
+ abbrevsym.Hidden = true
linesym = Linklookup(Ctxt, ".debug_line", 0)
- linesym.Hide = 1
+ linesym.Hidden = true
framesym = Linklookup(Ctxt, ".debug_frame", 0)
- framesym.Hide = 1
+ framesym.Hidden = true
}
}
-// Add section symbols for DWARF debug info. This is called before
+// Add section symbols for DWARF debug info. This is called before
// dwarfaddelfheaders.
func dwarfaddelfsectionsyms() {
if infosym != nil {
DW_CHILDREN_yes = 0x01
)
-// Not from the spec, but logicaly belongs here
+// Not from the spec, but logically belongs here
const (
DW_CLS_ADDRESS = 0x01 + iota
DW_CLS_BLOCK
if strings.HasPrefix(s.Name, "go.weak.") {
s.Special = 1 // do not lay out in data segment
s.Reachable = true
- s.Hide = 1
+ s.Hidden = true
}
}
for s := Ctxt.Allsym; s != nil; s = s.Allsym {
if strings.HasPrefix(s.Name, "go.track.") {
s.Special = 1 // do not lay out in data segment
- s.Hide = 1
+ s.Hidden = true
if s.Reachable {
buf.WriteString(s.Name[9:])
for p = s.Reachparent; p != nil; p = p.Reachparent {
}
if needSym != 0 {
- // local names and hidden visiblity global names are unique
- // and should only reference by its index, not name, so we
- // don't bother to add them into hash table
+ // local names and hidden global names are unique
+ // and should only be referenced by their index, not name, so we
+ // don't bother to add them into the hash table
s = linknewsym(Ctxt, sym.name, Ctxt.Version)
s.Type |= obj.SHIDDEN
}
// For i386 Mach-O PC-relative, the addend is written such that
- // it *is* the PC being subtracted. Use that to make
+ // it *is* the PC being subtracted. Use that to make
// it match our version of PC-relative.
if rel.pcrel != 0 && Thearch.Thechar == '8' {
rp.Add += int64(rp.Off) + int64(rp.Siz)
if sect.sh.Characteristics&(IMAGE_SCN_CNT_CODE|IMAGE_SCN_CNT_INITIALIZED_DATA|IMAGE_SCN_CNT_UNINITIALIZED_DATA) == 0 {
// This has been seen for .idata sections, which we
- // want to ignore. See issues 5106 and 5273.
+ // want to ignore. See issues 5106 and 5273.
continue
}
}
if sect.sh.Characteristics&(IMAGE_SCN_CNT_CODE|IMAGE_SCN_CNT_INITIALIZED_DATA|IMAGE_SCN_CNT_UNINITIALIZED_DATA) == 0 {
// This has been seen for .idata sections, which we
- // want to ignore. See issues 5106 and 5273.
+ // want to ignore. See issues 5106 and 5273.
continue
}
Gentext func()
Machoreloc1 func(*Reloc, int64) int
PEreloc1 func(*Reloc, int64) bool
- Lput func(uint32)
Wput func(uint16)
+ Lput func(uint32)
Vput func(uint64)
+ Append16 func(b []byte, v uint16) []byte
+ Append32 func(b []byte, v uint32) []byte
+ Append64 func(b []byte, v uint64) []byte
}
type Rpath struct {
Bso obj.Biobuf
)
-var coutbuf struct {
- *bufio.Writer
- f *os.File
+type outBuf struct {
+ w *bufio.Writer
+ f *os.File
+ off int64
}
-const (
- symname = "__.GOSYMDEF"
- pkgname = "__.PKGDEF"
-)
+func (w *outBuf) Write(p []byte) (n int, err error) {
+ n, err = w.w.Write(p)
+ w.off += int64(n)
+ return n, err
+}
+
+func (w *outBuf) WriteString(s string) (n int, err error) {
+ n, err = w.w.WriteString(s)
+ w.off += int64(n)
+ return n, err
+}
+
+var coutbuf outBuf
+
+const pkgname = "__.PKGDEF"
var (
// Set if we see an object compiled by the host compiler that is not
Exitf("cannot create %s: %v", outfile, err)
}
- coutbuf.Writer = bufio.NewWriter(f)
+ coutbuf.w = bufio.NewWriter(f)
coutbuf.f = f
if INITENTRY == "" {
if Linkmode == LinkExternal && !iscgo {
// This indicates a user requested -linkmode=external.
// The startup code uses an import of runtime/cgo to decide
- // whether to initialize the TLS. So give it one. This could
+ // whether to initialize the TLS. So give it one. This could
// be handled differently but it's an unusual case.
loadinternal("runtime/cgo")
fmt.Fprintf(&Bso, "%5.2f ldobj: %s (%s)\n", obj.Cputime(), lib.File, pkg)
}
Bso.Flush()
- var err error
- var f *obj.Biobuf
- f, err = obj.Bopenr(lib.File)
+ f, err := obj.Bopenr(lib.File)
if err != nil {
Exitf("cannot open file %s: %v", lib.File, err)
}
return
}
- /* skip over optional __.GOSYMDEF and process __.PKGDEF */
+ /* process __.PKGDEF */
off := obj.Boffset(f)
var arhdr ArHdr
goto out
}
- if strings.HasPrefix(arhdr.name, symname) {
- off += l
- l = nextar(f, off, &arhdr)
- if l <= 0 {
- Diag("%s: short read on archive file symbol header", lib.File)
- goto out
- }
- }
-
if !strings.HasPrefix(arhdr.name, pkgname) {
Diag("%s: cannot find package header", lib.File)
goto out
* the individual symbols that are unused.
*
* loading every object will also make it possible to
- * load foreign objects not referenced by __.GOSYMDEF.
+ * load foreign objects not referenced by __.PKGDEF.
*/
for {
l = nextar(f, off, &arhdr)
Exitf("cannot create %s: %v", p, err)
}
- coutbuf.Writer = bufio.NewWriter(f)
+ coutbuf.w = bufio.NewWriter(f)
coutbuf.f = f
}
// On Windows, given -o foo, GCC will append ".exe" to produce
// "foo.exe". We have decided that we want to honor the -o
- // option. To make this work, we append a '.' so that GCC
- // will decide that the file already has an extension. We
+ // option. To make this work, we append a '.' so that GCC
+ // will decide that the file already has an extension. We
// only want to do this when producing a Windows output file
// on a Windows host.
outopt := outfile
// clang, unlike GCC, passes -rdynamic to the linker
// even when linking with -static, causing a linker
- // error when using GNU ld. So take out -rdynamic if
- // we added it. We do it in this order, rather than
+ // error when using GNU ld. So take out -rdynamic if
+ // we added it. We do it in this order, rather than
// only adding -rdynamic later, so that -extldflags
// can override -rdynamic without using -static.
if Iself && p == "-static" {
return nil
}
-// ldobj loads an input object. If it is a host object (an object
-// compiled by a non-Go compiler) it returns the Hostobj pointer. If
+// ldobj loads an input object. If it is a host object (an object
+// compiled by a non-Go compiler) it returns the Hostobj pointer. If
// it is a Go object, it returns nil.
func ldobj(f *obj.Biobuf, pkg string, length int64, pn string, file string, whence int) *Hostobj {
eof := obj.Boffset(f) + length
return -1
}
- // Indirect call. Assume it is a call to a splitting function,
+ // Indirect call. Assume it is a call to a splitting function,
// so we have to make sure it can call morestack.
// Arrange the data structures to report both calls, so that
// if there is an error, stkprint shows all the steps involved.
}
func Cflush() {
- if err := coutbuf.Writer.Flush(); err != nil {
+ if err := coutbuf.w.Flush(); err != nil {
Exitf("flushing %s: %v", coutbuf.f.Name(), err)
}
}
func Cpos() int64 {
- off, err := coutbuf.f.Seek(0, 1)
- if err != nil {
- Exitf("seeking in output [0, 1]: %v", err)
- }
- return off + int64(coutbuf.Buffered())
+ return coutbuf.off
}
func Cseek(p int64) {
+ if p == coutbuf.off {
+ return
+ }
Cflush()
if _, err := coutbuf.f.Seek(p, 0); err != nil {
Exitf("seeking to %d in output: %v", p, err)
}
+ coutbuf.off = p
+}
+
+func Cwritestring(s string) {
+ coutbuf.WriteString(s)
}
func Cwrite(p []byte) {
}
func Cput(c uint8) {
- coutbuf.WriteByte(c)
+ coutbuf.w.WriteByte(c)
+ coutbuf.off++
}
func usage() {
}
for s := Ctxt.Allsym; s != nil; s = s.Allsym {
- if s.Hide != 0 || ((s.Name == "" || s.Name[0] == '.') && s.Version == 0 && s.Name != ".rathole" && s.Name != ".TOC.") {
+ if s.Hidden || ((s.Name == "" || s.Name[0] == '.') && s.Version == 0 && s.Name != ".rathole" && s.Name != ".TOC.") {
continue
}
switch s.Type & obj.SMASK {
Cgoexport uint8
Special uint8
Stkcheck uint8
- Hide uint8
+ Hidden bool
Leaf uint8
Localentry uint8
Onlist uint8
Nhistfile int32
Filesyms *LSym
Moduledata *LSym
+ LSymBatch []LSym
}
// The smallest possible offset from the hardware stack pointer to a local
var nsortsym int
// Amount of space left for adding load commands
-// that refer to dynamic libraries. Because these have
+// that refer to dynamic libraries. Because these have
// to go in the Mach-O header, we can't just pick a
-// "big enough" header size. The initial header is
+// "big enough" header size. The initial header is
// one page, the non-dynamic library stuff takes
// up about 1300 bytes; we overestimate that as 2k.
var load_budget int = INITIAL_MACHO_HEADR - 2*1024
func Machoadddynlib(lib string) {
// Will need to store the library name rounded up
- // and 24 bytes of header metadata. If not enough
+ // and 24 bytes of header metadata. If not enough
// space, grab another page of initial space at the
// beginning of the output file.
load_budget -= (len(lib)+7)/8*8 + 24
if Linkmode == LinkInternal {
// For lldb, must say LC_VERSION_MIN_MACOSX or else
// it won't know that this Mach-O binary is from OS X
- // (could be iOS or WatchOS intead).
+ // (could be iOS or WatchOS instead).
// Go on iOS uses linkmode=external, and linkmode=external
// adds this itself. So we only need this code for linkmode=internal
// and we can assume OS X.
s4 := Linklookup(Ctxt, ".machosymstr", 0)
// Force the linkedit section to end on a 16-byte
- // boundary. This allows pure (non-cgo) Go binaries
+ // boundary. This allows pure (non-cgo) Go binaries
// to be code signed correctly.
//
// Apple's codesign_allocate (a helper utility for
// the codesign utility) can do this fine itself if
- // it is run on a dynamic Mach-O binary. However,
+ // it is run on a dynamic Mach-O binary. However,
// when it is run on a pure (non-cgo) Go binary, where
// the linkedit section is mostly empty, it fails to
// account for the extra padding that it itself adds
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
log.Fatalf("readsym out of sync")
}
t := rdint(f)
- name := expandpkg(rdstring(f), pkg)
+ name := rdsymName(f, pkg)
v := rdint(f)
if v != 0 && v != 1 {
log.Fatalf("invalid symbol version %d", v)
return uint8(n)
}
+// rdBuf is used by rdstring and rdsymName as scratch for reading strings.
+var rdBuf []byte
+var emptyPkg = []byte(`"".`)
+
func rdstring(f *obj.Biobuf) string {
- n := rdint64(f)
- p := make([]byte, n)
- obj.Bread(f, p)
- return string(p)
+ n := rdint(f)
+ if len(rdBuf) < n {
+ rdBuf = make([]byte, n)
+ }
+ obj.Bread(f, rdBuf[:n])
+ return string(rdBuf[:n])
}
+const rddataBufMax = 1 << 14
+
+var rddataBuf = make([]byte, rddataBufMax)
+
func rddata(f *obj.Biobuf) []byte {
- n := rdint64(f)
- p := make([]byte, n)
+ var p []byte
+ n := rdint(f)
+ if n > rddataBufMax {
+ p = make([]byte, n)
+ } else {
+ if len(rddataBuf) < n {
+ rddataBuf = make([]byte, rddataBufMax)
+ }
+ p = rddataBuf[:n:n]
+ rddataBuf = rddataBuf[n:]
+ }
obj.Bread(f, p)
return p
}
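The rddata change above replaces a per-call allocation with a slab: many small slices are carved out of one large buffer, which amortizes allocator pressure across thousands of reads. A standalone sketch of the pattern, with illustrative names (the sizes and names are assumptions, not the linker's):

```go
package main

import "fmt"

const slabMax = 1 << 14

var slab = make([]byte, slabMax)

// alloc returns an n-byte slice, carved from the shared slab when small
// enough. The full slice expression slab[:n:n] caps the result so a
// later append on it cannot clobber neighboring carve-outs.
func alloc(n int) []byte {
	if n > slabMax {
		return make([]byte, n) // too big: allocate directly
	}
	if len(slab) < n {
		slab = make([]byte, slabMax) // refill the slab
	}
	p := slab[:n:n]
	slab = slab[n:]
	return p
}

func main() {
	a := alloc(4)
	b := alloc(4)
	copy(a, "aaaa")
	copy(b, "bbbb")
	fmt.Println(string(a), string(b)) // aaaa bbbb: no overlap
}
```

The linknewsym batching later in the diff applies the same idea to LSym structs instead of bytes.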
-var symbuf []byte
-
-func rdsym(ctxt *Link, f *obj.Biobuf, pkg string) *LSym {
+// rdsymName reads a symbol name, replacing all "". with pkg.
+func rdsymName(f *obj.Biobuf, pkg string) string {
n := rdint(f)
if n == 0 {
rdint64(f)
- return nil
+ return ""
+ }
+
+ if len(rdBuf) < n {
+ rdBuf = make([]byte, n, 2*n)
+ }
+ origName := rdBuf[:n]
+ obj.Bread(f, origName)
+ adjName := rdBuf[n:n]
+ for {
+ i := bytes.Index(origName, emptyPkg)
+ if i == -1 {
+ adjName = append(adjName, origName...)
+ break
+ }
+ adjName = append(adjName, origName[:i]...)
+ adjName = append(adjName, pkg...)
+ adjName = append(adjName, '.')
+ origName = origName[i+len(emptyPkg):]
+ }
+ name := string(adjName)
+ if len(adjName) > len(rdBuf) {
+ rdBuf = adjName // save the larger buffer for reuse
}
+ return name
+}
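rdsymName above fuses reading and package expansion into one pass over a reused buffer. The expansion itself — replacing each anonymous-package prefix `"".` with the real package path and a dot — can be sketched in isolation (simplified, without the buffer reuse):

```go
package main

import (
	"bytes"
	"fmt"
)

// expandPkg replaces every occurrence of the anonymous package prefix
// `"".` in a symbol name with pkg followed by a dot. This mirrors the
// loop in rdsymName; the standalone form here is for illustration.
func expandPkg(name, pkg string) string {
	emptyPkg := []byte(`"".`)
	orig := []byte(name)
	var adj []byte
	for {
		i := bytes.Index(orig, emptyPkg)
		if i == -1 {
			adj = append(adj, orig...)
			break
		}
		adj = append(adj, orig[:i]...)
		adj = append(adj, pkg...)
		adj = append(adj, '.')
		orig = orig[i+len(emptyPkg):]
	}
	return string(adj)
}

func main() {
	fmt.Println(expandPkg(`"".Foo`, "main"))     // main.Foo
	fmt.Println(expandPkg(`type."".T`, "mypkg")) // type.mypkg.T
}
```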
- if len(symbuf) < n {
- symbuf = make([]byte, n)
+func rdsym(ctxt *Link, f *obj.Biobuf, pkg string) *LSym {
+ name := rdsymName(f, pkg)
+ if name == "" {
+ return nil
}
- obj.Bread(f, symbuf[:n])
- p := string(symbuf[:n])
v := rdint(f)
if v != 0 {
v = ctxt.Version
}
- s := Linklookup(ctxt, expandpkg(p, pkg), v)
+ s := Linklookup(ctxt, name, v)
if v == 0 && s.Name[0] == '$' && s.Type == 0 {
if strings.HasPrefix(s.Name, "$f32.") {
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
pciternext(it)
}
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
)
// findfunctab generates a lookup table to quickly find the containing
-// function for a pc. See src/runtime/symtab.go:findfunc for details.
+// function for a pc. See src/runtime/symtab.go:findfunc for details.
func findfunctab() {
t := Linklookup(Ctxt, "runtime.findfunctab", 0)
t.Type = obj.SRODATA
obj.Flagstr("memprofile", "write memory profile to `file`", &memprofile)
obj.Flagint64("memprofilerate", "set runtime.MemProfileRate to `rate`", &memprofilerate)
- // Clumsy hack to preserve old two-argument -X name val syntax for old scripts.
- // Rewrite that syntax into new syntax -X name=val.
- // TODO(rsc): Delete this hack in Go 1.6 or later.
- var args []string
- for i := 0; i < len(os.Args); i++ {
- arg := os.Args[i]
- if (arg == "-X" || arg == "--X") && i+2 < len(os.Args) && !strings.Contains(os.Args[i+1], "=") {
- fmt.Fprintf(os.Stderr, "link: warning: option %s %s %s may not work in future releases; use %s %s=%s\n",
- arg, os.Args[i+1], os.Args[i+2],
- arg, os.Args[i+1], os.Args[i+2])
- args = append(args, arg)
- args = append(args, os.Args[i+1]+"="+os.Args[i+2])
- i += 2
- continue
- }
- if (strings.HasPrefix(arg, "-X=") || strings.HasPrefix(arg, "--X=")) && i+1 < len(os.Args) && strings.Count(arg, "=") == 1 {
- fmt.Fprintf(os.Stderr, "link: warning: option %s %s may not work in future releases; use %s=%s\n",
- arg, os.Args[i+1],
- arg, os.Args[i+1])
- args = append(args, arg+"="+os.Args[i+1])
- i++
- continue
- }
- args = append(args, arg)
- }
- os.Args = args
-
obj.Flagparse(usage)
startProfile()
import (
"cmd/internal/obj"
"log"
- "os"
- "path/filepath"
"strconv"
)
func linknew(arch *LinkArch) *Link {
ctxt := new(Link)
- ctxt.Hash = make(map[symVer]*LSym)
+ // Preallocate about 2mb for hash
+ ctxt.Hash = make(map[symVer]*LSym, 100000)
ctxt.Arch = arch
ctxt.Version = obj.HistVersion
ctxt.Goroot = obj.Getgoroot()
log.Fatalf("invalid goarch %s (want %s)", p, arch.Name)
}
- var buf string
- buf, _ = os.Getwd()
- if buf == "" {
- buf = "/???"
- }
- buf = filepath.ToSlash(buf)
-
ctxt.Headtype = headtype(obj.Getgoos())
if ctxt.Headtype < 0 {
log.Fatalf("unknown goos %s", obj.Getgoos())
}
func linknewsym(ctxt *Link, symb string, v int) *LSym {
- s := new(LSym)
- *s = LSym{}
+ batch := ctxt.LSymBatch
+ if len(batch) == 0 {
+ batch = make([]LSym, 1000)
+ }
+ s := &batch[0]
+ ctxt.LSymBatch = batch[1:]
s.Dynid = -1
s.Plt = -1
// When dynamically linking, we create LSym's by reading the names from
// the symbol tables of the shared libraries and so the names need to
- // match exactly. Tools like DTrace will have to wait for now.
+ // match exactly. Tools like DTrace will have to wait for now.
if !DynlinkingGo() {
// Rewrite · to . for ASCII-only tools like DTrace (sigh)
s = strings.Replace(s, "·", ".", -1)
dwarfaddelfsectionsyms()
+ // Some linkers will add a FILE sym if one is not present.
+ // Avoid having the working directory inserted into the symbol table.
+ putelfsyment(0, 0, 0, STB_LOCAL<<4|STT_FILE, SHN_ABS, 0)
+ numelfsym++
+
elfbind = STB_LOCAL
genasmsym(putelfsym)
Cput(uint8(w))
}
+func Append16b(b []byte, v uint16) []byte {
+ return append(b, uint8(v>>8), uint8(v))
+}
+func Append16l(b []byte, v uint16) []byte {
+ return append(b, uint8(v), uint8(v>>8))
+}
+
func Lputb(l uint32) {
Cput(uint8(l >> 24))
Cput(uint8(l >> 16))
Cput(uint8(l >> 24))
}
+func Append32b(b []byte, v uint32) []byte {
+ return append(b, uint8(v>>24), uint8(v>>16), uint8(v>>8), uint8(v))
+}
+func Append32l(b []byte, v uint32) []byte {
+ return append(b, uint8(v), uint8(v>>8), uint8(v>>16), uint8(v>>24))
+}
+
func Vputb(v uint64) {
Lputb(uint32(v >> 32))
Lputb(uint32(v))
Lputl(uint32(v >> 32))
}
+func Append64b(b []byte, v uint64) []byte {
+ b = Append32b(b, uint32(v>>32))
+ b = Append32b(b, uint32(v))
+ return b
+}
+
+func Append64l(b []byte, v uint64) []byte {
+ b = Append32l(b, uint32(v))
+ b = Append32l(b, uint32(v>>32))
+ return b
+}
+
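The Append16/32/64 helpers added above let callers build byte-order-sensitive records in memory rather than writing byte-by-byte through the output buffer (Wput/Lput/Vput). A self-contained sketch of the 32-bit pair, matching the shift pattern in the diff:

```go
package main

import "fmt"

// append32b appends v in big-endian byte order: most significant byte first.
func append32b(b []byte, v uint32) []byte {
	return append(b, uint8(v>>24), uint8(v>>16), uint8(v>>8), uint8(v))
}

// append32l appends v in little-endian byte order: least significant byte first.
func append32l(b []byte, v uint32) []byte {
	return append(b, uint8(v), uint8(v>>8), uint8(v>>16), uint8(v>>24))
}

func main() {
	fmt.Printf("% x\n", append32b(nil, 0x11223344)) // 11 22 33 44
	fmt.Printf("% x\n", append32l(nil, 0x11223344)) // 44 33 22 11
}
```

Each target architecture then wires the matching variant into Thearch once (as the amd64 and ppc64 hunks below do), so shared code can stay byte-order agnostic.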
type byPkg []*Library
func (libs byPkg) Len() int {
}
if strings.HasPrefix(s.Name, "type.") && !DynlinkingGo() {
- s.Hide = 1
+ s.Hidden = true
if UseRelro() && len(s.R) > 0 {
s.Type = obj.STYPERELRO
s.Outer = symtyperel
if strings.HasPrefix(s.Name, "go.typelink.") {
ntypelinks++
s.Type = obj.STYPELINK
- s.Hide = 1
+ s.Hidden = true
s.Outer = symtypelink
}
if strings.HasPrefix(s.Name, "go.string.") {
s.Type = obj.SGOSTRING
- s.Hide = 1
+ s.Hidden = true
s.Outer = symgostring
}
if strings.HasPrefix(s.Name, "runtime.gcbits.") {
s.Type = obj.SGCBITS
- s.Hide = 1
+ s.Hidden = true
s.Outer = symgcbits
}
if strings.HasPrefix(s.Name, "go.func.") {
s.Type = obj.SGOFUNC
- s.Hide = 1
+ s.Hidden = true
s.Outer = symgofunc
}
if strings.HasPrefix(s.Name, "gcargs.") || strings.HasPrefix(s.Name, "gclocals.") || strings.HasPrefix(s.Name, "gclocals·") {
s.Type = obj.SGOFUNC
- s.Hide = 1
+ s.Hidden = true
s.Outer = symgofunc
s.Align = 4
liveness += (s.Size + int64(s.Align) - 1) &^ (int64(s.Align) - 1)
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
ld.Thearch.Lput = ld.Lputl
ld.Thearch.Wput = ld.Wputl
ld.Thearch.Vput = ld.Vputl
+ ld.Thearch.Append16 = ld.Append16l
+ ld.Thearch.Append32 = ld.Append32l
+ ld.Thearch.Append64 = ld.Append64l
} else {
ld.Thearch.Lput = ld.Lputb
ld.Thearch.Wput = ld.Wputb
ld.Thearch.Vput = ld.Vputb
+ ld.Thearch.Append16 = ld.Append16b
+ ld.Thearch.Append32 = ld.Append32b
+ ld.Thearch.Append64 = ld.Append64b
}
ld.Thearch.Linuxdynld = "/lib64/ld64.so.1"
var i int
// The ppc64 ABI PLT has similar concepts to other
- // architectures, but is laid out quite differently. When we
+ // architectures, but is laid out quite differently. When we
// see an R_PPC64_REL24 relocation to a dynamic symbol
// (indicating that the call needs to go through the PLT), we
// generate up to three stubs and reserve a PLT slot.
// 5) We generate the glink resolver stub (only once). This
// computes which symbol resolver stub we came through and
// invokes the dynamic resolver via a pointer provided by
- // the dynamic linker. This will patch up the .plt slot to
+ // the dynamic linker. This will patch up the .plt slot to
// point directly at the function so future calls go
// straight from the call stub to the real function, and
// then call the function.
// platforms.
// Find all R_PPC64_REL24 relocations that reference dynamic
- // imports. Reserve PLT entries for these symbols and
- // generate call stubs. The call stubs need to live in .text,
+ // imports. Reserve PLT entries for these symbols and
+ // generate call stubs. The call stubs need to live in .text,
// which is why we need to do this pass this early.
//
// This assumes "case 1" from the ABI, where the caller needs
// Update the relocation to use the call stub
r.Sym = stub
- // Restore TOC after bl. The compiler put a
+ // Restore TOC after bl. The compiler put a
// nop here for us to overwrite.
o1 = 0xe8410018 // ld r2,24(r1)
ld.Ctxt.Arch.ByteOrder.PutUint32(s.P[r.Off+4:], o1)
// This is a local call, so the caller isn't setting
// up r12 and r2 is the same for the caller and
- // callee. Hence, we need to go to the local entry
+ // callee. Hence, we need to go to the local entry
// point. (If we don't do this, the callee will try
// to use r12 to compute r2.)
r.Add += int64(r.Sym.Localentry) * 4
// The dynamic linker stores the address of the
// dynamic resolver and the DSO identifier in the two
// doublewords at the beginning of the .plt section
- // before the PLT array. Reserve space for these.
+ // before the PLT array. Reserve space for these.
plt.Size = 16
}
}
ld.Thearch.Lput = ld.Lputl
ld.Thearch.Wput = ld.Wputl
ld.Thearch.Vput = ld.Vputl
+ ld.Thearch.Append16 = ld.Append16l
+ ld.Thearch.Append32 = ld.Append32l
+ ld.Thearch.Append64 = ld.Append64l
} else {
ld.Thearch.Lput = ld.Lputb
ld.Thearch.Wput = ld.Wputb
ld.Thearch.Vput = ld.Vputb
+ ld.Thearch.Append16 = ld.Append16b
+ ld.Thearch.Append32 = ld.Append32b
+ ld.Thearch.Append64 = ld.Append64b
}
// TODO(austin): ABI v1 uses /usr/lib/ld.so.1
// Instead, interpret the C declaration
// void *_Cvar_stderr = &stderr;
// as making _Cvar_stderr the name of a GOT entry
- // for stderr. This is separate from the usual GOT entry,
+ // for stderr. This is separate from the usual GOT entry,
// just in case the C code assigns to the variable,
// and of course it only works for single pointers,
// but we only need to support cgo and that's all it needs.
ld.Thearch.Lput = ld.Lputl
ld.Thearch.Wput = ld.Wputl
ld.Thearch.Vput = ld.Vputl
+ ld.Thearch.Append16 = ld.Append16l
+ ld.Thearch.Append32 = ld.Append32l
+ ld.Thearch.Append64 = ld.Append64l
ld.Thearch.Linuxdynld = "/lib/ld-linux.so.2"
ld.Thearch.Freebsddynld = "/usr/libexec/ld-elf.so.1"
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// preEncode populates the unexported fields to be used by encode
-// (with suffix X) from the corresponding exported fields. The
+// (with suffix X) from the corresponding exported fields. The
// exported fields are cleared up to facilitate testing.
func (p *Profile) preEncode() {
strings := make(map[string]int)
if s, addrs := extractHexAddresses(l); len(s) > 0 {
for _, addr := range addrs {
// Addresses from stack traces point to the next instruction after
- // each call. Adjust by -1 to land somewhere on the actual call
+ // each call. Adjust by -1 to land somewhere on the actual call
// (except for the leaf, which is not a call).
if len(sloc) > 0 {
addr--
// 3rd word -- 0
//
// Addresses from stack traces may point to the next instruction after
-// each call. Optionally adjust by -1 to land somewhere on the actual
+// each call. Optionally adjust by -1 to land somewhere on the actual
// call (except for the leaf, which is not a call).
func parseCPUSamples(b []byte, parse func(b []byte) (uint64, []byte), adjust bool, p *Profile) ([]byte, map[uint64]*Location, error) {
locs := make(map[uint64]*Location)
var sloc []*Location
for i, addr := range addrs {
// Addresses from stack traces point to the next instruction after
- // each call. Adjust by -1 to land somewhere on the actual call
+ // each call. Adjust by -1 to land somewhere on the actual call
// (except for the leaf, which is not a call).
if i > 0 {
addr--
var sloc []*Location
for i, addr := range addrs {
// Addresses from stack traces point to the next instruction after
- // each call. Adjust by -1 to land somewhere on the actual call
+ // each call. Adjust by -1 to land somewhere on the actual call
// (except for the leaf, which is not a call).
if i > 0 {
addr--
var sloc []*Location
for i, addr := range addrs {
// Addresses from stack traces point to the next instruction after
- // each call. Adjust by -1 to land somewhere on the actual call
+ // each call. Adjust by -1 to land somewhere on the actual call
// (except for the leaf, which is not a call).
if i > 0 {
addr--
filenameX int64
}
-// Parse parses a profile and checks for its validity. The input
+// Parse parses a profile and checks for its validity. The input
// may be a gzip-compressed encoded protobuf or one of many legacy
// profile formats which may be unsupported in the future.
func Parse(r io.Reader) (*Profile, error) {
return err
}
-// CheckValid tests whether the profile is valid. Checks include, but are
+// CheckValid tests whether the profile is valid. Checks include, but are
// not limited to:
// - len(Profile.Sample[n].value) == len(Profile.value_unit)
// - Sample.id has a corresponding Profile.Location
)
// Prune removes all nodes beneath a node matching dropRx, and not
-// matching keepRx. If the root node of a Sample matches, the sample
+// matching keepRx. If the root node of a Sample matches, the sample
// will have an empty stack.
func (p *Profile) Prune(dropRx, keepRx *regexp.Regexp) {
prune := make(map[uint64]bool)
// callgrindName implements the callgrind naming compression scheme.
// For names not previously seen returns "(N) name", where N is a
-// unique index. For names previously seen returns "(N)" where N is
+// unique index. For names previously seen returns "(N)" where N is
// the index returned the first time.
func callgrindName(names map[string]int, name string) string {
if name == "" {
addressOrder
)
-// sort reoders the entries in a report based on the specified
+// sort reorders the entries in a report based on the specified
// ordering criteria. The result is sorted in decreasing order for
// numeric quantities, alphabetically for text, and increasing for
// addresses.
return fnodes, filename
}
-// adjustSourcePath adjusts the pathe for a source file by trimmming
+// adjustSourcePath adjusts the path for a source file by trimming
// known prefixes and searching for the file on all parents of the
// current working dir.
func adjustSourcePath(path string) (*os.File, string, error) {
// Symbolize symbolizes profile p by parsing data returned by a
// symbolz handler. syms receives the symbolz query (hex addresses
-// separated by '+') and returns the symbolz output in a string. It
+// separated by '+') and returns the symbolz output in a string. It
// symbolizes all locations based on their addresses, regardless of
// mapping.
func Symbolize(source string, syms func(string, string) ([]byte, error), p *profile.Profile) error {
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
--- /dev/null
+Vet is a tool that checks correctness of Go programs. It runs a suite of tests,
+each tailored to check for a particular class of errors. Examples include incorrect
+Printf format verbs or malformed build tags.
+
+Over time many checks have been added to vet's suite, but many more have been
+rejected as not appropriate for the tool. The criteria applied when selecting which
+checks to add are:
+
+Correctness:
+
+Vet's tools are about correctness, not style. A vet check must identify real or
+potential bugs that could cause incorrect compilation or execution. A check that
+only identifies stylistic points or alternative correct approaches to a situation
+is not acceptable.
+
+Frequency:
+
+Vet is run every day by many programmers, often as part of every compilation or
+submission. The cost in execution time is considerable, especially in aggregate,
+so checks must be likely enough to find real problems that they are worth the
+overhead of the added check. A new check that finds only a handful of problems
+across all existing programs, even if the problem is significant, is not worth
+adding to the suite everyone runs daily.
+
+Precision:
+
+Most of vet's checks are heuristic and can generate both false positives (flagging
+correct programs) and false negatives (not flagging incorrect ones). The rate of
+both these failures must be very small. A check that is too noisy will be ignored
+by programmers overwhelmed by its output; a check that misses too many of the
+cases it's looking for will give a false sense of security. Neither is acceptable.
+A vet check must be accurate enough that everything it reports is worth examining,
+and complete enough to encourage real confidence.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// cgoBaseType tries to look through type conversions involving
-// unsafe.Pointer to find the real type. It converts:
+// unsafe.Pointer to find the real type. It converts:
// unsafe.Pointer(x) => x
// *(*unsafe.Pointer)(unsafe.Pointer(&x)) => x
func cgoBaseType(f *File, arg ast.Expr) types.Type {
}
// typeOKForCgoCall returns true if the type of arg is OK to pass to a
-// C function using cgo. This is not true for Go types with embedded
+// C function using cgo. This is not true for Go types with embedded
// pointers.
func typeOKForCgoCall(t types.Type) bool {
if t == nil {
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Locks that are erroneously passed by value.
-Documentation examples
+Tests, benchmarks and documentation examples
-Flag: -example
+Flag: -tests
-Mistakes involving example tests, including examples with incorrect names or
-function signatures, or that document identifiers not in the package.
+Mistakes involving tests including functions with incorrect names or signatures
+and example tests that document identifiers not in the package.
Methods
+++ /dev/null
-// Copyright 2015 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package main
-
-import (
- "go/ast"
- "go/types"
- "strings"
- "unicode"
- "unicode/utf8"
-)
-
-func init() {
- register("example",
- "check for common mistaken usages of documentation examples",
- checkExample,
- funcDecl)
-}
-
-func isExampleSuffix(s string) bool {
- r, size := utf8.DecodeRuneInString(s)
- return size > 0 && unicode.IsLower(r)
-}
-
-// checkExample walks the documentation example functions checking for common
-// mistakes of misnamed functions, failure to map functions to existing
-// identifiers, etc.
-func checkExample(f *File, node ast.Node) {
- if !strings.HasSuffix(f.name, "_test.go") {
- return
- }
- var (
- pkg = f.pkg
- pkgName = pkg.typesPkg.Name()
- scopes = []*types.Scope{pkg.typesPkg.Scope()}
- lookup = func(name string) types.Object {
- for _, scope := range scopes {
- if o := scope.Lookup(name); o != nil {
- return o
- }
- }
- return nil
- }
- )
- if strings.HasSuffix(pkgName, "_test") {
- // Treat 'package foo_test' as an alias for 'package foo'.
- var (
- basePkg = strings.TrimSuffix(pkgName, "_test")
- pkg = f.pkg
- )
- for _, p := range pkg.typesPkg.Imports() {
- if p.Name() == basePkg {
- scopes = append(scopes, p.Scope())
- break
- }
- }
- }
- fn, ok := node.(*ast.FuncDecl)
- if !ok {
- // Ignore non-functions.
- return
- }
- var (
- fnName = fn.Name.Name
- report = func(format string, args ...interface{}) { f.Badf(node.Pos(), format, args...) }
- )
- if fn.Recv != nil || !strings.HasPrefix(fnName, "Example") {
- // Ignore methods and types not named "Example".
- return
- }
- if params := fn.Type.Params; len(params.List) != 0 {
- report("%s should be niladic", fnName)
- }
- if results := fn.Type.Results; results != nil && len(results.List) != 0 {
- report("%s should return nothing", fnName)
- }
- if fnName == "Example" {
- // Nothing more to do.
- return
- }
- if filesRun && !includesNonTest {
- // The coherence checks between a test and the package it tests
- // will report false positives if no non-test files have
- // been provided.
- return
- }
- var (
- exName = strings.TrimPrefix(fnName, "Example")
- elems = strings.SplitN(exName, "_", 3)
- ident = elems[0]
- obj = lookup(ident)
- )
- if ident != "" && obj == nil {
- // Check ExampleFoo and ExampleBadFoo.
- report("%s refers to unknown identifier: %s", fnName, ident)
- // Abort since obj is absent and no subsequent checks can be performed.
- return
- }
- if elemCnt := strings.Count(exName, "_"); elemCnt == 0 {
- // Nothing more to do.
- return
- }
- mmbr := elems[1]
- if ident == "" {
- // Check Example_suffix and Example_BadSuffix.
- if residual := strings.TrimPrefix(exName, "_"); !isExampleSuffix(residual) {
- report("%s has malformed example suffix: %s", fnName, residual)
- }
- return
- }
- if !isExampleSuffix(mmbr) {
- // Check ExampleFoo_Method and ExampleFoo_BadMethod.
- if obj, _, _ := types.LookupFieldOrMethod(obj.Type(), true, obj.Pkg(), mmbr); obj == nil {
- report("%s refers to unknown field or method: %s.%s", fnName, ident, mmbr)
- }
- }
- if len(elems) == 3 && !isExampleSuffix(elems[2]) {
- // Check ExampleFoo_Method_suffix and ExampleFoo_Method_Badsuffix.
- report("%s has malformed example suffix: %s", fnName, elems[2])
- }
- return
-}
return report[name].isTrue()
}
-// setExit sets the value for os.Exit when it is called, later. It
+// setExit sets the value for os.Exit when it is called, later. It
// remembers the highest value.
func setExit(err int) {
if err > exitCode {
// expression instead of the inner part with the actual error, the
// precision can mislead.
posn := f.fset.Position(pos)
- return fmt.Sprintf("%s:%d: ", posn.Filename, posn.Line)
+ return fmt.Sprintf("%s:%d", posn.Filename, posn.Line)
}
// Warn reports an error but does not set the exit code.
func (f *File) Warn(pos token.Pos, args ...interface{}) {
- fmt.Fprint(os.Stderr, f.loc(pos)+fmt.Sprintln(args...))
+ fmt.Fprintf(os.Stderr, "%s: %s", f.loc(pos), fmt.Sprintln(args...))
}
// Warnf reports a formatted error but does not set the exit code.
func (f *File) Warnf(pos token.Pos, format string, args ...interface{}) {
- fmt.Fprintf(os.Stderr, f.loc(pos)+format+"\n", args...)
+ fmt.Fprintf(os.Stderr, "%s: %s\n", f.loc(pos), fmt.Sprintf(format, args...))
}
// walkFile walks the file's tree.
}
// canonicalMethods lists the input and output types for Go methods
-// that are checked using dynamic interface checks. Because the
+// that are checked using dynamic interface checks. Because the
// checks are dynamic, such methods would not cause a compile error
// if they have the wrong signature: instead the dynamic check would
-// fail, sometimes mysteriously. If a method is found with a name listed
+// fail, sometimes mysteriously. If a method is found with a name listed
// here but not the input/output types listed here, vet complains.
//
// A few of the canonical methods have very common names.
// To do that, the arguments that have a = prefix are treated as
// signals that the canonical meaning is intended: if a Scan
// method doesn't have a fmt.ScanState as its first argument,
-// we let it go. But if it does have a fmt.ScanState, then the
+// we let it go. But if it does have a fmt.ScanState, then the
// rest has to match.
var canonicalMethods = map[string]MethodSig{
// "Flush": {{}, {"error"}}, // http.Flusher and jpeg.writer conflict
}
// printList records the unformatted-print functions. The value is the location
-// of the first parameter to be printed. Names are lower-cased so the lookup is
+// of the first parameter to be printed. Names are lower-cased so the lookup is
// case insensitive.
var printList = map[string]int{
"error": 0,
// the shadowing identifier.
span, ok := f.pkg.spans[shadowed]
if !ok {
- f.Badf(ident.Pos(), "internal error: no range for %s", ident.Name)
+ f.Badf(ident.Pos(), "internal error: no range for %q", ident.Name)
return
}
if !span.contains(ident.Pos()) {
}
// Don't complain if the types differ: that implies the programmer really wants two different things.
if types.Identical(obj.Type(), shadowed.Type()) {
- f.Badf(ident.Pos(), "declaration of %s shadows declaration at %s", obj.Name(), f.loc(shadowed.Pos()))
+ f.Badf(ident.Pos(), "declaration of %q shadows declaration at %s", obj.Name(), f.loc(shadowed.Pos()))
}
}
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
_ = err
}
if f != nil {
- _, err := f.Read(buf) // ERROR "declaration of err shadows declaration at testdata/shadow.go:13"
+ _, err := f.Read(buf) // ERROR "declaration of .err. shadows declaration at testdata/shadow.go:13"
if err != nil {
return err
}
_ = i
}
if f != nil {
- x := one() // ERROR "declaration of x shadows declaration at testdata/shadow.go:14"
- var _, err = f.Read(buf) // ERROR "declaration of err shadows declaration at testdata/shadow.go:13"
+ x := one() // ERROR "declaration of .x. shadows declaration at testdata/shadow.go:14"
+ var _, err = f.Read(buf) // ERROR "declaration of .err. shadows declaration at testdata/shadow.go:13"
if x == 1 && err != nil {
return err
}
if shadowTemp := shadowTemp; true { // OK: obviously intentional idiomatic redeclaration
var f *os.File // OK because f is not mentioned later in the function.
// The declaration of x is a shadow because x is mentioned below.
- var x int // ERROR "declaration of x shadows declaration at testdata/shadow.go:14"
+ var x int // ERROR "declaration of .x. shadows declaration at testdata/shadow.go:14"
_, _, _ = x, f, shadowTemp
}
// Use a couple of variables to trigger shadowing errors.
-// Test of examples.
-
package testdata
+import (
+ "testing"
+)
+
// Buf is a ...
type Buf []byte
func ExamplePuffer_Append() // ERROR "ExamplePuffer_Append refers to unknown identifier: Puffer"
func ExamplePuffer_suffix() // ERROR "ExamplePuffer_suffix refers to unknown identifier: Puffer"
+
+func nonTest() {} // OK because it doesn't start with "Test".
+
+func (Buf) TesthasReceiver() {} // OK because it has a receiver.
+
+func TestOKSuffix(*testing.T) {} // OK because first char after "Test" is Uppercase.
+
+func TestÜnicodeWorks(*testing.T) {} // OK because the first char after "Test" is Uppercase.
+
+func TestbadSuffix(*testing.T) {} // ERROR "first letter after 'Test' must not be lowercase"
+
+func TestemptyImportBadSuffix(*T) {} // ERROR "first letter after 'Test' must not be lowercase"
+
+func Test(*testing.T) {} // OK "Test" on its own is considered a test.
+
+func Testify() {} // OK because it takes no parameters.
+
+func TesttooManyParams(*testing.T, string) {} // OK because it takes too many parameters.
+
+func TesttooManyNames(a, b *testing.T) {} // OK because it takes too many names.
+
+func TestnoTParam(string) {} // OK because it doesn't take a *testing.T
+
+func BenchmarkbadSuffix(*testing.B) {} // ERROR "first letter after 'Benchmark' must not be lowercase"
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+import (
+ "go/ast"
+ "go/types"
+ "strings"
+ "unicode"
+ "unicode/utf8"
+)
+
+func init() {
+ register("tests",
+ "check for common mistaken usages of tests/documentation examples",
+ checkTestFunctions,
+ funcDecl)
+}
+
+func isExampleSuffix(s string) bool {
+ r, size := utf8.DecodeRuneInString(s)
+ return size > 0 && unicode.IsLower(r)
+}
+
+func isTestSuffix(name string) bool {
+ if len(name) == 0 {
+ // "Test" is ok.
+ return true
+ }
+ r, _ := utf8.DecodeRuneInString(name)
+ return !unicode.IsLower(r)
+}
+
+func isTestParam(typ ast.Expr, wantType string) bool {
+ ptr, ok := typ.(*ast.StarExpr)
+ if !ok {
+ // Not a pointer.
+ return false
+ }
+ // No easy way of making sure it's a *testing.T or *testing.B:
+ // ensure the name of the type matches.
+ if name, ok := ptr.X.(*ast.Ident); ok {
+ return name.Name == wantType
+ }
+ if sel, ok := ptr.X.(*ast.SelectorExpr); ok {
+ return sel.Sel.Name == wantType
+ }
+ return false
+}
+
+func lookup(name string, scopes []*types.Scope) types.Object {
+ for _, scope := range scopes {
+ if o := scope.Lookup(name); o != nil {
+ return o
+ }
+ }
+ return nil
+}
+
+func extendedScope(pkg *Package) []*types.Scope {
+ scopes := []*types.Scope{pkg.typesPkg.Scope()}
+
+ pkgName := pkg.typesPkg.Name()
+	if strings.HasSuffix(pkgName, "_test") {
+ basePkg := strings.TrimSuffix(pkgName, "_test")
+ for _, p := range pkg.typesPkg.Imports() {
+ if p.Name() == basePkg {
+ scopes = append(scopes, p.Scope())
+ break
+ }
+ }
+ }
+ return scopes
+}
+
+func checkExample(fn *ast.FuncDecl, pkg *Package, report reporter) {
+ fnName := fn.Name.Name
+ if params := fn.Type.Params; len(params.List) != 0 {
+ report("%s should be niladic", fnName)
+ }
+ if results := fn.Type.Results; results != nil && len(results.List) != 0 {
+ report("%s should return nothing", fnName)
+ }
+
+ if filesRun && !includesNonTest {
+ // The coherence checks between a test and the package it tests
+ // will report false positives if no non-test files have
+ // been provided.
+ return
+ }
+
+ if fnName == "Example" {
+ // Nothing more to do.
+ return
+ }
+
+ var (
+ exName = strings.TrimPrefix(fnName, "Example")
+ elems = strings.SplitN(exName, "_", 3)
+ ident = elems[0]
+ obj = lookup(ident, extendedScope(pkg))
+ )
+ if ident != "" && obj == nil {
+ // Check ExampleFoo and ExampleBadFoo.
+ report("%s refers to unknown identifier: %s", fnName, ident)
+ // Abort since obj is absent and no subsequent checks can be performed.
+ return
+ }
+ if len(elems) < 2 {
+ // Nothing more to do.
+ return
+ }
+
+ if ident == "" {
+ // Check Example_suffix and Example_BadSuffix.
+ if residual := strings.TrimPrefix(exName, "_"); !isExampleSuffix(residual) {
+ report("%s has malformed example suffix: %s", fnName, residual)
+ }
+ return
+ }
+
+ mmbr := elems[1]
+ if !isExampleSuffix(mmbr) {
+ // Check ExampleFoo_Method and ExampleFoo_BadMethod.
+ if obj, _, _ := types.LookupFieldOrMethod(obj.Type(), true, obj.Pkg(), mmbr); obj == nil {
+ report("%s refers to unknown field or method: %s.%s", fnName, ident, mmbr)
+ }
+ }
+ if len(elems) == 3 && !isExampleSuffix(elems[2]) {
+ // Check ExampleFoo_Method_suffix and ExampleFoo_Method_Badsuffix.
+ report("%s has malformed example suffix: %s", fnName, elems[2])
+ }
+}
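The name parsing above can be illustrated in isolation. This standalone sketch (illustrative only, not part of the patch) mirrors the TrimPrefix/SplitN steps that break an example name into identifier, member, and suffix elements:

```go
package main

import (
	"fmt"
	"strings"
)

// splitExampleName strips the "Example" prefix and splits the rest
// on "_" into at most three elements, as checkExample does above.
func splitExampleName(name string) []string {
	exName := strings.TrimPrefix(name, "Example")
	return strings.SplitN(exName, "_", 3)
}

func main() {
	fmt.Println(splitExampleName("ExampleFoo_Method_suffix")) // [Foo Method suffix]
	fmt.Println(splitExampleName("Example_suffix"))           // [ suffix]
}
```

For `Example_suffix` the first element is empty, which is why checkExample treats an empty `ident` as the package-level `Example_suffix` form.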
+
+func checkTest(fn *ast.FuncDecl, prefix string, report reporter) {
+ // Want functions with 0 results and 1 parameter.
+ if fn.Type.Results != nil && len(fn.Type.Results.List) > 0 ||
+ fn.Type.Params == nil ||
+ len(fn.Type.Params.List) != 1 ||
+ len(fn.Type.Params.List[0].Names) > 1 {
+ return
+ }
+
+ // The param must look like a *testing.T or *testing.B.
+ if !isTestParam(fn.Type.Params.List[0].Type, prefix[:1]) {
+ return
+ }
+
+ if !isTestSuffix(fn.Name.Name[len(prefix):]) {
+ report("%s has malformed name: first letter after '%s' must not be lowercase", fn.Name.Name, prefix)
+ }
+}
+
+type reporter func(format string, args ...interface{})
+
+// checkTestFunctions walks Test, Benchmark and Example functions checking
+// for malformed names, wrong signatures, and examples that document
+// nonexistent identifiers.
+func checkTestFunctions(f *File, node ast.Node) {
+ if !strings.HasSuffix(f.name, "_test.go") {
+ return
+ }
+
+ fn, ok := node.(*ast.FuncDecl)
+ if !ok || fn.Recv != nil {
+ // Ignore non-functions or functions with receivers.
+ return
+ }
+
+ report := func(format string, args ...interface{}) { f.Badf(node.Pos(), format, args...) }
+
+ switch {
+ case strings.HasPrefix(fn.Name.Name, "Example"):
+ checkExample(fn, f.pkg, report)
+ case strings.HasPrefix(fn.Name.Name, "Test"):
+ checkTest(fn, "Test", report)
+ case strings.HasPrefix(fn.Name.Name, "Benchmark"):
+ checkTest(fn, "Benchmark", report)
+ }
+}
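The naming rule that checkTest enforces can be sketched on its own. This standalone program (an illustration of the rule described above, not vet's actual code) shows that the first rune after the `Test` or `Benchmark` prefix must not be lowercase, and that the bare prefix alone is acceptable:

```go
package main

import (
	"fmt"
	"unicode"
	"unicode/utf8"
)

// okTestName reports whether a Test/Benchmark function name passes
// the rule: the first rune after the prefix must not be lowercase.
func okTestName(name, prefix string) bool {
	rest := name[len(prefix):]
	if rest == "" {
		return true // "Test" on its own is a test.
	}
	r, _ := utf8.DecodeRuneInString(rest)
	return !unicode.IsLower(r)
}

func main() {
	fmt.Println(okTestName("TestbadSuffix", "Test"))    // false
	fmt.Println(okTestName("TestÜnicodeWorks", "Test")) // true
}
```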
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// for clarity.
const eof = 0
-// The parser uses the type <prefix>Lex as a lexer. It must provide
+// The parser uses the type <prefix>Lex as a lexer. It must provide
// the methods Lex(*<prefix>SymType) int and Error(string).
type exprLex struct {
line []byte
peek rune
}
-// The parser calls this method to get each new token. This
+// The parser calls this method to get each new token. This
// implementation returns operators and NUM.
func (x *exprLex) Lex(yylval *exprSymType) int {
for {
blockCRC uint32
wantBlockCRC uint32
setupDone bool // true if we have parsed the bzip2 header.
- blockSize int // blockSize in bytes, i.e. 900 * 1024.
+ blockSize int // blockSize in bytes, i.e. 900 * 1000.
eof bool
buf []byte // stores Burrows-Wheeler transformed data.
c [256]uint // the `C' array for the inverse BWT.
}
bz2.fileCRC = 0
- bz2.blockSize = 100 * 1024 * (int(level) - '0')
+ bz2.blockSize = 100 * 1000 * (int(level) - '0')
if bz2.blockSize > len(bz2.tt) {
bz2.tt = make([]uint32, bz2.blockSize)
}
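The corrected constant above reflects that bzip2 block sizes are multiples of 100 000 bytes (decimal, not 100 KiB). A minimal sketch of the level-to-size mapping, where the level byte `'1'` through `'9'` comes from the stream header:

```go
package main

import "fmt"

// blockSize maps a bzip2 header level byte ('1'..'9') to the block
// size in bytes: 100 kB per level, where 1 kB = 1000 bytes.
func blockSize(level byte) int {
	return 100 * 1000 * (int(level) - '0')
}

func main() {
	fmt.Println(blockSize('1'), blockSize('9')) // 100000 900000
}
```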
const rand3BZ2Hex = "425a68393141592653593be669d00000327ffffffffffffffffffffffffffffffffffff7ffffffffffffffffffffffffffffffc002b3b2b1b6e2bae400004c00132300004c0d268c004c08c0130026001a008683234c0684c34008c230261a04c0260064d07a8d00034000d27a1268c9931a8d327a3427a41faa69ea0da264c1a34219326869b51b49a6469a3268c689fa53269a62794687a9a68f5189994c9e487a8f534fd49a3d34043629e8c93d04da4f4648d30d4f44d3234c4d3023d0840680984d309934c234d3131a000640984f536a6132601300130130c8d00d04d1841ea7a8d31a02609b40023460010c01a34d4c1a0d04d3069306810034d0d0d4c0046130d034d0131a9a64d321804c68003400098344c13000991808c0001a00000000098004d3d4da4604c47a13012140aadf8d673c922c607ef6212a8c0403adea4b28aee578900e653b9cdeb8d11e6b838815f3ebaad5a01c5408d84a332170aff8734d4e06612d3c2889f31925fb89e33561f5100ae89b1f7047102e729373d3667e58d73aaa80fa7be368a1cc2dadd81d81ec8e1b504bd772ca31d03649269b01ceddaca07bf3d4eba24de141be3f86f93601e03714c0f64654671684f9f9528626fd4e1b76753dc0c54b842486b8d59d8ab314e86ca818e7a1f079463cbbd70d9b79b283c7edc419406311022e4be98c2c1374df9cdde2d008ce1d00e5f06ad1024baf555631f70831fc1023034e62be7c4bcb648caf276963ffa20e96bb50377fe1c113da0db4625b50741c35a058edb009c6ee5dbf93b8a6b060eec568180e8db791b82aab96cbf4326ca98361461379425ba8dcc347be670bdba7641883e5526ae3d833f6e9cb9bac9557747c79e206151072f7f0071dff3880411846f66bf4075c7462f302b53cb3400a74cf35652ad5641ed33572fd54e7ed7f85f58a0acba89327e7c6be5c58cb71528b99df2431f1d0358f8d28d81d95292da631fb06701decabb205fac59ff0fb1df536afc681eece6ea658c4d9eaa45f1342aa1ff70bdaff2ddaf25ec88c22f12829a0553db1ec2505554cb17d7b282e213a5a2aa30431ded2bce665bb199d023840832fedb2c0c350a27291407ff77440792872137df281592e82076a05c64c345ffb058c64f7f7c207ef78420b7010520610f17e302cc4dfcfaef72a0ed091aab4b541eb0531bbe941ca2f792bf7b31ca6162882b68054a8470115bc2c19f2df2023f7800432b39b04d3a304e8085ba3f1f0ca5b1ba4d38d339e6084de979cdea6d0e244c6c9fa0366bd890621e3d30846f5e8497e21597b8f29bbf52c961a485dfbea647600da0fc1f25ce4d203a8352ece310c39073525044e7ac46acf2ed9120bae1b4f6f02364abfe343f80b2
90983160c103557af1c68416480d024cc31b6c06cfec011456f1e95c420a12b48b1c3fe220c2879a982fb099948ac440db844b9a112a5188c7783fd3b19593290785f908d95c9db4b280bafe89c1313aeec24772046d9bc089645f0d182a21184e143823c5f52de50e5d7e98d3d7ab56f5413bbccd1415c9bcff707def475b643fb7f29842582104d4cc1dbaaca8f10a2f44273c339e0984f2b1e06ab2f0771db01fafa8142298345f3196f23e5847bda024034b6f59b11c29e981c881456e40d211929fd4f766200258aad8212016322bd5c605790dcfdf1bd2a93d99c9b8f498722d311d7eae7ff420496a31804c55f4759a7b13aaaf5f7ce006c3a8a998897d5e0a504398c2b627852545baf440798bcc5cc049357cf3f17d9771e4528a1af3d77dc794a11346e1bdf5efe37a405b127b4c43b616d61fbc5dc914e14240ef99a7400"
const rand3Hex = "1744b384d68c042371244e13500d4bfb98c6244e3d71a5b700224420b59c593553f33bd786e3d0ce31626f511bc985f59d1a88aa38ba8ad6218d306abee60dd9172540232b95be1af146c69e72e5fde667a090dc3f93bdc5c5af0ab80acdbaa7a505f628c59dc0247b31a439cacf5010a94376d71521df08c178b02fb96fdb1809144ea38c68536187c53201fea8631fb0a880b4451ccdca7cc61f6aafca21cc7449d920599db61789ac3b1e164b3390124f95022aeea39ccca3ec1053f4fa10de2978e2861ea58e477085c2220021a0927aa94c5d0006b5055abba340e4f9eba22e969978dfd18e278a8b89d877328ae34268bc0174cfe211954c0036f078025217d1269fac1932a03b05a0b616012271bbe1fb554171c7a59b196d8a4479f45a77931b5d97aaf6c0c673cbe597b79b96e2a0c1eae2e66e46ccc8c85798e23ffe972ebdaa3f6caea243c004e60321eb47cd79137d78fd0613be606feacc5b3637bdc96a89c13746db8cad886f3ccf912b2178c823bcac395f06d28080269bdca2debf3419c66c690fd1adcfbd53e32e79443d7a42511a84cb22ca94fffad9149275a075b2f8ae0b021dcde9bf62b102db920733b897560518b06e1ad7f4b03458493ddaa7f4fa2c1609f7a1735aeeb1b3e2cea3ab45fc376323cc91873b7e9c90d07c192e38d3f5dfc9bfab1fd821c854da9e607ea596c391c7ec4161c6c4493929a8176badaa5a5af7211c623f29643a937677d3df0da9266181b7c4da5dd40376db677fe8f4a1dc456adf6f33c1e37cec471dd318c2647644fe52f93707a77da7d1702380a80e14cc0fdce7bf2eed48a529090bae0388ee277ce6c7018c5fb00b88362554362205c641f0d0fab94fd5b8357b5ff08b207fee023709bc126ec90cfb17c006754638f8186aaeb1265e80be0c1189ec07d01d5f6f96cb9ce82744147d18490de7dc72862f42f024a16968891a356f5e7e0e695d8c933ba5b5e43ad4c4ade5399bc2cae9bb6189b7870d7f22956194d277f28b10e01c10c6ffe3e065f7e2d6d056aa790db5649ca84dc64c35566c0af1b68c32b5b7874aaa66467afa44f40e9a0846a07ae75360a641dd2acc69d93219b2891f190621511e62a27f5e4fbe641ece1fa234fc7e9a74f48d2a760d82160d9540f649256b169d1fed6fbefdc491126530f3cbad7913e19fbd7aa53b1e243fbf28d5f38c10ebd77c8b986775975cc1d619efb27cdcd733fa1ca36cffe9c0a33cc9f02463c91a886601fd349efee85ef1462065ef9bd2c8f533220ad93138b8382d5938103ab25b2d9af8ae106e1211eb9b18793fba033900c809c02cd6d17e2f3e6fc84dae873411f8e87c3f0a8f1765b7825d185ce3730f299c3028d4a62da9ee95c2b870fb70c7
9370d485f9d5d9acb78926d20444033d960524d2776dc31988ec7c0dbf23b9905d"
+const badBlockSize = "425a683131415926535936dc55330063ffc0006000200020a40830008b0008b8bb9229c28481b6e2a998"
+
const (
digits = iota
twain
if err != nil {
b.Fatal(err)
}
- b.SetBytes(int64(len(compressed)))
+
+ // Determine the uncompressed size of testfile.
+ uncompressedSize, err := io.Copy(ioutil.Discard, NewReader(bytes.NewReader(compressed)))
+ if err != nil {
+ b.Fatal(err)
+ }
+
+ b.SetBytes(uncompressedSize)
+ b.ReportAllocs()
+ b.ResetTimer()
+
for i := 0; i < b.N; i++ {
r := bytes.NewReader(compressed)
io.Copy(ioutil.Discard, NewReader(r))
}
}
+func TestBadBlockSize(t *testing.T) {
+ // Tests https://golang.org/issue/13941.
+ _, err := decompressHex(badBlockSize)
+ if err == nil {
+ t.Errorf("unexpected success")
+ }
+}
+
var bufferOverrunBase64 string = `
QlpoNTFBWSZTWTzyiGcACMP/////////////////////////////////3/7f3///
////4N/fCZODak2Xo44GIHZgkGzDRbFAuwAAKoFV7T6AO6qwA6APb6s2rOoAkAAD
+++ /dev/null
-// Copyright 2012 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package flate
-
-// forwardCopy is like the built-in copy function except that it always goes
-// forward from the start, even if the dst and src overlap.
-// It is equivalent to:
-// for i := 0; i < n; i++ {
-// mem[dst+i] = mem[src+i]
-// }
-func forwardCopy(mem []byte, dst, src, n int) {
- if dst <= src {
- copy(mem[dst:dst+n], mem[src:src+n])
- return
- }
- for {
- if dst >= src+n {
- copy(mem[dst:dst+n], mem[src:src+n])
- return
- }
- // There is some forward overlap. The destination
- // will be filled with a repeated pattern of mem[src:src+k].
- // We copy one instance of the pattern here, then repeat.
- // Each time around this loop k will double.
- k := dst - src
- copy(mem[dst:dst+k], mem[src:src+k])
- n -= k
- dst += k
- }
-}
+++ /dev/null
-// Copyright 2012 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package flate
-
-import (
- "testing"
-)
-
-func TestForwardCopy(t *testing.T) {
- testCases := []struct {
- dst0, dst1 int
- src0, src1 int
- want string
- }{
- {0, 9, 0, 9, "012345678"},
- {0, 5, 4, 9, "45678"},
- {4, 9, 0, 5, "01230"},
- {1, 6, 3, 8, "34567"},
- {3, 8, 1, 6, "12121"},
- {0, 9, 3, 6, "345"},
- {3, 6, 0, 9, "012"},
- {1, 6, 0, 9, "00000"},
- {0, 4, 7, 8, "7"},
- {0, 1, 6, 8, "6"},
- {4, 4, 6, 9, ""},
- {2, 8, 6, 6, ""},
- {0, 0, 0, 0, ""},
- }
- for _, tc := range testCases {
- b := []byte("0123456789")
- n := tc.dst1 - tc.dst0
- if tc.src1-tc.src0 < n {
- n = tc.src1 - tc.src0
- }
- forwardCopy(b, tc.dst0, tc.src0, n)
- got := string(b[tc.dst0 : tc.dst0+n])
- if got != tc.want {
- t.Errorf("dst=b[%d:%d], src=b[%d:%d]: got %q, want %q",
- tc.dst0, tc.dst1, tc.src0, tc.src1, got, tc.want)
- }
- // Check that the bytes outside of dst[:n] were not modified.
- for i, x := range b {
- if i >= tc.dst0 && i < tc.dst0+n {
- continue
- }
- if int(x) != '0'+i {
- t.Errorf("dst=b[%d:%d], src=b[%d:%d]: copy overrun at b[%d]: got '%c', want '%c'",
- tc.dst0, tc.dst1, tc.src0, tc.src1, i, x, '0'+i)
- }
- }
- }
-}
}
// NewWriterDict is like NewWriter but initializes the new
-// Writer with a preset dictionary. The returned Writer behaves
+// Writer with a preset dictionary. The returned Writer behaves
// as if the dictionary had been written to it without producing
-// any compressed output. The compressed data written to w
+// any compressed output. The compressed data written to w
// can only be decompressed by a Reader initialized with the
// same dictionary.
func NewWriterDict(w io.Writer, level int, dict []byte) (*Writer, error) {
// not necessarily the case: the write Flush may emit
// some extra framing bits that are not necessary
// to process to obtain the first half of the uncompressed
- // data. The test ran correctly most of the time, because
+ // data. The test ran correctly most of the time, because
// the background goroutine had usually read even
// those extra bits by now, but it's not a useful thing to
// check.
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package flate
+
+// dictDecoder implements the LZ77 sliding dictionary as used in decompression.
+// LZ77 decompresses data through sequences of two forms of commands:
+//
+// * Literal insertions: Runs of one or more symbols are inserted into the data
+// stream as is. This is accomplished through the writeByte method for a
+// single symbol, or combinations of writeSlice/writeMark for multiple symbols.
+// Any valid stream must start with a literal insertion if no preset dictionary
+// is used.
+//
+// * Backward copies: Runs of one or more symbols are copied from previously
+// emitted data. Backward copies come as the tuple (dist, length) where dist
+// determines how far back in the stream to copy from and length determines how
+// many bytes to copy. Note that it is valid for the length to be greater than
+// the distance. Since LZ77 uses forward copies, that situation is used to
+// perform a form of run-length encoding on repeated runs of symbols.
+// The writeCopy and tryWriteCopy methods are used to implement this command.
+//
+// For performance reasons, this implementation performs little to no sanity
+// checking of its arguments. As such, the invariants documented for each
+// method call must be respected.
+type dictDecoder struct {
+ hist []byte // Sliding window history
+
+ // Invariant: 0 <= rdPos <= wrPos <= len(hist)
+ wrPos int // Current output position in buffer
+ rdPos int // Have emitted hist[:rdPos] already
+ full bool // Has a full window length been written yet?
+}
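The run-length behavior of a backward copy whose length exceeds its distance can be demonstrated in isolation. This standalone sketch (illustrative only, not the dictDecoder implementation) copies forward one byte at a time, so the destination reuses bytes that were just written:

```go
package main

import "fmt"

// forwardCopy appends length bytes copied from dist bytes back in
// hist, one byte at a time. When length > dist, the copy reads bytes
// it has just written, producing a repeated pattern (LZ77-style RLE).
func forwardCopy(hist []byte, dist, length int) []byte {
	pos := len(hist)
	hist = append(hist, make([]byte, length)...)
	for i := 0; i < length; i++ {
		hist[pos+i] = hist[pos+i-dist]
	}
	return hist
}

func main() {
	// A (dist=2, length=6) copy over "ab" yields "abababab".
	fmt.Println(string(forwardCopy([]byte("ab"), 2, 6))) // abababab
}
```

The real writeCopy below achieves the same effect with bulk `copy` calls whose destination window grows each iteration, instead of a byte-at-a-time loop.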
+
+// init initializes dictDecoder to have a sliding window dictionary of the given
+// size. If a preset dict is provided, it will initialize the dictionary with
+// the contents of dict.
+func (dd *dictDecoder) init(size int, dict []byte) {
+ *dd = dictDecoder{hist: dd.hist}
+
+ if cap(dd.hist) < size {
+ dd.hist = make([]byte, size)
+ }
+ dd.hist = dd.hist[:size]
+
+ if len(dict) > len(dd.hist) {
+ dict = dict[len(dict)-len(dd.hist):]
+ }
+ dd.wrPos = copy(dd.hist, dict)
+ if dd.wrPos == len(dd.hist) {
+ dd.wrPos = 0
+ dd.full = true
+ }
+ dd.rdPos = dd.wrPos
+}
+
+// histSize reports the total amount of historical data in the dictionary.
+func (dd *dictDecoder) histSize() int {
+ if dd.full {
+ return len(dd.hist)
+ }
+ return dd.wrPos
+}
+
+// availRead reports the number of bytes that can be flushed by readFlush.
+func (dd *dictDecoder) availRead() int {
+ return dd.wrPos - dd.rdPos
+}
+
+// availWrite reports the available amount of output buffer space.
+func (dd *dictDecoder) availWrite() int {
+ return len(dd.hist) - dd.wrPos
+}
+
+// writeSlice returns a slice of the available buffer to write data to.
+//
+// This invariant will be kept: len(s) <= availWrite()
+func (dd *dictDecoder) writeSlice() []byte {
+ return dd.hist[dd.wrPos:]
+}
+
+// writeMark advances the writer pointer by cnt.
+//
+// This invariant must be kept: 0 <= cnt <= availWrite()
+func (dd *dictDecoder) writeMark(cnt int) {
+ dd.wrPos += cnt
+}
+
+// writeByte writes a single byte to the dictionary.
+//
+// This invariant must be kept: 0 < availWrite()
+func (dd *dictDecoder) writeByte(c byte) {
+ dd.hist[dd.wrPos] = c
+ dd.wrPos++
+}
+
+// writeCopy copies a string at a given (dist, length) to the output.
+// This returns the number of bytes copied and may be less than the requested
+// length if the available space in the output buffer is too small.
+//
+// This invariant must be kept: 0 < dist <= histSize()
+func (dd *dictDecoder) writeCopy(dist, length int) int {
+ dstBase := dd.wrPos
+ dstPos := dstBase
+ srcPos := dstPos - dist
+ endPos := dstPos + length
+ if endPos > len(dd.hist) {
+ endPos = len(dd.hist)
+ }
+
+ // Copy non-overlapping section after destination position.
+ //
+ // This section is non-overlapping in that the copy length for this section
+ // is always less than or equal to the backwards distance. This can occur
+// if a distance refers to data that wraps around in the buffer.
+// Thus, a backwards copy is performed here; that is, the exact bytes in
+// the source prior to the copy are placed in the destination.
+ if srcPos < 0 {
+ srcPos += len(dd.hist)
+ dstPos += copy(dd.hist[dstPos:endPos], dd.hist[srcPos:])
+ srcPos = 0
+ }
+
+ // Copy possibly overlapping section before destination position.
+ //
+ // This section can overlap if the copy length for this section is larger
+ // than the backwards distance. This is allowed by LZ77 so that repeated
+ // strings can be succinctly represented using (dist, length) pairs.
+// Thus, a forwards copy is performed here; that is, the bytes copied
+// may depend on the resulting bytes in the destination as the copy
+// progresses. This is functionally equivalent to the following:
+ //
+ // for i := 0; i < endPos-dstPos; i++ {
+ // dd.hist[dstPos+i] = dd.hist[srcPos+i]
+ // }
+ // dstPos = endPos
+ //
+ for dstPos < endPos {
+ dstPos += copy(dd.hist[dstPos:endPos], dd.hist[srcPos:dstPos])
+ }
+
+ dd.wrPos = dstPos
+ return dstPos - dstBase
+}
+
+// tryWriteCopy tries to copy a string at a given (distance, length) to the
+// output. This specialized version is optimized for short distances.
+//
+// This method is designed to be inlined for performance reasons.
+//
+// This invariant must be kept: 0 < dist <= histSize()
+func (dd *dictDecoder) tryWriteCopy(dist, length int) int {
+ dstPos := dd.wrPos
+ endPos := dstPos + length
+ if dstPos < dist || endPos > len(dd.hist) {
+ return 0
+ }
+ dstBase := dstPos
+ srcPos := dstPos - dist
+
+ // Copy possibly overlapping section before destination position.
+loop:
+ dstPos += copy(dd.hist[dstPos:endPos], dd.hist[srcPos:dstPos])
+ if dstPos < endPos {
+ goto loop // Avoid for-loop so that this function can be inlined
+ }
+
+ dd.wrPos = dstPos
+ return dstPos - dstBase
+}
+
+// readFlush returns a slice of the historical buffer that is ready to be
+// emitted to the user. The data returned by readFlush must be fully consumed
+// before calling any other dictDecoder methods.
+func (dd *dictDecoder) readFlush() []byte {
+ toRead := dd.hist[dd.rdPos:dd.wrPos]
+ dd.rdPos = dd.wrPos
+ if dd.wrPos == len(dd.hist) {
+ dd.wrPos, dd.rdPos = 0, 0
+ dd.full = true
+ }
+ return toRead
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package flate
+
+import (
+ "bytes"
+ "strings"
+ "testing"
+)
+
+func TestDictDecoder(t *testing.T) {
+ const (
+ abc = "ABC\n"
+ fox = "The quick brown fox jumped over the lazy dog!\n"
+ poem = "The Road Not Taken\nRobert Frost\n" +
+ "\n" +
+ "Two roads diverged in a yellow wood,\n" +
+ "And sorry I could not travel both\n" +
+ "And be one traveler, long I stood\n" +
+ "And looked down one as far as I could\n" +
+ "To where it bent in the undergrowth;\n" +
+ "\n" +
+ "Then took the other, as just as fair,\n" +
+ "And having perhaps the better claim,\n" +
+ "Because it was grassy and wanted wear;\n" +
+ "Though as for that the passing there\n" +
+ "Had worn them really about the same,\n" +
+ "\n" +
+ "And both that morning equally lay\n" +
+ "In leaves no step had trodden black.\n" +
+ "Oh, I kept the first for another day!\n" +
+ "Yet knowing how way leads on to way,\n" +
+ "I doubted if I should ever come back.\n" +
+ "\n" +
+ "I shall be telling this with a sigh\n" +
+ "Somewhere ages and ages hence:\n" +
+ "Two roads diverged in a wood, and I-\n" +
+ "I took the one less traveled by,\n" +
+ "And that has made all the difference.\n"
+ )
+
+ var poemRefs = []struct {
+ dist int // Backward distance (0 if this is an insertion)
+ length int // Length of copy or insertion
+ }{
+ {0, 38}, {33, 3}, {0, 48}, {79, 3}, {0, 11}, {34, 5}, {0, 6}, {23, 7},
+ {0, 8}, {50, 3}, {0, 2}, {69, 3}, {34, 5}, {0, 4}, {97, 3}, {0, 4},
+ {43, 5}, {0, 6}, {7, 4}, {88, 7}, {0, 12}, {80, 3}, {0, 2}, {141, 4},
+ {0, 1}, {196, 3}, {0, 3}, {157, 3}, {0, 6}, {181, 3}, {0, 2}, {23, 3},
+ {77, 3}, {28, 5}, {128, 3}, {110, 4}, {70, 3}, {0, 4}, {85, 6}, {0, 2},
+ {182, 6}, {0, 4}, {133, 3}, {0, 7}, {47, 5}, {0, 20}, {112, 5}, {0, 1},
+ {58, 3}, {0, 8}, {59, 3}, {0, 4}, {173, 3}, {0, 5}, {114, 3}, {0, 4},
+ {92, 5}, {0, 2}, {71, 3}, {0, 2}, {76, 5}, {0, 1}, {46, 3}, {96, 4},
+ {130, 4}, {0, 3}, {360, 3}, {0, 3}, {178, 5}, {0, 7}, {75, 3}, {0, 3},
+ {45, 6}, {0, 6}, {299, 6}, {180, 3}, {70, 6}, {0, 1}, {48, 3}, {66, 4},
+ {0, 3}, {47, 5}, {0, 9}, {325, 3}, {0, 1}, {359, 3}, {318, 3}, {0, 2},
+ {199, 3}, {0, 1}, {344, 3}, {0, 3}, {248, 3}, {0, 10}, {310, 3}, {0, 3},
+ {93, 6}, {0, 3}, {252, 3}, {157, 4}, {0, 2}, {273, 5}, {0, 14}, {99, 4},
+ {0, 1}, {464, 4}, {0, 2}, {92, 4}, {495, 3}, {0, 1}, {322, 4}, {16, 4},
+ {0, 3}, {402, 3}, {0, 2}, {237, 4}, {0, 2}, {432, 4}, {0, 1}, {483, 5},
+ {0, 2}, {294, 4}, {0, 2}, {306, 3}, {113, 5}, {0, 1}, {26, 4}, {164, 3},
+ {488, 4}, {0, 1}, {542, 3}, {248, 6}, {0, 5}, {205, 3}, {0, 8}, {48, 3},
+ {449, 6}, {0, 2}, {192, 3}, {328, 4}, {9, 5}, {433, 3}, {0, 3}, {622, 25},
+ {615, 5}, {46, 5}, {0, 2}, {104, 3}, {475, 10}, {549, 3}, {0, 4}, {597, 8},
+ {314, 3}, {0, 1}, {473, 6}, {317, 5}, {0, 1}, {400, 3}, {0, 3}, {109, 3},
+ {151, 3}, {48, 4}, {0, 4}, {125, 3}, {108, 3}, {0, 2},
+ }
+
+ var got, want bytes.Buffer
+ var dd dictDecoder
+ dd.init(1<<11, nil)
+
+ var writeCopy = func(dist, length int) {
+ for length > 0 {
+ cnt := dd.tryWriteCopy(dist, length)
+ if cnt == 0 {
+ cnt = dd.writeCopy(dist, length)
+ }
+
+ length -= cnt
+ if dd.availWrite() == 0 {
+ got.Write(dd.readFlush())
+ }
+ }
+ }
+ var writeString = func(str string) {
+ for len(str) > 0 {
+ cnt := copy(dd.writeSlice(), str)
+ str = str[cnt:]
+ dd.writeMark(cnt)
+ if dd.availWrite() == 0 {
+ got.Write(dd.readFlush())
+ }
+ }
+ }
+
+ writeString(".")
+ want.WriteByte('.')
+
+ str := poem
+ for _, ref := range poemRefs {
+ if ref.dist == 0 {
+ writeString(str[:ref.length])
+ } else {
+ writeCopy(ref.dist, ref.length)
+ }
+ str = str[ref.length:]
+ }
+ want.WriteString(poem)
+
+ writeCopy(dd.histSize(), 33)
+ want.Write(want.Bytes()[:33])
+
+ writeString(abc)
+ writeCopy(len(abc), 59*len(abc))
+ want.WriteString(strings.Repeat(abc, 60))
+
+ writeString(fox)
+ writeCopy(len(fox), 9*len(fox))
+ want.WriteString(strings.Repeat(fox, 10))
+
+ writeString(".")
+ writeCopy(1, 9)
+ want.WriteString(strings.Repeat(".", 10))
+
+ writeString(strings.ToUpper(poem))
+ writeCopy(len(poem), 7*len(poem))
+ want.WriteString(strings.Repeat(strings.ToUpper(poem), 8))
+
+ writeCopy(dd.histSize(), 10)
+ want.Write(want.Bytes()[want.Len()-dd.histSize():][:10])
+
+ got.Write(dd.readFlush())
+ if got.String() != want.String() {
+ t.Errorf("final string mismatch:\ngot %q\nwant %q", got.String(), want.String())
+ }
+}
// The result is written into the codegen array, and the frequencies
// of each code is written into the codegenFreq array.
// Codes 0-15 are single byte codes. Codes 16-18 are followed by additional
-// information. Code badCode is an end marker
+// information. Code badCode is an end marker.
//
// numLiterals The number of literals in literalEncoding
// numOffsets The number of offsets in offsetEncoding
// This is fine because the output is always shorter than the input used
// so far.
codegen := w.codegen // cache
- // Copy the concatenated code sizes to codegen. Put a marker at the end.
+ // Copy the concatenated code sizes to codegen. Put a marker at the end.
copy(codegen[0:numLiterals], w.literalEncoding.codeBits)
copy(codegen[numLiterals:numLiterals+numOffsets], w.offsetEncoding.codeBits)
codegen[numLiterals+numOffsets] = badCode
// The cases of 0, 1, and 2 literals are handled by special case code.
//
// list An array of the literals with non-zero frequencies
-// and their associated frequencies. The array is in order of increasing
+// and their associated frequencies. The array is in order of increasing
// frequency, and has as its last element a special element with frequency
// MaxInt32
// maxBits The maximum number of bits that should be used to encode any literal.
list = list[0 : n+1]
list[n] = maxNode()
- // The tree can't have greater depth than n - 1, no matter what. This
+ // The tree can't have greater depth than n - 1, no matter what. This
// saves a little bit of work in some small cases
if maxBits > n-1 {
maxBits = n - 1
if l.needed--; l.needed == 0 {
// We've done everything we need to do for this level.
- // Continue calculating one level up. Fill in nextPairFreq
+ // Continue calculating one level up. Fill in nextPairFreq
// of that level with the sum of the two nodes we've just calculated on
// this level.
if l.level == maxBits {
h.codeBits = h.codeBits[0:len(freq)]
list = list[0:count]
if count <= 2 {
- // Handle the small cases here, because they are awkward for the general case code. With
+ // Handle the small cases here, because they are awkward for the general case code. With
// two or fewer literals, everything has bit length 1.
for i, node := range list {
// "list" is in order of increasing literal value.
// trees are permitted.
func (h *huffmanDecoder) init(bits []int) bool {
// Sanity enables additional runtime tests during Huffman
- // table construction. It's intended to be used during
+ // table construction. It's intended to be used during
// development to supplement the currently ad-hoc unit tests.
const sanity = false
// Check that the coding is complete (i.e., that we've
// assigned all 2-to-the-max possible bit sequences).
// Exception: To be compatible with zlib, we also need to
- // accept degenerate single-code codings. See also
+ // accept degenerate single-code codings. See also
// TestDegenerateHuffmanCoding.
if code != 1<<uint(max) && !(code == 1 && max == 1) {
return false
if n <= huffmanChunkBits {
for off := reverse; off < len(h.chunks); off += 1 << uint(n) {
// We should never need to overwrite
- // an existing chunk. Also, 0 is
+ // an existing chunk. Also, 0 is
// never a valid chunk, because the
// lower 4 "count" bits should be
// between 1 and 15.
if sanity {
// Above we've sanity checked that we never overwrote
- // an existing entry. Here we additionally check that
+ // an existing entry. Here we additionally check that
// we filled the tables completely.
for i, chunk := range h.chunks {
if chunk == 0 {
codebits *[numCodes]int
// Output history, buffer.
- hist *[maxHist]byte
- hp int // current output position in buffer
- hw int // have written hist[0:hw] already
- hfull bool // buffer has filled at least once
+ dict dictDecoder
// Temporary buffer (avoids repeated allocation).
buf [4]byte
// Next step in the decompression,
// and decompression state.
- step func(*decompressor)
- final bool
- err error
- toRead []byte
- hl, hd *huffmanDecoder
- copyLen int
- copyDist int
+ step func(*decompressor)
+ stepState int
+ final bool
+ err error
+ toRead []byte
+ hl, hd *huffmanDecoder
+ copyLen int
+ copyDist int
}
func (f *decompressor) nextBlock() {
if f.final {
- if f.hw != f.hp {
- f.flush((*decompressor).nextBlock)
+ if f.dict.availRead() > 0 {
+ f.toRead = f.dict.readFlush()
+ f.step = (*decompressor).nextBlock
return
}
f.err = io.EOF
return 0, f.err
}
f.step(f)
+ f.woffset += int64(len(f.toRead))
}
}
// Decode a single Huffman block from f.
// hl and hd are the Huffman states for the lit/length values
-// and the distance values, respectively. If hd == nil, using the
+// and the distance values, respectively. If hd == nil, the fixed
// distance encoding associated with fixed Huffman blocks is used.
func (f *decompressor) huffmanBlock() {
- for {
+ const (
+ stateInit = iota // Zero value must be stateInit
+ stateDict
+ )
+
+ switch f.stepState {
+ case stateInit:
+ goto readLiteral
+ case stateDict:
+ goto copyHistory
+ }
+
+readLiteral:
+ // Read literal and/or (length, distance) according to RFC section 3.2.3.
+ {
v, err := f.huffSym(f.hl)
if err != nil {
f.err = err
var length int
switch {
case v < 256:
- f.hist[f.hp] = byte(v)
- f.hp++
- if f.hp == len(f.hist) {
- // After the flush, continue this loop.
- f.flush((*decompressor).huffmanBlock)
+ f.dict.writeByte(byte(v))
+ if f.dict.availWrite() == 0 {
+ f.toRead = f.dict.readFlush()
+ f.step = (*decompressor).huffmanBlock
+ f.stepState = stateInit
return
}
- continue
+ goto readLiteral
case v == 256:
// Done with huffman block; read next block.
f.step = (*decompressor).nextBlock
return
}
- // Copy history[-dist:-dist+length] into output.
- if dist > len(f.hist) {
- f.err = InternalError("bad history distance")
- return
- }
-
// No check on length; encoding can be prescient.
- if !f.hfull && dist > f.hp {
+ if dist > f.dict.histSize() {
f.err = CorruptInputError(f.roffset)
return
}
f.copyLen, f.copyDist = length, dist
- if f.copyHist() {
- return
- }
+ goto copyHistory
}
-}
-// copyHist copies f.copyLen bytes from f.hist (f.copyDist bytes ago) to itself.
-// It reports whether the f.hist buffer is full.
-func (f *decompressor) copyHist() bool {
- p := f.hp - f.copyDist
- if p < 0 {
- p += len(f.hist)
- }
- for f.copyLen > 0 {
- n := f.copyLen
- if x := len(f.hist) - f.hp; n > x {
- n = x
- }
- if x := len(f.hist) - p; n > x {
- n = x
- }
- forwardCopy(f.hist[:], f.hp, p, n)
- p += n
- f.hp += n
- f.copyLen -= n
- if f.hp == len(f.hist) {
- // After flush continue copying out of history.
- f.flush((*decompressor).copyHuff)
- return true
- }
- if p == len(f.hist) {
- p = 0
+copyHistory:
+ // Perform a backwards copy according to RFC section 3.2.3.
+ {
+ cnt := f.dict.tryWriteCopy(f.copyDist, f.copyLen)
+ if cnt == 0 {
+ cnt = f.dict.writeCopy(f.copyDist, f.copyLen)
}
- }
- return false
-}
+ f.copyLen -= cnt
-func (f *decompressor) copyHuff() {
- if f.copyHist() {
- return
+ if f.dict.availWrite() == 0 || f.copyLen > 0 {
+ f.toRead = f.dict.readFlush()
+ f.step = (*decompressor).huffmanBlock // We need to continue this work
+ f.stepState = stateDict
+ return
+ }
+ goto readLiteral
}
- f.huffmanBlock()
}
// Copy a single uncompressed data block from input to output.
}
if n == 0 {
- // 0-length block means sync
- f.flush((*decompressor).nextBlock)
+ f.toRead = f.dict.readFlush()
+ f.step = (*decompressor).nextBlock
return
}
// copyData copies f.copyLen bytes from the underlying reader into the
// dictionary's history buffer. It pauses for reads when the buffer is full.
func (f *decompressor) copyData() {
- n := f.copyLen
- for n > 0 {
- m := len(f.hist) - f.hp
- if m > n {
- m = n
- }
- m, err := io.ReadFull(f.r, f.hist[f.hp:f.hp+m])
- f.roffset += int64(m)
- if err != nil {
- if err == io.EOF {
- err = io.ErrUnexpectedEOF
- }
- f.err = err
- return
- }
- n -= m
- f.hp += m
- if f.hp == len(f.hist) {
- f.copyLen = n
- f.flush((*decompressor).copyData)
- return
- }
+ buf := f.dict.writeSlice()
+ if len(buf) > f.copyLen {
+ buf = buf[:f.copyLen]
}
- f.step = (*decompressor).nextBlock
-}
-func (f *decompressor) setDict(dict []byte) {
- if len(dict) > len(f.hist) {
- // Will only remember the tail.
- dict = dict[len(dict)-len(f.hist):]
+ cnt, err := io.ReadFull(f.r, buf)
+ f.roffset += int64(cnt)
+ f.copyLen -= cnt
+ f.dict.writeMark(cnt)
+ if err != nil {
+ if err == io.EOF {
+ err = io.ErrUnexpectedEOF
+ }
+ f.err = err
+ return
}
- f.hp = copy(f.hist[:], dict)
- if f.hp == len(f.hist) {
- f.hp = 0
- f.hfull = true
+ if f.dict.availWrite() == 0 || f.copyLen > 0 {
+ f.toRead = f.dict.readFlush()
+ f.step = (*decompressor).copyData
+ return
}
- f.hw = f.hp
+ f.step = (*decompressor).nextBlock
}
func (f *decompressor) moreBits() error {
}
}
-// Flush any buffered output to the underlying writer.
-func (f *decompressor) flush(step func(*decompressor)) {
- f.toRead = f.hist[f.hw:f.hp]
- f.woffset += int64(f.hp - f.hw)
- f.hw = f.hp
- if f.hp == len(f.hist) {
- f.hp = 0
- f.hw = 0
- f.hfull = true
- }
- f.step = step
-}
-
func makeReader(r io.Reader) Reader {
if rr, ok := r.(Reader); ok {
return rr
r: makeReader(r),
bits: f.bits,
codebits: f.codebits,
- hist: f.hist,
+ dict: f.dict,
step: (*decompressor).nextBlock,
}
- if dict != nil {
- f.setDict(dict)
- }
+ f.dict.init(maxHist, nil)
return nil
}
var f decompressor
f.r = makeReader(r)
- f.hist = new([maxHist]byte)
f.bits = new([maxNumLit + maxNumDist]int)
f.codebits = new([numCodes]int)
f.step = (*decompressor).nextBlock
+ f.dict.init(maxHist, nil)
return &f
}
// NewReaderDict is like NewReader but initializes the reader
-// with a preset dictionary. The returned Reader behaves as if
+// with a preset dictionary. The returned Reader behaves as if
// the uncompressed data stream started with the given dictionary,
-// which has already been read. NewReaderDict is typically used
+// which has already been read. NewReaderDict is typically used
// to read data compressed by NewWriterDict.
//
// The ReadCloser returned by NewReader also implements Resetter.
var f decompressor
f.r = makeReader(r)
- f.hist = new([maxHist]byte)
f.bits = new([maxNumLit + maxNumDist]int)
f.codebits = new([numCodes]int)
f.step = (*decompressor).nextBlock
- f.setDict(dict)
+ f.dict.init(maxHist, dict)
return &f
}
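A minimal usage sketch (not part of the patch) of the preset-dictionary API that NewReaderDict documents: the writer and reader must be given the same dictionary for the round trip to succeed.

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"io/ioutil"
	"log"
)

// roundTrip compresses data with a preset dictionary and then
// decompresses it again. NewReaderDict behaves as if the stream
// started with the dictionary, already read.
func roundTrip(dict, data []byte) []byte {
	var buf bytes.Buffer
	zw, err := flate.NewWriterDict(&buf, flate.DefaultCompression, dict)
	if err != nil {
		log.Fatal(err)
	}
	zw.Write(data)
	zw.Close()

	zr := flate.NewReaderDict(&buf, dict)
	defer zr.Close()
	out, err := ioutil.ReadAll(zr)
	if err != nil {
		log.Fatal(err)
	}
	return out
}

func main() {
	out := roundTrip([]byte("hello world"), []byte("hello world, hello world"))
	fmt.Println(string(out))
}
```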
// uncompressed data from a gzip-format compressed file.
//
// In general, a gzip file can be a concatenation of gzip files,
-// each with its own header. Reads from the Reader
+// each with its own header. Reads from the Reader
// return the concatenation of the uncompressed data of each.
// Only the first header is recorded in the Reader fields.
//
// Gzip files store a length and checksum of the uncompressed data.
// The Reader will return an ErrChecksum when Read
// reaches the end of the uncompressed data if it does not
-// have the expected length or checksum. Clients should treat data
+// have the expected length or checksum. Clients should treat data
// returned by Read as tentative until they receive the io.EOF
// marking the end of the data.
type Reader struct {
return
}
- // Yes. Reset and read from it.
+ // Yes. Reset and read from it.
z.digest.Reset()
z.size = 0
return z.Read(p)
// !h.Less(j, i) for 0 <= i < h.Len() and 2*i+1 <= j <= 2*i+2 and j < h.Len()
//
// Note that Push and Pop in this interface are for package heap's
-// implementation to call. To add and remove things from the heap,
+// implementation to call. To add and remove things from the heap,
// use heap.Push and heap.Pop.
type Interface interface {
sort.Interface
// Rotate
func rotw(w uint32) uint32 { return w<<8 | w>>24 }
-// Key expansion algorithm. See FIPS-197, Figure 11.
+// Key expansion algorithm. See FIPS-197, Figure 11.
// Their rcon[i] is our powx[i-1] << 24.
func expandKeyGo(key []byte, enc, dec []uint32) {
// Encryption key setup.
package cipher
// A Block represents an implementation of block cipher
-// using a given key. It provides the capability to encrypt
-// or decrypt individual blocks. The mode implementations
+// using a given key. It provides the capability to encrypt
+// or decrypt individual blocks. The mode implementations
// extend that capability to streams of blocks.
type Block interface {
// BlockSize returns the cipher's block size.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// GenerateParameters puts a random, valid set of DSA parameters into params.
// This function can take many seconds, even on fast machines.
-func GenerateParameters(params *Parameters, rand io.Reader, sizes ParameterSizes) (err error) {
+func GenerateParameters(params *Parameters, rand io.Reader, sizes ParameterSizes) error {
// This function doesn't follow FIPS 186-3 exactly in that it doesn't
// use a verification seed to generate the primes. The verification
// seed doesn't appear to be exported or used by other code and
GeneratePrimes:
for {
- _, err = io.ReadFull(rand, qBytes)
- if err != nil {
- return
+ if _, err := io.ReadFull(rand, qBytes); err != nil {
+ return err
}
qBytes[len(qBytes)-1] |= 1
}
for i := 0; i < 4*L; i++ {
- _, err = io.ReadFull(rand, pBytes)
- if err != nil {
- return
+ if _, err := io.ReadFull(rand, pBytes); err != nil {
+ return err
}
pBytes[len(pBytes)-1] |= 1
}
params.G = g
- return
+ return nil
}
}
}
// GenerateKey generates a public and private key pair.
-func GenerateKey(c elliptic.Curve, rand io.Reader) (priv *PrivateKey, err error) {
+func GenerateKey(c elliptic.Curve, rand io.Reader) (*PrivateKey, error) {
k, err := randFieldElement(c, rand)
if err != nil {
- return
+ return nil, err
}
- priv = new(PrivateKey)
+ priv := new(PrivateKey)
priv.PublicKey.Curve = c
priv.D = k
priv.PublicKey.X, priv.PublicKey.Y = c.ScalarBaseMult(k.Bytes())
- return
+ return priv, nil
}
// hashToInt converts a hash value to an integer. There is some disagreement
}
if r0.Cmp(r1) == 0 {
- t.Errorf("%s: the nonce used for two diferent messages was the same", tag)
+ t.Errorf("%s: the nonce used for two different messages was the same", tag)
}
}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
func (d *digest) checkSum() [Size]byte {
- // Padding. Add a 1 bit and 0 bits until 56 bytes mod 64.
+ // Padding. Add a 1 bit and 0 bits until 56 bytes mod 64.
len := d.len
var tmp [64]byte
tmp[0] = 0x80
BEQ aligned // aligned detected - skip copy
// Copy the unaligned source data into the aligned temporary buffer
- // memove(to=4(R13), from=8(R13), n=12(R13)) - Corrupts all registers
+ // memmove(to=4(R13), from=8(R13), n=12(R13)) - Corrupts all registers
MOVW $buf, Rtable // to
MOVW $64, Rc0 // n
MOVM.IB [Rtable,Rdata,Rc0], (R13)
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Reader is a global, shared instance of a cryptographically
// strong pseudo-random generator.
//
-// On Unix-like systems, Reader reads from /dev/urandom.
// On Linux, Reader uses getrandom(2) if available, /dev/urandom otherwise.
+// On OpenBSD, Reader uses getentropy(2).
+// On other Unix-like systems, Reader reads from /dev/urandom.
// On Windows systems, Reader uses the CryptGenRandom API.
var Reader io.Reader
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package rand
+
+import (
+ "internal/syscall/unix"
+)
+
+func init() {
+ altGetRandom = getRandomOpenBSD
+}
+
+func getRandomOpenBSD(p []byte) (ok bool) {
+ // getentropy(2) returns a maximum of 256 bytes per call
+ for i := 0; i < len(p); i += 256 {
+ end := i + 256
+ if len(p) < end {
+ end = len(p)
+ }
+ err := unix.GetEntropy(p[i:end])
+ if err != nil {
+ return false
+ }
+ }
+ return true
+}
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// systems without a reliable /dev/urandom.
// newReader returns a new pseudorandom generator that
-// seeds itself by reading from entropy. If entropy == nil,
+// seeds itself by reading from entropy. If entropy == nil,
// the generator seeds itself by reading from the system's
// random number generator, typically /dev/random.
// The Read method on the returned reader always returns
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
p.SetBytes(bytes)
- // Calculate the value mod the product of smallPrimes. If it's
+ // Calculate the value mod the product of smallPrimes. If it's
// a multiple of any of these primes we add two until it isn't.
// The probability of overflowing is minimal and can be ignored
// because we still perform Miller-Rabin tests on the result.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
return "crypto/rc4: invalid key size " + strconv.Itoa(int(k))
}
-// NewCipher creates and returns a new Cipher. The key argument should be the
+// NewCipher creates and returns a new Cipher. The key argument should be the
// RC4 key, at least 1 byte and at most 256 bytes.
func NewCipher(key []byte) (*Cipher, error) {
k := len(key)
}
// xorKeyStreamGeneric sets dst to the result of XORing src with the
-// key stream. Dst and src may be the same slice but otherwise should
+// key stream. Dst and src may be the same slice but otherwise should
// not overlap.
//
// This is the pure Go version. rc4_{amd64,386,arm}* contain assembly
SessionKeyLen int
}
-// EncryptPKCS1v15 encrypts the given message with RSA and the padding scheme from PKCS#1 v1.5.
-// The message must be no longer than the length of the public modulus minus 11 bytes.
+// EncryptPKCS1v15 encrypts the given message with RSA and the padding
+// scheme from PKCS#1 v1.5. The message must be no longer than the
+// length of the public modulus minus 11 bytes.
//
-// The rand parameter is used as a source of entropy to ensure that encrypting
-// the same message twice doesn't result in the same ciphertext.
+// The rand parameter is used as a source of entropy to ensure that
+// encrypting the same message twice doesn't result in the same
+// ciphertext.
//
-// WARNING: use of this function to encrypt plaintexts other than session keys
-// is dangerous. Use RSA OAEP in new protocols.
-func EncryptPKCS1v15(rand io.Reader, pub *PublicKey, msg []byte) (out []byte, err error) {
+// WARNING: use of this function to encrypt plaintexts other than
+// session keys is dangerous. Use RSA OAEP in new protocols.
+func EncryptPKCS1v15(rand io.Reader, pub *PublicKey, msg []byte) ([]byte, error) {
if err := checkPub(pub); err != nil {
return nil, err
}
k := (pub.N.BitLen() + 7) / 8
if len(msg) > k-11 {
- err = ErrMessageTooLong
- return
+ return nil, ErrMessageTooLong
}
// EM = 0x00 || 0x02 || PS || 0x00 || M
em := make([]byte, k)
em[1] = 2
ps, mm := em[2:len(em)-len(msg)-1], em[len(em)-len(msg):]
- err = nonZeroRandomBytes(ps, rand)
+ err := nonZeroRandomBytes(ps, rand)
if err != nil {
- return
+ return nil, err
}
em[len(em)-len(msg)-1] = 0
copy(mm, msg)
c := encrypt(new(big.Int), pub, m)
copyWithLeftPad(em, c.Bytes())
- out = em
- return
+ return em, nil
}
// DecryptPKCS1v15 decrypts a plaintext using RSA and the padding scheme from PKCS#1 v1.5.
// learn whether each instance returned an error then they can decrypt and
// forge signatures as if they had the private key. See
// DecryptPKCS1v15SessionKey for a way of solving this problem.
-func DecryptPKCS1v15(rand io.Reader, priv *PrivateKey, ciphertext []byte) (out []byte, err error) {
+func DecryptPKCS1v15(rand io.Reader, priv *PrivateKey, ciphertext []byte) ([]byte, error) {
if err := checkPub(&priv.PublicKey); err != nil {
return nil, err
}
valid, out, index, err := decryptPKCS1v15(rand, priv, ciphertext)
if err != nil {
- return
+ return nil, err
}
if valid == 0 {
return nil, ErrDecryption
}
- out = out[index:]
- return
+ return out[index:], nil
}
// DecryptPKCS1v15SessionKey decrypts a session key using RSA and the padding scheme from PKCS#1 v1.5.
// a random value was used (because it'll be different for the same ciphertext)
// and thus whether the padding was correct. This defeats the point of this
// function. Using at least a 16-byte key will protect against this attack.
-func DecryptPKCS1v15SessionKey(rand io.Reader, priv *PrivateKey, ciphertext []byte, key []byte) (err error) {
+func DecryptPKCS1v15SessionKey(rand io.Reader, priv *PrivateKey, ciphertext []byte, key []byte) error {
if err := checkPub(&priv.PublicKey); err != nil {
return err
}
valid, em, index, err := decryptPKCS1v15(rand, priv, ciphertext)
if err != nil {
- return
+ return err
}
if len(em) != k {
valid &= subtle.ConstantTimeEq(int32(len(em)-index), int32(len(key)))
subtle.ConstantTimeCopy(valid, key, em[len(em)-len(key):])
- return
+ return nil
}
// decryptPKCS1v15 decrypts ciphertext using priv and blinds the operation if
crypto.RIPEMD160: {0x30, 0x20, 0x30, 0x08, 0x06, 0x06, 0x28, 0xcf, 0x06, 0x03, 0x00, 0x31, 0x04, 0x14},
}
-// SignPKCS1v15 calculates the signature of hashed using RSASSA-PKCS1-V1_5-SIGN from RSA PKCS#1 v1.5.
-// Note that hashed must be the result of hashing the input message using the
-// given hash function. If hash is zero, hashed is signed directly. This isn't
+// SignPKCS1v15 calculates the signature of hashed using
+// RSASSA-PKCS1-V1_5-SIGN from RSA PKCS#1 v1.5. Note that hashed must
+// be the result of hashing the input message using the given hash
+// function. If hash is zero, hashed is signed directly. This isn't
// advisable except for interoperability.
//
-// If rand is not nil then RSA blinding will be used to avoid timing side-channel attacks.
+// If rand is not nil then RSA blinding will be used to avoid timing
+// side-channel attacks.
//
-// This function is deterministic. Thus, if the set of possible messages is
-// small, an attacker may be able to build a map from messages to signatures
-// and identify the signed messages. As ever, signatures provide authenticity,
-// not confidentiality.
-func SignPKCS1v15(rand io.Reader, priv *PrivateKey, hash crypto.Hash, hashed []byte) (s []byte, err error) {
+// This function is deterministic. Thus, if the set of possible
+// messages is small, an attacker may be able to build a map from
+// messages to signatures and identify the signed messages. As ever,
+// signatures provide authenticity, not confidentiality.
+func SignPKCS1v15(rand io.Reader, priv *PrivateKey, hash crypto.Hash, hashed []byte) ([]byte, error) {
hashLen, prefix, err := pkcs1v15HashInfo(hash, len(hashed))
if err != nil {
- return
+ return nil, err
}
tLen := len(prefix) + hashLen
m := new(big.Int).SetBytes(em)
c, err := decryptAndCheck(rand, priv, m)
if err != nil {
- return
+ return nil, err
}
copyWithLeftPad(em, c.Bytes())
- s = em
- return
+ return em, nil
}
// VerifyPKCS1v15 verifies an RSA PKCS#1 v1.5 signature.
// function and sig is the signature. A valid signature is indicated by
// returning a nil error. If hash is zero then hashed is used directly. This
// isn't advisable except for interoperability.
-func VerifyPKCS1v15(pub *PublicKey, hash crypto.Hash, hashed []byte, sig []byte) (err error) {
+func VerifyPKCS1v15(pub *PublicKey, hash crypto.Hash, hashed []byte, sig []byte) error {
hashLen, prefix, err := pkcs1v15HashInfo(hash, len(hashed))
if err != nil {
- return
+ return err
}
tLen := len(prefix) + hashLen
k := (pub.N.BitLen() + 7) / 8
if k < tLen+11 {
- err = ErrVerification
- return
+ return ErrVerification
}
c := new(big.Int).SetBytes(sig)
hash.Reset()
// 7. Generate an octet string PS consisting of emLen - sLen - hLen - 2
- // zero octets. The length of PS may be 0.
+ // zero octets. The length of PS may be 0.
//
// 8. Let DB = PS || 0x01 || salt; DB is an octet string of length
// emLen - hLen - 1.
// Note that hashed must be the result of hashing the input message using the
// given hash function. The opts argument may be nil, in which case sensible
// defaults are used.
-func SignPSS(rand io.Reader, priv *PrivateKey, hash crypto.Hash, hashed []byte, opts *PSSOptions) (s []byte, err error) {
+func SignPSS(rand io.Reader, priv *PrivateKey, hash crypto.Hash, hashed []byte, opts *PSSOptions) ([]byte, error) {
saltLength := opts.saltLength()
switch saltLength {
case PSSSaltLengthAuto:
}
salt := make([]byte, saltLength)
- if _, err = io.ReadFull(rand, salt); err != nil {
- return
+ if _, err := io.ReadFull(rand, salt); err != nil {
+ return nil, err
}
return signPSSWithSalt(rand, priv, hash, hashed, salt)
}
// possible.
//
// Two sets of interfaces are included in this package. When a more abstract
-// interface isn't neccessary, there are functions for encrypting/decrypting
+// interface isn't necessary, there are functions for encrypting/decrypting
// with v1.5/OAEP and signing/verifying with v1.5/PSS. If one needs to abstract
// over the public-key primitive, the PrivateKey struct implements the
// Decrypter and Signer interfaces from the crypto package.
// GenerateKey generates an RSA keypair of the given bit size using the
// random source random (for example, crypto/rand.Reader).
-func GenerateKey(random io.Reader, bits int) (priv *PrivateKey, err error) {
+func GenerateKey(random io.Reader, bits int) (*PrivateKey, error) {
return GenerateMultiPrimeKey(random, 2, bits)
}
//
// [1] US patent 4405829 (1972, expired)
// [2] http://www.cacr.math.uwaterloo.ca/techreports/2006/cacr2006-16.pdf
-func GenerateMultiPrimeKey(random io.Reader, nprimes int, bits int) (priv *PrivateKey, err error) {
- priv = new(PrivateKey)
+func GenerateMultiPrimeKey(random io.Reader, nprimes int, bits int) (*PrivateKey, error) {
+ priv := new(PrivateKey)
priv.E = 65537
if nprimes < 2 {
todo += (nprimes - 2) / 5
}
for i := 0; i < nprimes; i++ {
+ var err error
primes[i], err = rand.Prime(random, todo/(nprimes-i))
if err != nil {
return nil, err
}
priv.Precompute()
- return
+ return priv, nil
}
// incCounter increments a four byte, big-endian counter.
//
// The message must be no longer than the length of the public modulus less
// twice the hash length plus 2.
-func EncryptOAEP(hash hash.Hash, random io.Reader, pub *PublicKey, msg []byte, label []byte) (out []byte, err error) {
+func EncryptOAEP(hash hash.Hash, random io.Reader, pub *PublicKey, msg []byte, label []byte) ([]byte, error) {
if err := checkPub(pub); err != nil {
return nil, err
}
hash.Reset()
k := (pub.N.BitLen() + 7) / 8
if len(msg) > k-2*hash.Size()-2 {
- err = ErrMessageTooLong
- return
+ return nil, ErrMessageTooLong
}
hash.Write(label)
db[len(db)-len(msg)-1] = 1
copy(db[len(db)-len(msg):], msg)
- _, err = io.ReadFull(random, seed)
+ _, err := io.ReadFull(random, seed)
if err != nil {
- return
+ return nil, err
}
mgf1XOR(db, hash, seed)
m := new(big.Int)
m.SetBytes(em)
c := encrypt(new(big.Int), pub, m)
- out = c.Bytes()
+ out := c.Bytes()
if len(out) < k {
// If the output is too small, we need to left-pad with zeros.
out = t
}
- return
+ return out, nil
}
// ErrDecryption represents a failure to decrypt a message.
//
// The label parameter must match the value given when encrypting. See
// EncryptOAEP for details.
-func DecryptOAEP(hash hash.Hash, random io.Reader, priv *PrivateKey, ciphertext []byte, label []byte) (msg []byte, err error) {
+func DecryptOAEP(hash hash.Hash, random io.Reader, priv *PrivateKey, ciphertext []byte, label []byte) ([]byte, error) {
if err := checkPub(&priv.PublicKey); err != nil {
return nil, err
}
k := (priv.N.BitLen() + 7) / 8
if len(ciphertext) > k ||
k < hash.Size()*2+2 {
- err = ErrDecryption
- return
+ return nil, ErrDecryption
}
c := new(big.Int).SetBytes(ciphertext)
m, err := decrypt(random, priv, c)
if err != nil {
- return
+ return nil, err
}
hash.Write(label)
}
if firstByteIsZero&lHash2Good&^invalid&^lookingForIndex != 1 {
- err = ErrDecryption
- return
+ return nil, ErrDecryption
}
- msg = rest[index+1:]
- return
+ return rest[index+1:], nil
}
// leftPad returns a new slice of length size. The contents of input are right
func (d *digest) checkSum() [Size]byte {
len := d.len
- // Padding. Add a 1 bit and 0 bits until 56 bytes mod 64.
+ // Padding. Add a 1 bit and 0 bits until 56 bytes mod 64.
var tmp [64]byte
tmp[0] = 0x80
if len%64 < 56 {
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// SHA1 hash algorithm. See RFC 3174.
+// SHA1 hash algorithm. See RFC 3174.
package sha1
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func (d *digest) checkSum() [Size]byte {
len := d.len
- // Padding. Add a 1 bit and 0 bits until 56 bytes mod 64.
+ // Padding. Add a 1 bit and 0 bits until 56 bytes mod 64.
var tmp [64]byte
tmp[0] = 0x80
if len%64 < 56 {
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// SHA256 hash algorithm. See FIPS 180-2.
+// SHA256 hash algorithm. See FIPS 180-2.
package sha256
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
func (d *digest) checkSum() [Size]byte {
- // Padding. Add a 1 bit and 0 bits until 112 bytes mod 128.
+ // Padding. Add a 1 bit and 0 bits until 112 bytes mod 128.
len := d.len
var tmp [128]byte
tmp[0] = 0x80
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// SHA512 hash algorithm. See FIPS 180-4.
+// SHA512 hash algorithm. See FIPS 180-4.
package sha512
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
var cipherSuites = []*cipherSuite{
// Ciphersuite order is chosen so that ECDHE comes before plain RSA
- // and RC4 comes before AES (because of the Lucky13 attack).
+ // and RC4 comes before AES-CBC (because of the Lucky13 attack).
{TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, 16, 0, 4, ecdheRSAKA, suiteECDHE | suiteTLS12, nil, nil, aeadAESGCM},
{TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, 16, 0, 4, ecdheECDSAKA, suiteECDHE | suiteECDSA | suiteTLS12, nil, nil, aeadAESGCM},
{TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, 32, 0, 4, ecdheRSAKA, suiteECDHE | suiteTLS12 | suiteSHA384, nil, nil, aeadAESGCM},
}
// VerifyHostname checks that the peer certificate chain is valid for
-// connecting to host. If so, it returns nil; if not, it returns an error
+// connecting to host. If so, it returns nil; if not, it returns an error
// describing the problem.
func (c *Conn) VerifyHostname(host string) error {
c.handshakeMutex.Lock()
"io"
"net"
"strconv"
+ "strings"
)
type clientHandshakeState struct {
return errors.New("tls: NextProtos values too large")
}
- sni := c.config.ServerName
- // IP address literals are not permitted as SNI values. See
- // https://tools.ietf.org/html/rfc6066#section-3.
- if net.ParseIP(sni) != nil {
- sni = ""
- }
-
hello := &clientHelloMsg{
vers: c.config.maxVersion(),
compressionMethods: []uint8{compressionNone},
random: make([]byte, 32),
ocspStapling: true,
scts: true,
- serverName: sni,
+ serverName: hostnameInSNI(c.config.ServerName),
supportedCurves: c.config.curvePreferences(),
supportedPoints: []uint8{pointFormatUncompressed},
nextProtoNeg: len(c.config.NextProtos) > 0,
return protos[0], true
}
+
+// hostnameInSNI converts name into an appropriate hostname for SNI.
+// Literal IP addresses and absolute FQDNs are not permitted as SNI values.
+// See https://tools.ietf.org/html/rfc6066#section-3.
+func hostnameInSNI(name string) string {
+ host := name
+ if len(host) > 0 && host[0] == '[' && host[len(host)-1] == ']' {
+ host = host[1 : len(host)-1]
+ }
+ if i := strings.LastIndex(host, "%"); i > 0 {
+ host = host[:i]
+ }
+ if net.ParseIP(host) != nil {
+ return ""
+ }
+ if len(name) > 0 && name[len(name)-1] == '.' {
+ name = name[:len(name)-1]
+ }
+ return name
+}
t.Fatalf("%s resumed: %v, expected: %v", test, hs.DidResume, didResume)
}
if didResume && (hs.PeerCertificates == nil || hs.VerifiedChains == nil) {
- t.Fatalf("expected non-nil certificates after resumption. Got peerCertificates: %#v, verifedCertificates: %#v", hs.PeerCertificates, hs.VerifiedChains)
+ t.Fatalf("expected non-nil certificates after resumption. Got peerCertificates: %#v, verifiedCertificates: %#v", hs.PeerCertificates, hs.VerifiedChains)
}
}
runClientTestTLS12(t, test)
}
-func TestNoIPAddressesInSNI(t *testing.T) {
- for _, ipLiteral := range []string{"1.2.3.4", "::1"} {
+var hostnameInSNITests = []struct {
+ in, out string
+}{
+ // Opaque string
+ {"", ""},
+ {"localhost", "localhost"},
+ {"foo, bar, baz and qux", "foo, bar, baz and qux"},
+
+ // DNS hostname
+ {"golang.org", "golang.org"},
+ {"golang.org.", "golang.org"},
+
+ // Literal IPv4 address
+ {"1.2.3.4", ""},
+
+ // Literal IPv6 address
+ {"::1", ""},
+ {"::1%lo0", ""}, // with zone identifier
+ {"[::1]", ""}, // as per RFC 5952 we allow the [] style as IPv6 literal
+ {"[::1%lo0]", ""},
+}
+
+func TestHostnameInSNI(t *testing.T) {
+ for _, tt := range hostnameInSNITests {
c, s := net.Pipe()
- go func() {
- client := Client(c, &Config{ServerName: ipLiteral})
- client.Handshake()
- }()
+ go func(host string) {
+ Client(c, &Config{ServerName: host, InsecureSkipVerify: true}).Handshake()
+ }(tt.in)
var header [5]byte
if _, err := io.ReadFull(s, header[:]); err != nil {
if _, err := io.ReadFull(s, record[:]); err != nil {
t.Fatal(err)
}
+
+ c.Close()
s.Close()
- if bytes.Index(record, []byte(ipLiteral)) != -1 {
- t.Errorf("IP literal %q found in ClientHello: %x", ipLiteral, record)
+ var m clientHelloMsg
+ if !m.unmarshal(record) {
+ t.Errorf("unmarshaling ClientHello for %q failed", tt.in)
+ continue
+ }
+ if tt.in != tt.out && m.serverName == tt.in {
+ t.Errorf("prohibited %q found in ClientHello: %x", tt.in, record)
+ }
+ if m.serverName != tt.out {
+ t.Errorf("expected %q not found in ClientHello: %x", tt.out, record)
}
}
}
// If we received a client cert in response to our certificate request message,
// the client will send us a certificateVerifyMsg immediately after the
- // clientKeyExchangeMsg. This message is a digest of all preceding
+ // clientKeyExchangeMsg. This message is a digest of all preceding
// handshake-layer messages that is signed using the private key corresponding
// to the client's certificate. This allows us to verify that the client is in
// possession of the private key of the certificate.
func testClientHelloFailure(t *testing.T, serverConfig *Config, m handshakeMessage, expectedSubStr string) {
// Create in-memory network connection,
- // send message to server. Should return
+ // send message to server. Should return
// expected error.
c, s := net.Pipe()
go func() {
func TestNoSuiteOverlap(t *testing.T) {
clientHello := &clientHelloMsg{
- vers: 0x0301,
+ vers: VersionTLS10,
cipherSuites: []uint16{0xff00},
- compressionMethods: []uint8{0},
+ compressionMethods: []uint8{compressionNone},
}
testClientHelloFailure(t, testConfig, clientHello, "no cipher suite supported by both client and server")
}
func TestNoCompressionOverlap(t *testing.T) {
clientHello := &clientHelloMsg{
- vers: 0x0301,
+ vers: VersionTLS10,
cipherSuites: []uint16{TLS_RSA_WITH_RC4_128_SHA},
compressionMethods: []uint8{0xff},
}
func TestNoRC4ByDefault(t *testing.T) {
clientHello := &clientHelloMsg{
- vers: 0x0301,
+ vers: VersionTLS10,
cipherSuites: []uint16{TLS_RSA_WITH_RC4_128_SHA},
- compressionMethods: []uint8{0},
+ compressionMethods: []uint8{compressionNone},
}
serverConfig := *testConfig
// Reset the enabled cipher suites to nil in order to test the
// Test that, even when both sides support an ECDSA cipher suite, it
// won't be selected if the server's private key doesn't support it.
clientHello := &clientHelloMsg{
- vers: 0x0301,
+ vers: VersionTLS10,
cipherSuites: []uint16{TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA},
- compressionMethods: []uint8{0},
+ compressionMethods: []uint8{compressionNone},
supportedCurves: []CurveID{CurveP256},
supportedPoints: []uint8{pointFormatUncompressed},
}
// Test that, even when both sides support an RSA cipher suite, it
// won't be selected if the server's private key doesn't support it.
clientHello := &clientHelloMsg{
- vers: 0x0301,
+ vers: VersionTLS10,
cipherSuites: []uint16{TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA},
- compressionMethods: []uint8{0},
+ compressionMethods: []uint8{compressionNone},
supportedCurves: []CurveID{CurveP256},
supportedPoints: []uint8{pointFormatUncompressed},
}
server := Server(serverConn, config)
connStateChan := make(chan ConnectionState, 1)
go func() {
- var err error
- if _, err = server.Write([]byte("hello, world\n")); err != nil {
- t.Logf("Error from Server.Write: %s", err)
- }
+ _, err := server.Write([]byte("hello, world\n"))
if len(test.expectHandshakeErrorIncluding) > 0 {
if err == nil {
t.Errorf("Error expected, but no error returned")
} else if s := err.Error(); !strings.Contains(s, test.expectHandshakeErrorIncluding) {
t.Errorf("Error expected containing '%s' but got '%s'", test.expectHandshakeErrorIncluding, s)
}
+ } else {
+ if err != nil {
+ t.Logf("Error from Server.Write: '%s'", err)
+ }
}
server.Close()
serverConn.Close()
}
clientHello := &clientHelloMsg{
- vers: 0x0301,
+ vers: VersionTLS10,
cipherSuites: []uint16{TLS_RSA_WITH_RC4_128_SHA},
- compressionMethods: []uint8{0},
+ compressionMethods: []uint8{compressionNone},
serverName: "test",
}
testClientHelloFailure(t, &serverConfig, clientHello, errMsg)
serverConfig.Certificates = nil
clientHello := &clientHelloMsg{
- vers: 0x0301,
+ vers: VersionTLS10,
cipherSuites: []uint16{TLS_RSA_WITH_RC4_128_SHA},
- compressionMethods: []uint8{0},
+ compressionMethods: []uint8{compressionNone},
}
testClientHelloFailure(t, &serverConfig, clientHello, errMsg)
serverConfig.GetCertificate = nil
clientHello = &clientHelloMsg{
- vers: 0x0301,
+ vers: VersionTLS10,
cipherSuites: []uint16{TLS_RSA_WITH_RC4_128_SHA},
- compressionMethods: []uint8{0},
+ compressionMethods: []uint8{compressionNone},
}
testClientHelloFailure(t, &serverConfig, clientHello, "no certificates")
}
if err != nil {
return nil, err
}
- // We don't check the version number in the premaster secret. For one,
+ // We don't check the version number in the premaster secret. For one,
// by checking it, we would leak information about the validity of the
// encrypted pre-master secret. Secondly, it provides only a small
// benefit against a downgrade attack and some implementations send the
}
// Accept waits for and returns the next incoming TLS connection.
-// The returned connection c is a *tls.Conn.
-func (l *listener) Accept() (c net.Conn, err error) {
- c, err = l.Listener.Accept()
+// The returned connection is of type *Conn.
+func (l *listener) Accept() (net.Conn, error) {
+ c, err := l.Listener.Accept()
if err != nil {
- return
+ return nil, err
}
- c = Server(c, l.config)
- return
+ return Server(c, l.config), nil
}
// NewListener creates a Listener which accepts connections from an inner
func TestConnReadNonzeroAndEOF(t *testing.T) {
// This test is racy: it assumes that after a write to a
// localhost TCP connection, the peer TCP connection can
- // immediately read it. Because it's racy, we skip this test
+ // immediately read it. Because it's racy, we skip this test
// in short mode, and then retry it several times with an
// increasing sleep in between our final write (via srv.Close
// below) and the following read.
package x509_test
import (
+ "crypto/dsa"
+ "crypto/ecdsa"
+ "crypto/rsa"
"crypto/x509"
"encoding/pem"
+ "fmt"
)
func ExampleCertificate_Verify() {
panic("failed to verify certificate: " + err.Error())
}
}
+
+func ExampleParsePKIXPublicKey() {
+ const pubPEM = `
+-----BEGIN PUBLIC KEY-----
+MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAlRuRnThUjU8/prwYxbty
+WPT9pURI3lbsKMiB6Fn/VHOKE13p4D8xgOCADpdRagdT6n4etr9atzDKUSvpMtR3
+CP5noNc97WiNCggBjVWhs7szEe8ugyqF23XwpHQ6uV1LKH50m92MbOWfCtjU9p/x
+qhNpQQ1AZhqNy5Gevap5k8XzRmjSldNAFZMY7Yv3Gi+nyCwGwpVtBUwhuLzgNFK/
+yDtw2WcWmUU7NuC8Q6MWvPebxVtCfVp/iQU6q60yyt6aGOBkhAX0LpKAEhKidixY
+nP9PNVBvxgu3XZ4P36gZV6+ummKdBVnc3NqwBLu5+CcdRdusmHPHd5pHf4/38Z3/
+6qU2a/fPvWzceVTEgZ47QjFMTCTmCwNt29cvi7zZeQzjtwQgn4ipN9NibRH/Ax/q
+TbIzHfrJ1xa2RteWSdFjwtxi9C20HUkjXSeI4YlzQMH0fPX6KCE7aVePTOnB69I/
+a9/q96DiXZajwlpq3wFctrs1oXqBp5DVrCIj8hU2wNgB7LtQ1mCtsYz//heai0K9
+PhE4X6hiE0YmeAZjR0uHl8M/5aW9xCoJ72+12kKpWAa0SFRWLy6FejNYCYpkupVJ
+yecLk/4L1W0l6jQQZnWErXZYe0PNFcmwGXy1Rep83kfBRNKRy5tvocalLlwXLdUk
+AIU+2GKjyT3iMuzZxxFxPFMCAwEAAQ==
+-----END PUBLIC KEY-----`
+
+ block, _ := pem.Decode([]byte(pubPEM))
+ if block == nil {
+ panic("failed to parse PEM block containing the public key")
+ }
+
+ pub, err := x509.ParsePKIXPublicKey(block.Bytes)
+ if err != nil {
+ panic("failed to parse DER encoded public key: " + err.Error())
+ }
+
+ switch pub := pub.(type) {
+ case *rsa.PublicKey:
+ fmt.Println("pub is of type RSA:", pub)
+ case *dsa.PublicKey:
+ fmt.Println("pub is of type DSA:", pub)
+ case *ecdsa.PublicKey:
+ fmt.Println("pub is of type ECDSA:", pub)
+ default:
+ panic("unknown type of public key")
+ }
+}
}
// rfc1423Algos holds a slice of the possible ways to encrypt a PEM
-// block. The ivSize numbers were taken from the OpenSSL source.
+// block. The ivSize numbers were taken from the OpenSSL source.
var rfc1423Algos = []rfc1423Algo{{
cipher: PEMCipherDES,
name: "DES-CBC",
}
// ParsePKCS1PrivateKey returns an RSA private key from its ASN.1 PKCS#1 DER encoded form.
-func ParsePKCS1PrivateKey(der []byte) (key *rsa.PrivateKey, err error) {
+func ParsePKCS1PrivateKey(der []byte) (*rsa.PrivateKey, error) {
var priv pkcs1PrivateKey
rest, err := asn1.Unmarshal(der, &priv)
if len(rest) > 0 {
- err = asn1.SyntaxError{Msg: "trailing data"}
- return
+ return nil, asn1.SyntaxError{Msg: "trailing data"}
}
if err != nil {
- return
+ return nil, err
}
if priv.Version > 1 {
return nil, errors.New("x509: private key contains zero or negative value")
}
- key = new(rsa.PrivateKey)
+ key := new(rsa.PrivateKey)
key.PublicKey = rsa.PublicKey{
E: priv.E,
N: priv.N,
}
key.Precompute()
- return
+ return key, nil
}
// MarshalPKCS1PrivateKey converts a private key to ASN.1 DER encoded form.
}
if have < want {
- t.Errorf("insufficent overlap between cgo and non-cgo roots; want at least %d, have %d", want, have)
+ t.Errorf("insufficient overlap between cgo and non-cgo roots; want at least %d, have %d", want, have)
}
}
}
// CertGetCertificateChain will traverse Windows's root stores
- // in an attempt to build a verified certificate chain. Once
+ // in an attempt to build a verified certificate chain. Once
// it has found a verified chain, it stops. MSDN docs on
// CERT_CHAIN_CONTEXT:
//
}
// ParseECPrivateKey parses an ASN.1 Elliptic Curve Private Key Structure.
-func ParseECPrivateKey(der []byte) (key *ecdsa.PrivateKey, err error) {
+func ParseECPrivateKey(der []byte) (*ecdsa.PrivateKey, error) {
return parseECPrivateKey(nil, der)
}
}
privateKeyBytes := key.D.Bytes()
- paddedPrivateKey := make([]byte, (key.Curve.Params().N.BitLen() + 7) / 8)
- copy(paddedPrivateKey[len(paddedPrivateKey) - len(privateKeyBytes):], privateKeyBytes)
+ paddedPrivateKey := make([]byte, (key.Curve.Params().N.BitLen()+7)/8)
+ copy(paddedPrivateKey[len(paddedPrivateKey)-len(privateKeyBytes):], privateKeyBytes)
return asn1.Marshal(ecPrivateKey{
Version: 1,
priv.Curve = curve
priv.D = k
- privateKey := make([]byte, (curveOrder.BitLen() + 7) / 8)
+ privateKey := make([]byte, (curveOrder.BitLen()+7)/8)
// Some private keys have leading zero padding. This is invalid
// according to [SEC1], but this code will ignore it.
// Some private keys remove all leading zeros, this is also invalid
// according to [SEC1] but since OpenSSL used to do this, we ignore
// this too.
- copy(privateKey[len(privateKey) - len(privKey.PrivateKey):], privKey.PrivateKey)
+ copy(privateKey[len(privateKey)-len(privKey.PrivateKey):], privKey.PrivateKey)
priv.X, priv.Y = curve.ScalarBaseMult(privateKey)
return priv, nil
"testing"
)
-var ecKeyTests = []struct{
- derHex string
+var ecKeyTests = []struct {
+ derHex string
shouldReserialize bool
}{
// Generated using:
// being valid for encryption only, but no-one noticed. Another
// European CA marked its signature keys as not being valid for
// signatures. A different CA marked its own trusted root certificate
- // as being invalid for certificate signing. Another national CA
+ // as being invalid for certificate signing. Another national CA
// distributed a certificate to be used to encrypt data for the
// country’s tax authority that was marked as only being usable for
// digital signatures but not for encryption. Yet another CA reversed
// ParsePKIXPublicKey parses a DER encoded public key. These values are
// typically found in PEM blocks with "BEGIN PUBLIC KEY".
+//
+// Supported key types include RSA, DSA, and ECDSA. Unknown key
+// types result in an error.
+//
+// On success, pub will be of type *rsa.PublicKey, *dsa.PublicKey,
+// or *ecdsa.PublicKey.
func ParsePKIXPublicKey(derBytes []byte) (pub interface{}, err error) {
var pki publicKeyInfo
if rest, err := asn1.Unmarshal(derBytes, &pki); err != nil {
// CheckSignatureFrom verifies that the signature on c is a valid signature
// from parent.
-func (c *Certificate) CheckSignatureFrom(parent *Certificate) (err error) {
+func (c *Certificate) CheckSignatureFrom(parent *Certificate) error {
// RFC 5280, 4.2.1.9:
// "If the basic constraints extension is not present in a version 3
// certificate, or the extension is present but the cA boolean is not
// CheckSignature verifies that signature is a valid signature over signed from
// c's public key.
-func (c *Certificate) CheckSignature(algo SignatureAlgorithm, signed, signature []byte) (err error) {
+func (c *Certificate) CheckSignature(algo SignatureAlgorithm, signed, signature []byte) error {
return checkSignature(algo, signed, signature, c.PublicKey)
}
}
// CheckCRLSignature checks that the signature in crl is from c.
-func (c *Certificate) CheckCRLSignature(crl *pkix.CertificateList) (err error) {
+func (c *Certificate) CheckCRLSignature(crl *pkix.CertificateList) error {
algo := getSignatureAlgorithmFromOID(crl.SignatureAlgorithm.Algorithm)
return c.CheckSignature(algo, crl.TBSCertList.Raw, crl.SignatureValue.RightAlign())
}
// encoded CRLs will appear where they should be DER encoded, so this function
// will transparently handle PEM encoding as long as there isn't any leading
// garbage.
-func ParseCRL(crlBytes []byte) (certList *pkix.CertificateList, err error) {
+func ParseCRL(crlBytes []byte) (*pkix.CertificateList, error) {
if bytes.HasPrefix(crlBytes, pemCRLPrefix) {
block, _ := pem.Decode(crlBytes)
if block != nil && block.Type == pemType {
}
// ParseDERCRL parses a DER encoded CRL from the given bytes.
-func ParseDERCRL(derBytes []byte) (certList *pkix.CertificateList, err error) {
- certList = new(pkix.CertificateList)
+func ParseDERCRL(derBytes []byte) (*pkix.CertificateList, error) {
+ certList := new(pkix.CertificateList)
if rest, err := asn1.Unmarshal(derBytes, certList); err != nil {
return nil, err
} else if len(rest) != 0 {
return out, nil
}
-// CheckSignature verifies that the signature on c is a valid signature
-func (c *CertificateRequest) CheckSignature() (err error) {
+// CheckSignature reports whether the signature on c is valid.
+func (c *CertificateRequest) CheckSignature() error {
return checkSignature(c.SignatureAlgorithm, c.RawTBSCertificateRequest, c.Signature, c.PublicKey)
}
// Let the Stmt convert its own arguments.
for n, arg := range args {
// First, see if the value itself knows how to convert
- // itself to a driver type. For example, a NullString
+ // itself to a driver type. For example, a NullString
// struct changing into a string or nil.
if svi, ok := arg.(driver.Valuer); ok {
sv, err := svi.Value()
// any type to a driver Value.
type ColumnConverter interface {
// ColumnConverter returns a ValueConverter for the provided
- // column index. If the type of a specific column isn't known
+ // column index. If the type of a specific column isn't known
// or shouldn't be handled specially, DefaultValueConverter
// can be returned.
ColumnConverter(idx int) ValueConverter
type Rows interface {
// Columns returns the names of the columns. The number of
// columns of the result is inferred from the length of the
- // slice. If a particular column name isn't known, an empty
+ // slice. If a particular column name isn't known, an empty
// string should be returned for that entry.
Columns() []string
//
// Various implementations of ValueConverter are provided by the
// driver package to provide consistent implementations of conversions
-// between drivers. The ValueConverters have several uses:
+// between drivers. The ValueConverters have several uses:
//
// * converting from the Value types as provided by the sql package
// into a database table's specific column type and making sure it
// named method on fakeStmt to panic.
//
// When opening a fakeDriver's database, it starts empty with no
-// tables. All tables and data are stored in memory only.
+// tables. All tables and data are stored in memory only.
type fakeDriver struct {
mu sync.Mutex // guards 3 following fields
openCount int // conn opens
rows:
for _, trow := range t.rows {
// Process the where clause, skipping non-match rows. This is lazy
- // and just uses fmt.Sprintf("%v") to test equality. Good enough
+ // and just uses fmt.Sprintf("%v") to test equality. Good enough
// for test code.
for widx, wcol := range s.whereCol {
idx := t.columnIndex(wcol)
// time.Time
// nil - for NULL values
//
- // An error should be returned if the value can not be stored
+ // An error should be returned if the value cannot be stored
// without loss of information.
Scan(src interface{}) error
}
return conn, nil
}
- // Out of free connections or we were asked not to use one. If we're not
+ // Out of free connections or we were asked not to use one. If we're not
// allowed to open any more connections, make a request and wait.
if db.maxOpen > 0 && db.numOpen >= db.maxOpen {
// Make the connRequest channel. It's buffered so that the
// ErrTxDone.
done bool
- // All Stmts prepared for this transaction. These will be closed after the
+ // All Stmts prepared for this transaction. These will be closed after the
// transaction has been committed or rolled back.
stmts struct {
sync.Mutex
// necessary. Or, better: keep a map in DB of query string to
// Stmts, and have Stmt.Execute do the right thing and
// re-prepare if the Conn in use doesn't have that prepared
- // statement. But we'll want to avoid caching the statement
+ // statement. But we'll want to avoid caching the statement
// in the case where we only call conn.Prepare implicitly
// (such as in db.Exec or tx.Exec), but the caller package
// can't be holding a reference to the returned statement.
// be used once the transaction has been committed or rolled back.
func (tx *Tx) Stmt(stmt *Stmt) *Stmt {
// TODO(bradfitz): optimize this. Currently this re-prepares
- // each time. This is fine for now to illustrate the API but
+ // each time. This is fine for now to illustrate the API but
// we should really cache already-prepared statements
// per-Conn. See also the big comment in Tx.Prepare.
closed bool
// css is a list of underlying driver statement interfaces
- // that are valid on particular connections. This is only
+ // that are valid on particular connections. This is only
// used if tx == nil and one is found that has idle
- // connections. If tx != nil, txsi is always used.
+ // connections. If tx != nil, txsi is always used.
css []connStmt
// lastNumClosed is copied from db.numClosed when Stmt is created
closeStmt driver.Stmt // if non-nil, statement to Close on close
}
-// Next prepares the next result row for reading with the Scan method. It
+// Next prepares the next result row for reading with the Scan method. It
// returns true on success, or false if there is no next result row or an error
-// happened while preparing it. Err should be consulted to distinguish between
+// happened while preparing it. Err should be consulted to distinguish between
// the two cases.
//
// Every call to Scan, even the first one, must be preceded by a call to Next.
// the Rows in our defer, when we return from this function.
// the contract with the driver.Next(...) interface is that it
// can return slices into read-only temporary memory that's
- // only valid until the next Scan/Close. But the TODO is that
- // for a lot of drivers, this copy will be unnecessary. We
+ // only valid until the next Scan/Close. But the TODO is that
+ // for a lot of drivers, this copy will be unnecessary. We
// should provide an optional interface for drivers to
// implement to say, "don't worry, the []bytes that I return
// from Next will not be modified again." (for instance, if
if err == nil {
// TODO: this test fails, but it's just because
// fakeConn implements the optional Execer interface,
- // so arguably this is the correct behavior. But
+ // so arguably this is the correct behavior. But
// maybe I should flesh out the fakeConn.Exec
// implementation so this properly fails.
// t.Errorf("expected error inserting nil name with Exec")
_, err := db.Query("SELECT|non_existent|name|")
if err == nil {
- t.Fatal("Quering non-existent table should fail")
+ t.Fatal("Querying non-existent table should fail")
}
}
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
err error
}
-// Data format, other than byte order. This affects the handling of
+// Data format, other than byte order. This affects the handling of
// certain field formats.
type dataFormat interface {
- // DWARF version number. Zero means unknown.
+ // DWARF version number. Zero means unknown.
version() int
// 64-bit DWARF format?
dwarf64() (dwarf64 bool, isKnown bool)
- // Size of an address, in bytes. Zero means unknown.
+ // Size of an address, in bytes. Zero means unknown.
addrsize() int
}
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Class Class
}
-// A Class is the DWARF 4 class of an attibute value.
+// A Class is the DWARF 4 class of an attribute value.
//
// In general, a given attribute's value may take on one of several
// possible classes defined by DWARF, each of which leads to a
}
// A Reader allows reading Entry structures from a DWARF ``info'' section.
-// The Entry structures are arranged in a tree. The Reader's Next function
+// The Entry structures are arranged in a tree. The Reader's Next function
// returns successive entries from a pre-order traversal of the tree.
// If an entry has children, its Children field will be true, and the children
// follow, terminated by an Entry with Tag 0.
}
// SkipChildren skips over the child entries associated with
-// the last Entry returned by Next. If that Entry did not have
+// the last Entry returned by Next. If that Entry did not have
// children or Next has not been called, SkipChildren is a no-op.
func (r *Reader) SkipChildren() {
if r.err != nil || !r.lastChildren {
}
}
-// clone returns a copy of the reader. This is used by the typeReader
+// clone returns a copy of the reader. This is used by the typeReader
// interface.
func (r *Reader) clone() typeReader {
return r.d.Reader()
}
-// offset returns the current buffer offset. This is used by the
+// offset returns the current buffer offset. This is used by the
// typeReader interface.
func (r *Reader) offset() Offset {
return r.b.off
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
return d, nil
}
-// AddTypes will add one .debug_types section to the DWARF data. A
+// AddTypes will add one .debug_types section to the DWARF data. A
// typical object with DWARF version 4 debug info will have multiple
-// .debug_types sections. The name is used for error reporting only,
+// .debug_types sections. The name is used for error reporting only,
// and serves to distinguish one .debug_types section from another.
func (d *Data) AddTypes(name string, types []byte) error {
return d.parseTypes(name, types)
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// Get Type referred to by Entry's AttrType field.
- // Set err if error happens. Not having a type is an error.
+ // Set err if error happens. Not having a type is an error.
typeOf := func(e *Entry) Type {
tval := e.Val(AttrType)
var t Type
bito = f.ByteOffset * 8
}
if bito == lastFieldBitOffset && t.Kind != "union" {
- // Last field was zero width. Fix array length.
+ // Last field was zero width. Fix array length.
// (DWARF writes out 0-length arrays as if they were 1-length arrays.)
zeroArray(lastFieldType)
}
if t.Kind != "union" {
b, ok := e.Val(AttrByteSize).(int64)
if ok && b*8 == lastFieldBitOffset {
- // Final field must be zero width. Fix array length.
+ // Final field must be zero width. Fix array length.
zeroArray(lastFieldType)
}
}
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// As Apple converts gcc to a clang-based front end
-// they keep breaking the DWARF output. This map lists the
+// they keep breaking the DWARF output. This map lists the
// conversion from real answer to Apple answer.
var machoBug = map[string]string{
"func(*char, ...) void": "func(*char) void",
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
"strconv"
)
-// Parse the type units stored in a DWARF4 .debug_types section. Each
+// Parse the type units stored in a DWARF4 .debug_types section. Each
// type unit defines a single primary type and an 8-byte signature.
// Other sections may then use formRefSig8 to refer to the type.
-// The typeUnit format is a single type with a signature. It holds
+// The typeUnit format is a single type with a signature. It holds
// the same data as a compilation unit.
type typeUnit struct {
unit
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
Align uint32 /* Alignment in memory and file. */
}
-// ELF32 Dynamic structure. The ".dynamic" section contains an array of them.
+// ELF32 Dynamic structure. The ".dynamic" section contains an array of them.
type Dyn32 struct {
Tag int32 /* Entry type. */
Val uint32 /* Integer/Address value. */
Align uint64 /* Alignment in memory and file. */
}
-// ELF64 Dynamic structure. The ".dynamic" section contains an array of them.
+// ELF64 Dynamic structure. The ".dynamic" section contains an array of them.
type Dyn64 struct {
Tag int64 /* Entry type. */
Val uint64 /* Integer/address value */
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func (t *LineTable) parse(targetPC uint64, targetLine int) (b []byte, pc uint64, line int) {
// The PC/line table can be thought of as a sequence of
// <pc update>* <line update>
- // batches. Each update batch results in a (pc, line) pair,
+ // batches. Each update batch results in a (pc, line) pair,
// where line applies to every PC from pc up to but not
// including the pc of the next pair.
//
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
if err := cmd.Run(); err != nil {
t.Fatal(err)
}
- cmd = exec.Command("go", "tool", "link", "-H", "linux", "-E", "main",
+ cmd = exec.Command("go", "tool", "link", "-H", "linux",
"-o", pclinetestBinary, pclinetestBinary+".o")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
* Symbol tables
*/
-// Table represents a Go symbol table. It stores all of the
+// Table represents a Go symbol table. It stores all of the
// symbols decoded from the program and provides methods to translate
// between symbols, names, and addresses.
type Table struct {
}
// Count text symbols and attach frame sizes, parameters, and
- // locals to them. Also, find object file boundaries.
+ // locals to them. Also, find object file boundaries.
lastf := 0
for i := 0; i < len(t.Syms); i++ {
sym := &t.Syms[i]
}
// LineToPC looks up the first program counter on the given line in
-// the named file. It returns UnknownPathError or UnknownLineError if
+// the named file. It returns UnknownPathError or UnknownLineError if
// there is an error looking up this line.
func (t *Table) LineToPC(file string, line int) (pc uint64, fn *Func, err error) {
obj, ok := t.Files[file]
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// OpenFat opens the named file using os.Open and prepares it for use as a Mach-O
// universal binary.
-func OpenFat(name string) (ff *FatFile, err error) {
+func OpenFat(name string) (*FatFile, error) {
f, err := os.Open(name)
if err != nil {
return nil, err
}
- ff, err = NewFatFile(f)
+ ff, err := NewFatFile(f)
if err != nil {
f.Close()
return nil, err
}
ff.closer = f
- return
+ return ff, nil
}
func (ff *FatFile) Close() error {
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// The encoding handles 4-byte chunks, using a special encoding
// for the last fragment, so Encode is not appropriate for use on
-// individual blocks of a large data stream. Use NewEncoder() instead.
+// individual blocks of a large data stream. Use NewEncoder() instead.
//
// Often, ascii85-encoded data is wrapped in <~ and ~> symbols.
// Encode does not add these.
// MaxEncodedLen returns the maximum length of an encoding of n source bytes.
func MaxEncodedLen(n int) int { return (n + 3) / 4 * 5 }
-// NewEncoder returns a new ascii85 stream encoder. Data written to
+// NewEncoder returns a new ascii85 stream encoder. Data written to
// the returned writer will be encoded and then written to w.
// Ascii85 encodings operate in 32-bit blocks; when finished
// writing, the caller must Close the returned encoder to flush any
}
}
- // Out of input, out of decoded output. Check errors.
+ // Out of input, out of decoded output. Check errors.
if d.err != nil {
return 0, d.err
}
// A forkableWriter is an in-memory buffer that can be
// 'forked' to create new forkableWriters that bracket the
-// original. After
+// original. After
// pre, post := w.fork()
// the overall sequence of bytes represented is logically w+pre+post.
type forkableWriter struct {
*/
// An Encoding is a radix 32 encoding/decoding scheme, defined by a
-// 32-character alphabet. The most common is the "base32" encoding
+// 32-character alphabet. The most common is the "base32" encoding
// introduced for SASL GSSAPI and standardized in RFC 4648.
// The alternate "base32hex" encoding is used in DNSSEC.
type Encoding struct {
//
// The encoding pads the output to a multiple of 8 bytes,
// so Encode is not appropriate for use on individual blocks
-// of a large data stream. Use NewEncoder() instead.
+// of a large data stream. Use NewEncoder() instead.
func (enc *Encoding) Encode(dst, src []byte) {
if len(src) == 0 {
return
return e.err
}
-// NewEncoder returns a new base32 stream encoder. Data written to
+// NewEncoder returns a new base32 stream encoder. Data written to
// the returned writer will be encoded using enc and then written to w.
// Base32 encodings operate in 5-byte blocks; when finished
// writing, the caller must Close the returned encoder to flush any
return n, end, nil
}
-// Decode decodes src using the encoding enc. It writes at most
+// Decode decodes src using the encoding enc. It writes at most
// DecodedLen(len(src)) bytes to dst and returns the number of bytes
-// written. If src contains invalid base32 data, it will return the
+// written. If src contains invalid base32 data, it will return the
// number of bytes successfully written and CorruptInputError.
// New line characters (\r and \n) are ignored.
func (enc *Encoding) Decode(dst, src []byte) (n int, err error) {
_, err := StdEncoding.Decode(dbuf, []byte(tc.input))
if tc.offset == -1 {
if err != nil {
- t.Error("Decoder wrongly detected coruption in", tc.input)
+ t.Error("Decoder wrongly detected corruption in", tc.input)
}
continue
}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
*/
// An Encoding is a radix 64 encoding/decoding scheme, defined by a
-// 64-character alphabet. The most common encoding is the "base64"
+// 64-character alphabet. The most common encoding is the "base64"
// encoding defined in RFC 4648 and used in MIME (RFC 2045) and PEM
// (RFC 1421). RFC 4648 also defines an alternate encoding, which is
// the standard encoding with - and _ substituted for + and /.
//
// The encoding pads the output to a multiple of 4 bytes,
// so Encode is not appropriate for use on individual blocks
-// of a large data stream. Use NewEncoder() instead.
+// of a large data stream. Use NewEncoder() instead.
func (enc *Encoding) Encode(dst, src []byte) {
if len(src) == 0 {
return
return e.err
}
-// NewEncoder returns a new base64 stream encoder. Data written to
+// NewEncoder returns a new base64 stream encoder. Data written to
// the returned writer will be encoded using enc and then written to w.
// Base64 encodings operate in 4-byte blocks; when finished
// writing, the caller must Close the returned encoder to flush any
return n, end, err
}
-// Decode decodes src using the encoding enc. It writes at most
+// Decode decodes src using the encoding enc. It writes at most
// DecodedLen(len(src)) bytes to dst and returns the number of bytes
-// written. If src contains invalid base64 data, it will return the
+// written. If src contains invalid base64 data, it will return the
// number of bytes successfully written and CorruptInputError.
// New line characters (\r and \n) are ignored.
func (enc *Encoding) Decode(dst, src []byte) (n int, err error) {
_, err := StdEncoding.Decode(dbuf, []byte(tc.input))
if tc.offset == -1 {
if err != nil {
- t.Error("Decoder wrongly detected coruption in", tc.input)
+ t.Error("Decoder wrongly detected corruption in", tc.input)
}
continue
}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// An attempt to read into a struct with an unexported field will
-// panic. This is probably not the best choice, but at this point
+// panic. This is probably not the best choice, but at this point
// anything else would be an API change.
type Unexported struct {
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// Carriage returns before newline characters are silently removed.
//
-// Blank lines are ignored. A line with only whitespace characters (excluding
+// Blank lines are ignored. A line with only whitespace characters (excluding
// the ending newline character) is not considered a blank line.
//
// Fields which start and stop with the quote character " are called
-// quoted-fields. The beginning and ending quote are not part of the
+// quoted-fields. The beginning and ending quote are not part of the
// field.
//
// The source:
// The exported fields can be changed to customize the details before the
// first call to Read or ReadAll.
//
-// Comma is the field delimiter. It defaults to ','.
+// Comma is the field delimiter. It defaults to ','.
//
// Comment, if not 0, is the comment character. Lines beginning with the
// Comment character are ignored.
//
// If FieldsPerRecord is positive, Read requires each record to
-// have the given number of fields. If FieldsPerRecord is 0, Read sets it to
+// have the given number of fields. If FieldsPerRecord is 0, Read sets it to
// the number of fields in the first record, so that future records must
-// have the same field count. If FieldsPerRecord is negative, no check is
+// have the same field count. If FieldsPerRecord is negative, no check is
// made and records may have a variable number of fields.
//
// If LazyQuotes is true, a quote may appear in an unquoted field and a
// non-doubled quote may appear in a quoted field.
//
// If TrimLeadingSpace is true, leading white space in a field is ignored.
+// If the field delimiter is white space, TrimLeadingSpace will trim the
+// delimiter.
type Reader struct {
Comma rune // field delimiter (set to ',' by NewReader)
Comment rune // comment character for start of line
}
}
-// Read reads one record from r. The record is a slice of strings with each
+// Read reads one record from r. The record is a slice of strings with each
// string representing one field.
func (r *Reader) Read() (record []string, err error) {
for {
func (r *Reader) readRune() (rune, error) {
r1, _, err := r.r.ReadRune()
- // Handle \r\n here. We make the simplifying assumption that
+ // Handle \r\n here. We make the simplifying assumption that
	// anytime \r is followed by \n, it can be folded to \n.
// We will not detect files which contain both \r\n and bare \n.
if r1 == '\r' {
// parseRecord reads and parses a single csv record from r.
func (r *Reader) parseRecord() (fields []string, err error) {
- // Each record starts on a new line. We increment our line
+ // Each record starts on a new line. We increment our line
// number (lines start at 1, not 0) and set column to -1
// so as we increment in readRune it points to the character we read.
r.line++
r.column = -1
- // Peek at the first rune. If it is an error we are done.
+ // Peek at the first rune. If it is an error we are done.
// If we support comments and it is the comment character
// then skip to the end of line.
}
}
-// parseField parses the next field in the record. The read field is
-// located in r.field. Delim is the first character not part of the field
+// parseField parses the next field in the record. The read field is
+// located in r.field. Delim is the first character not part of the field
// (r.Comma or '\n').
func (r *Reader) parseField() (haveField bool, delim rune, err error) {
r.field.Reset()
// A Writer writes records to a CSV encoded file.
//
// As returned by NewWriter, a Writer writes records terminated by a
-// newline and uses ',' as the field delimiter. The exported fields can be
+// newline and uses ',' as the field delimiter. The exported fields can be
// changed to customize the details before the first call to Write or WriteAll.
//
// Comma is the field delimiter.
// Write writes a single CSV record to w along with any necessary quoting.
// A record is a slice of strings with each string being one field.
-func (w *Writer) Write(record []string) (err error) {
+func (w *Writer) Write(record []string) error {
for n, field := range record {
if n > 0 {
- if _, err = w.w.WriteRune(w.Comma); err != nil {
- return
+ if _, err := w.w.WriteRune(w.Comma); err != nil {
+ return err
}
}
// If we don't have to have a quoted field then just
// write out the field and continue to the next field.
if !w.fieldNeedsQuotes(field) {
- if _, err = w.w.WriteString(field); err != nil {
- return
+ if _, err := w.w.WriteString(field); err != nil {
+ return err
}
continue
}
- if err = w.w.WriteByte('"'); err != nil {
- return
+ if err := w.w.WriteByte('"'); err != nil {
+ return err
}
for _, r1 := range field {
+ var err error
switch r1 {
case '"':
_, err = w.w.WriteString(`""`)
_, err = w.w.WriteRune(r1)
}
if err != nil {
- return
+ return err
}
}
- if err = w.w.WriteByte('"'); err != nil {
- return
+ if err := w.w.WriteByte('"'); err != nil {
+ return err
}
}
+ var err error
if w.UseCRLF {
_, err = w.w.WriteString("\r\n")
} else {
err = w.w.WriteByte('\n')
}
- return
+ return err
}
// Flush writes any buffered data to the underlying io.Writer.
}
// WriteAll writes multiple CSV records to w using Write and then calls Flush.
-func (w *Writer) WriteAll(records [][]string) (err error) {
+func (w *Writer) WriteAll(records [][]string) error {
for _, record := range records {
- err = w.Write(record)
+ err := w.Write(record)
if err != nil {
return err
}
if field == "" {
return false
}
- if field == `\.` || strings.IndexRune(field, w.Comma) >= 0 || strings.IndexAny(field, "\"\r\n") >= 0 {
+ if field == `\.` || strings.ContainsRune(field, w.Comma) || strings.ContainsAny(field, "\"\r\n") {
return true
}
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
err := NewEncoder(b).Encode(&rec)
if err == nil {
t.Error("expected error; got none")
- } else if strings.Index(err.Error(), "recursive") < 0 {
+ } else if !strings.Contains(err.Error(), "recursive") {
t.Error("expected recursive type error; got", err)
}
// Can't test decode easily because we can't encode one, so we can't pass one to a Decoder.
package gob
-// This file is not normally included in the gob package. Used only for debugging the package itself.
+// This file is not normally included in the gob package. Used only for debugging the package itself.
// Except for reading uints, it is an implementation of a reader that is independent of
// the one implemented by Decoder.
// To enable the Debug function, delete the +build ignore line above and do
// loadBlock preps us to read a message
// of the length specified next in the input. It returns
// the length of the block. The argument tells whether
-// an EOF is acceptable now. If it is and one is found,
+// an EOF is acceptable now. If it is and one is found,
// the return value is negative.
func (deb *debugger) loadBlock(eofOK bool) int {
n64, w, err := decodeUintReader(deb.r, deb.tmp) // deb.uint64 will error at EOF
return string(b)
}
-// delta returns the field delta at the input point. The expect argument,
+// delta returns the field delta at the input point. The expect argument,
// if non-negative, identifies what the value should be.
func (deb *debugger) delta(expect int) int {
delta := int(deb.uint64())
}
// Since the encoder writes no zeros, if we arrive at a decoder we have
-// a value to extract and store. The field number has already been read
+// a value to extract and store. The field number has already been read
// (it's how we knew to call this decoder).
// Each decoder is responsible for handling any indirections associated
-// with the data structure. If any pointer so reached is nil, allocation must
+// with the data structure. If any pointer so reached is nil, allocation must
// be done.
// decAlloc takes a value and returns a settable value that can
}
// Floating-point numbers are transmitted as uint64s holding the bits
-// of the underlying representation. They are sent byte-reversed, with
+// of the underlying representation. They are sent byte-reversed, with
// the exponent end coming out first, so integer floating point numbers
-// (for example) transmit more compactly. This routine does the
+// (for example) transmit more compactly. This routine does the
// unswizzling.
func float64FromBits(u uint64) float64 {
var v uint64
if av < 0 {
av = -av
}
- // +Inf is OK in both 32- and 64-bit floats. Underflow is always OK.
+ // +Inf is OK in both 32- and 64-bit floats. Underflow is always OK.
if math.MaxFloat32 < av && av <= math.MaxFloat64 {
error_(ovfl)
}
// Execution engine
// The encoder engine is an array of instructions indexed by field number of the incoming
-// decoder. It is executed with random access according to field number.
+// decoder. It is executed with random access according to field number.
type decEngine struct {
instr []decInstr
numInstr int // the number of active instructions
}
// decodeStruct decodes a top-level struct and stores it in value.
-// Indir is for the value, not the type. At the time of the call it may
+// Indir is for the value, not the type. At the time of the call it may
// differ from ut.indir, which was computed when the engine was built.
// This state cannot arise for decodeSingle, which is called directly
// from the user's value, not from the innards of an engine.
}
// decodeArray decodes an array and stores it in value.
-// The length is an unsigned integer preceding the elements. Even though the length is redundant
+// The length is an unsigned integer preceding the elements. Even though the length is redundant
// (it's part of the type), it's a useful check and is included in the encoding.
func (dec *Decoder) decodeArray(atyp reflect.Type, state *decoderState, value reflect.Value, elemOp decOp, length int, ovfl error, helper decHelper) {
if n := state.decodeUint(); n != uint64(length) {
return
}
-// compileDec compiles the decoder engine for a value. If the value is not a struct,
+// compileDec compiles the decoder engine for a value. If the value is not a struct,
// it calls out to compileSingle.
func (dec *Decoder) compileDec(remoteId typeId, ut *userTypeInfo) (engine *decEngine, err error) {
defer catchError(&err)
// decodeTypeSequence parses:
// TypeSequence
// (TypeDefinition DelimitedTypeDefinition*)?
-// and returns the type id of the next value. It returns -1 at
+// and returns the type id of the next value. It returns -1 at
// EOF. Upon return, the remainder of dec.buf is the value to be
-// decoded. If this is an interface value, it can be ignored by
+// decoded. If this is an interface value, it can be ignored by
// resetting that buffer.
func (dec *Decoder) decodeTypeSequence(isInterface bool) typeId {
for dec.err == nil {
// Type definition for (-id) follows.
dec.recvType(-id)
// When decoding an interface, after a type there may be a
- // DelimitedValue still in the buffer. Skip its count.
+ // DelimitedValue still in the buffer. Skip its count.
// (Alternatively, the buffer is empty and the byte count
// will be absorbed by recvMessage.)
if dec.buf.Len() > 0 {
}
value := reflect.ValueOf(e)
// If e represents a value as opposed to a pointer, the answer won't
- // get back to the caller. Make sure it's a pointer.
+ // get back to the caller. Make sure it's a pointer.
if value.Type().Kind() != reflect.Ptr {
dec.err = errors.New("gob: attempt to decode into a non-pointer")
return dec.err
// DecodeValue reads the next value from the input stream.
// If v is the zero reflect.Value (v.Kind() == Invalid), DecodeValue discards the value.
-// Otherwise, it stores the value into v. In that case, v must represent
+// Otherwise, it stores the value into v. In that case, v must represent
// a non-nil pointer to data or be an assignable reflect.Value (v.CanSet()).
// If the input is at EOF, DecodeValue returns io.EOF and
// does not modify v.
enc.freeList = e
}
-// Unsigned integers have a two-state encoding. If the number is less
+// Unsigned integers have a two-state encoding. If the number is less
// than 128 (0 through 0x7F), its value is written directly.
// Otherwise the value is written in big-endian byte order preceded
// by the byte length, negated.
// Each encoder for a composite is responsible for handling any
// indirections associated with the elements of the data structure.
-// If any pointer so reached is nil, no bytes are written. If the
-// data item is zero, no bytes are written. Single values - ints,
+// If any pointer so reached is nil, no bytes are written. If the
+// data item is zero, no bytes are written. Single values - ints,
// strings etc. - are indirected before calling their encoders.
// Otherwise, the output (for a scalar) is the field number, as an
// encoded integer, followed by the field data in its appropriate
// floatBits returns a uint64 holding the bits of a floating-point number.
// Floating-point numbers are transmitted as uint64s holding the bits
-// of the underlying representation. They are sent byte-reversed, with
+// of the underlying representation. They are sent byte-reversed, with
// the exponent end coming out first, so integer floating point numbers
-// (for example) transmit more compactly. This routine does the
+// (for example) transmit more compactly. This routine does the
// swizzling.
func floatBits(f float64) uint64 {
u := math.Float64bits(f)
// Execution engine
// encEngine is an array of instructions indexed by field number of the encoding
-// data, typically a struct. It is executed top to bottom, walking the struct.
+// data, typically a struct. It is executed top to bottom, walking the struct.
type encEngine struct {
instr []encInstr
}
defer enc.freeEncoderState(state)
state.fieldnum = singletonField
// There is no surrounding struct to frame the transmission, so we must
- // generate data even if the item is zero. To do this, set sendZero.
+ // generate data even if the item is zero. To do this, set sendZero.
state.sendZero = true
instr := &engine.instr[singletonField]
if instr.indir > 0 {
// encodeInterface encodes the interface value iv.
// To send an interface, we send a string identifying the concrete type, followed
// by the type identifier (which might require defining that type right now), followed
-// by the concrete value. A nil value gets sent as the empty string for the name,
+// by the concrete value. A nil value gets sent as the empty string for the name,
// followed by no value.
func (enc *Encoder) encodeInterface(b *encBuffer, iv reflect.Value) {
// Gobs can encode nil interface values but not typed interface
enc.sendTypeDescriptor(enc.writer(), state, ut)
// Send the type id.
enc.sendTypeId(state, ut)
- // Encode the value into a new buffer. Any nested type definitions
+ // Encode the value into a new buffer. Any nested type definitions
// should be written to b, before the encoded value.
enc.pushWriter(b)
data := encBufferPool.Get().(*encBuffer)
return
}
// If the type info has still not been transmitted, it means we have
- // a singleton basic type (int, []byte etc.) at top level. We don't
+ // a singleton basic type (int, []byte etc.) at top level. We don't
// need to send the type info but we do need to update enc.sent.
if !sent {
info, err := getTypeInfo(ut)
}
t4p := &Type4{3}
var t4 Type4 // note: not a pointer.
- if err := encAndDec(t4p, t4); err == nil || strings.Index(err.Error(), "pointer") < 0 {
+ if err := encAndDec(t4p, t4); err == nil || !strings.Contains(err.Error(), "pointer") {
t.Error("expected error about pointer; got", err)
}
}
t.Errorf("expected error decoding %v: %s", test.in, test.err)
continue
case err != nil && test.err != "":
- if strings.Index(err.Error(), test.err) < 0 {
+ if !strings.Contains(err.Error(), test.err) {
t.Errorf("wrong error decoding %v: wanted %s, got %v", test.in, test.err, err)
}
continue
var ns NonStruct
if err := encAndDec(s, &ns); err == nil {
t.Error("should get error for struct/non-struct")
- } else if strings.Index(err.Error(), "type") < 0 {
+ } else if !strings.Contains(err.Error(), "type") {
t.Error("for struct/non-struct expected type error; got", err)
}
// Now try the other way
}
if err := encAndDec(ns, &s); err == nil {
t.Error("should get error for non-struct/struct")
- } else if strings.Index(err.Error(), "type") < 0 {
+ } else if !strings.Contains(err.Error(), "type") {
t.Error("for non-struct/struct expected type error; got", err)
}
}
return true
}
-// A version of a bug reported on golang-nuts. Also tests top-level
-// slice of interfaces. The issue was registering *T caused T to be
+// A version of a bug reported on golang-nuts. Also tests top-level
+// slice of interfaces. The issue was registering *T caused T to be
// stored as the concrete type.
func TestInterfaceIndirect(t *testing.T) {
Register(&interfaceIndirectTestT{})
// Also, when the ignored object contains an interface value, it may define
// types. Make sure that skipping the value still defines the types by using
-// the encoder/decoder pair to send a value afterwards. If an interface
+// the encoder/decoder pair to send a value afterwards. If an interface
// is sent, its type in the test is always NewType0, so this checks that the
// encoder and decoder don't skew with respect to type definitions.
// There was an error check comparing the length of the input with the
// length of the slice being decoded. It was wrong because the next
// thing in the input might be a type definition, which would lead to
-// an incorrect length check. This test reproduces the corner case.
+// an incorrect length check. This test reproduces the corner case.
type Z struct {
}
// Errors in decoding and encoding are handled using panic and recover.
// Panics caused by user error (that is, everything except run-time panics
// such as "index out of bounds" errors) do not leave the file that caused
-// them, but are instead turned into plain error returns. Encoding and
+// them, but are instead turned into plain error returns. Encoding and
// decoding functions and methods that do not return an error either use
// panic to report an error or are guaranteed error-free.
}
// catchError is meant to be used as a deferred function to turn a panic(gobError) into a
-// plain error. It overwrites the error return of the function that deferred its call.
+// plain error. It overwrites the error return of the function that deferred its call.
func catchError(err *error) {
if e := recover(); e != nil {
ge, ok := e.(gobError)
// registered. We registered it in the calling function.
// Pass pointer to interface so Encode sees (and hence sends) a value of
- // interface type. If we passed p directly it would see the concrete type instead.
+ // interface type. If we passed p directly it would see the concrete type instead.
// See the blog post, "The Laws of Reflection" for background.
err := enc.Encode(&p)
if err != nil {
// This example shows the basic usage of the package: Create an encoder,
// transmit some values, receive them with a decoder.
func Example_basic() {
- // Initialize the encoder and decoder. Normally enc and dec would be
+ // Initialize the encoder and decoder. Normally enc and dec would be
// bound to network connections and the encoder and decoder would
// run in different processes.
var network bytes.Buffer // Stand-in for a network connection
}
// As long as the fields have the same name and implement the
-// interface, we can cross-connect them. Not sure it's useful
+// interface, we can cross-connect them. Not sure it's useful
// and may even be bad but it works and it's hard to prevent
// without exposing the contents of the object, which would
// defeat the purpose.
}
// Test that we can use a value then a pointer type of a GobEncoder
-// in the same encoded value. Bug 4647.
+// in the same encoded value. Bug 4647.
func TestGobEncoderValueThenPointer(t *testing.T) {
v := ValueGobber("forty-two")
w := ValueGobber("six-by-nine")
if err == nil {
t.Fatal("expected decode error for mismatched fields (encoder to non-decoder)")
}
- if strings.Index(err.Error(), "type") < 0 {
+ if !strings.Contains(err.Error(), "type") {
t.Fatal("expected type error; got", err)
}
// Non-encoder to GobDecoder: error
if err == nil {
t.Fatal("expected decode error for mismatched fields (non-encoder to decoder)")
}
- if strings.Index(err.Error(), "type") < 0 {
+ if !strings.Contains(err.Error(), "type") {
t.Fatal("expected type error; got", err)
}
}
)
// userTypeInfo stores the information associated with a type the user has handed
-// to the package. It's computed once and stored in a map keyed by reflection
+// to the package. It's computed once and stored in a map keyed by reflection
// type.
type userTypeInfo struct {
user reflect.Type // the type the user handed us
)
// validType returns, and saves, the information associated with user-provided type rt.
-// If the user type is not valid, err will be non-nil. To be used when the error handler
+// If the user type is not valid, err will be non-nil. To be used when the error handler
// is not set up.
func validUserType(rt reflect.Type) (ut *userTypeInfo, err error) {
userTypeLock.RLock()
ut.base = rt
ut.user = rt
// A type that is just a cycle of pointers (such as type T *T) cannot
- // be represented in gobs, which need some concrete data. We use a
+ // be represented in gobs, which need some concrete data. We use a
// cycle detection algorithm from Knuth, Vol 2, Section 3.1, Ex 6,
// pp 539-540. As we step through indirections, run another type at
// half speed. If they meet up, there's a cycle.
// For arrays, maps, and slices, we set the type id after the elements
// are constructed. This is to retain the order of type id allocation after
// a fix made to handle recursive types, which changed the order in
- // which types are built. Delaying the setting in this way preserves
+ // which types are built. Delaying the setting in this way preserves
// type ids while allowing recursive types to be described. Structs,
// done below, were already handling recursion correctly so they
// assign the top-level id before those of the field.
// getType returns the Gob type describing the given reflect.Type.
// Should be called only when handling GobEncoders/Decoders,
-// which may be pointers. All other types are handled through the
+// which may be pointers. All other types are handled through the
// base type, never a pointer.
// typeLock must be held.
func getType(name string, ut *userTypeInfo, rt reflect.Type) (gobType, error) {
// For bootstrapping purposes, we assume that the recipient knows how
// to decode a wireType; it is exactly the wireType struct here, interpreted
// using the gob rules for sending a structure, except that we assume the
-// ids for wireType and structType etc. are known. The relevant pieces
+// ids for wireType and structType etc. are known. The relevant pieces
// are built in encode.go's init() function.
// To maintain binary compatibility, if you extend this type, always put
// the new fields last.
//
// Note: Since gobs can be stored permanently, it is good design
// to guarantee the encoding used by a GobEncoder is stable as the
-// software evolves. For instance, it might make sense for GobEncode
+// software evolves. For instance, it might make sense for GobEncode
// to include a version number in the encoding.
type GobEncoder interface {
// GobEncode returns a byte slice representing the encoding of the
}
// Register records a type, identified by a value for that type, under its
-// internal type name. That name will identify the concrete type of a value
-// sent or received as an interface variable. Only types that will be
+// internal type name. That name will identify the concrete type of a value
+// sent or received as an interface variable. Only types that will be
// transferred as implementations of interface values need to be registered.
// Expecting to be used only during initialization, it panics if the mapping
// between types and names is not a bijection.
func EncodedLen(n int) int { return n * 2 }
// Encode encodes src into EncodedLen(len(src))
-// bytes of dst. As a convenience, it returns the number
+// bytes of dst. As a convenience, it returns the number
// of bytes written to dst, but this value is always EncodedLen(len(src)).
// Encode implements hexadecimal encoding.
func Encode(dst, src []byte) int {
dumper := Dumper(&buf)
dumper.Write(data)
dumper.Close()
- return string(buf.Bytes())
+ return buf.String()
}
// Dumper returns a WriteCloser that writes a hex dump of all written data to
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
for i := 0; i < b.N; i++ {
var r codeResponse
if err := Unmarshal(codeJSON, &r); err != nil {
- b.Fatal("Unmmarshal:", err)
+ b.Fatal("Unmarshal:", err)
}
}
b.SetBytes(int64(len(codeJSON)))
var r codeResponse
for i := 0; i < b.N; i++ {
if err := Unmarshal(codeJSON, &r); err != nil {
- b.Fatal("Unmmarshal:", err)
+ b.Fatal("Unmarshal:", err)
}
}
}
// with the following additional rules:
//
// To unmarshal JSON into a pointer, Unmarshal first handles the case of
-// the JSON being the JSON literal null. In that case, Unmarshal sets
-// the pointer to nil. Otherwise, Unmarshal unmarshals the JSON into
-// the value pointed at by the pointer. If the pointer is nil, Unmarshal
+// the JSON being the JSON literal null. In that case, Unmarshal sets
+// the pointer to nil. Otherwise, Unmarshal unmarshals the JSON into
+// the value pointed at by the pointer. If the pointer is nil, Unmarshal
// allocates a new value for it to point to.
//
// To unmarshal JSON into a struct, Unmarshal matches incoming object
if i < v.Len() {
if v.Kind() == reflect.Array {
- // Array. Zero the rest.
+ // Array. Zero the rest.
z := reflect.Zero(v.Type().Elem())
for ; i < v.Len(); i++ {
v.Index(i).Set(z)
}
// The xxxInterface routines build up a value to be stored
-// in an empty interface. They are not strictly necessary,
+// in an empty interface. They are not strictly necessary,
// but they avoid the weight of reflection in this common case.
// valueInterface is like value but returns interface{}
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// an UnsupportedTypeError.
//
// JSON cannot represent cyclic data structures and Marshal does not
-// handle them. Passing cyclic structures to Marshal will result in
+// handle them. Passing cyclic structures to Marshal will result in
// an infinite recursion.
//
func Marshal(v interface{}) ([]byte, error) {
// To deal with recursive types, populate the map with an
// indirect func before we build it. This type waits on the
- // real func (f) to be ready and then calls it. This indirect
+ // real func (f) to be ready and then calls it. This indirect
// func is only used for recursive types.
encoderCache.Lock()
if encoderCache.m == nil {
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// These values are stored in the parseState stack.
// They give the current state of a composite value
-// being scanned. If the parser is inside a nested value
+// being scanned. If the parser is inside a nested value
// the parseState describes the nested state, outermost at entry 0.
const (
parseObjectKey = iota // parsing object key (before colon)
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
dec.buf = newBuf
}
- // Read. Delay error for next iteration (after scan).
+ // Read. Delay error for next iteration (after scan).
n, err := dec.r.Read(dec.buf[len(dec.buf):cap(dec.buf)])
dec.buf = dec.buf[0 : len(dec.buf)+n]
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func decodeError(data, rest []byte) (*Block, []byte) {
// If we get here then we have rejected a likely looking, but
// ultimately invalid PEM block. We need to start over from a new
- // position. We have consumed the preamble line and will have consumed
+ // position. We have consumed the preamble line and will have consumed
// any lines which could be header lines. However, a valid preamble
// line is not a valid header line, therefore we cannot have consumed
// the preamble line for the any subsequent block. Thus, we will always
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// Marshal handles an array or slice by marshalling each of the elements.
// Marshal handles a pointer by marshalling the value it points at or, if the
-// pointer is nil, by writing nothing. Marshal handles an interface value by
+// pointer is nil, by writing nothing. Marshal handles an interface value by
// marshalling the value it contains or, if the interface value is nil, by
-// writing nothing. Marshal handles all other data by writing one or more XML
+// writing nothing. Marshal handles all other data by writing one or more XML
// elements containing the data.
//
// The name for the XML elements is taken from, in order of preference:
// value were part of the outer struct.
//
// If a field uses a tag "a>b>c", then the element c will be nested inside
-// parent elements a and b. Fields that appear next to each other that name
+// parent elements a and b. Fields that appear next to each other that name
// the same parent will be enclosed in one XML element.
//
// See MarshalIndent for an example.
return p.cachedWriteError()
case ProcInst:
// First token to be encoded which is also a ProcInst with target of xml
- // is the xml declaration. The only ProcInst where target of xml is allowed.
+ // is the xml declaration. The only ProcInst where target of xml is allowed.
if t.Target == "xml" && p.Buffered() != 0 {
return fmt.Errorf("xml: EncodeToken of ProcInst xml target only valid for xml declaration, first token encoded")
}
switch k {
case reflect.String:
s := vf.String()
- dashDash = strings.Index(s, "--") >= 0
+ dashDash = strings.Contains(s, "--")
dashLast = s[len(s)-1] == '-'
if !dashDash {
p.WriteString(s)
}
// trim updates the XML context to match the longest common prefix of the stack
-// and the given parents. A closing tag will be written for every parent
-// popped. Passing a zero slice or nil will close all the elements.
+// and the given parents. A closing tag will be written for every parent
+// popped. Passing a zero slice or nil will close all the elements.
func (s *parentStack) trim(parents []string) error {
split := 0
for ; split < len(parents) && split < len(s.stack); split++ {
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// discarded.
//
// Because Unmarshal uses the reflect package, it can only assign
-// to exported (upper case) fields. Unmarshal uses a case-sensitive
+// to exported (upper case) fields. Unmarshal uses a case-sensitive
// comparison to match XML element names to tag values and struct
// field names.
//
//
// * If the struct has a field of type []byte or string with tag
// ",innerxml", Unmarshal accumulates the raw XML nested inside the
-// element in that field. The rest of the rules still apply.
+// element in that field. The rest of the rules still apply.
//
// * If the struct has a field named XMLName of type xml.Name,
// Unmarshal records the element name in that field.
//
// * If the XML element contains comments, they are accumulated in
// the first struct field that has tag ",comment". The struct
-// field may have type []byte or string. If there is no such
+// field may have type []byte or string. If there is no such
// field, the comments are discarded.
//
// * If the XML element contains a sub-element whose name matches
//
// Unmarshal maps an XML element or attribute value to an integer or
// floating-point field by setting the field to the result of
-// interpreting the string value in decimal. There is no check for
+// interpreting the string value in decimal. There is no check for
// overflow.
//
// Unmarshal maps an XML element to an xml.Name by recording the
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// Slices of bytes in the returned token data refer to the
// parser's internal buffer and remain valid only until the next
-// call to Token. To acquire a copy of the bytes, call CopyToken
+// call to Token. To acquire a copy of the bytes, call CopyToken
// or the token's Copy method.
//
// Token expands self-closing elements such as <br/>
// set to the URL identifying its name space when known.
// If Token encounters an unrecognized name space prefix,
// it uses the prefix as the Space rather than report an error.
-func (d *Decoder) Token() (t Token, err error) {
+func (d *Decoder) Token() (Token, error) {
+ var t Token
+ var err error
if d.stk != nil && d.stk.kind == stkEOF {
- err = io.EOF
- return
+ return nil, io.EOF
}
if d.nextToken != nil {
t = d.nextToken
if err == io.EOF && d.stk != nil && d.stk.kind != stkEOF {
err = d.syntaxError("unexpected EOF")
}
- return
+ return t, err
}
if !d.Strict {
}
t = t1
}
- return
+ return t, err
}
const xmlURL = "http://www.w3.org/XML/1998/namespace"
}
// Parsing state - stack holds old name space translations
-// and the current set of open elements. The translations to pop when
+// and the current set of open elements. The translations to pop when
// ending a given tag are *below* it on the stack, which is
// more work but forced on us by XML.
type stack struct {
// These tables were generated by cut and paste from Appendix B of
// the XML spec at http://www.xml.com/axml/testaxml.htm
-// and then reformatting. First corresponds to (Letter | '_' | ':')
+// and then reformatting. First corresponds to (Letter | '_' | ':')
// and second corresponds to NameChar.
var first = &unicode.RangeTable{
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// memstats runtime.Memstats
//
// The package is sometimes only imported for the side effect of
-// registering its HTTP handler and the above variables. To use it
+// registering its HTTP handler and the above variables. To use it
// this way, link this package into your program:
// import _ "expvar"
//
sort.Strings(varKeys)
}
-// Get retrieves a named exported variable.
+// Get retrieves a named exported variable. It returns nil if the name has
+// not been registered.
func Get(name string) Var {
mutex.RLock()
defer mutex.RUnlock()
varKeys = nil
}
+func TestNil(t *testing.T) {
+ RemoveAll()
+ val := Get("missing")
+ if val != nil {
+ t.Errorf("got %v, want nil", val)
+ }
+}
+
func TestInt(t *testing.T) {
RemoveAll()
reqs := NewInt("requests")
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
PanicOnError // Call panic with a descriptive error.
)
-// A FlagSet represents a set of defined flags. The zero value of a FlagSet
+// A FlagSet represents a set of defined flags. The zero value of a FlagSet
// has no name and has ContinueOnError error handling.
type FlagSet struct {
// Usage is the function called when an error occurs while parsing flags.
}
// VisitAll visits the command-line flags in lexicographical order, calling
-// fn for each. It visits all flags, even those not set.
+// fn for each. It visits all flags, even those not set.
func VisitAll(fn func(*Flag)) {
CommandLine.VisitAll(fn)
}
}
// Visit visits the command-line flags in lexicographical order, calling fn
-// for each. It visits only those flags that have been set.
+// for each. It visits only those flags that have been set.
func Visit(fn func(*Flag)) {
CommandLine.Visit(fn)
}
// NFlag returns the number of command-line flags that have been set.
func NFlag() int { return len(CommandLine.actual) }
-// Arg returns the i'th argument. Arg(0) is the first remaining argument
+// Arg returns the i'th argument. Arg(0) is the first remaining argument
// after flags have been processed. Arg returns an empty string if the
// requested element does not exist.
func (f *FlagSet) Arg(i int) string {
return f.args[i]
}
-// Arg returns the i'th command-line argument. Arg(0) is the first remaining argument
+// Arg returns the i'th command-line argument. Arg(0) is the first remaining argument
// after flags have been processed. Arg returns an empty string if the
// requested element does not exist.
func Arg(i int) string {
}
// Parse parses flag definitions from the argument list, which should not
-// include the command name. Must be called after all flags in the FlagSet
+// include the command name. Must be called after all flags in the FlagSet
// are defined and before flags are accessed by the program.
// The return value will be ErrHelp if -help or -h were set but not defined.
func (f *FlagSet) Parse(arguments []string) error {
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
{"%+.3F", float32(-1.0), "-1.000"},
{"%+07.2f", 1.0, "+001.00"},
{"%+07.2f", -1.0, "-001.00"},
+ {"%-07.2f", 1.0, "1.00 "},
+ {"%-07.2f", -1.0, "-1.00 "},
+ {"%+-07.2f", 1.0, "+1.00 "},
+ {"%+-07.2f", -1.0, "-1.00 "},
+ {"%-+07.2f", 1.0, "+1.00 "},
+ {"%-+07.2f", -1.0, "-1.00 "},
{"%+10.2f", +1.0, " +1.00"},
{"%+10.2f", -1.0, " -1.00"},
{"% .3E", -1.0, "-1.000E+00"},
{"%b", 1.0, "4503599627370496p-52"},
// complex values
+ {"%.f", 0i, "(0+0i)"},
+ {"%+.f", 0i, "(+0+0i)"},
+ {"% +.f", 0i, "(+0+0i)"},
{"%+.3e", 0i, "(+0.000e+00+0.000e+00i)"},
{"%+.3f", 0i, "(+0.000+0.000i)"},
{"%+.3g", 0i, "(+0+0i)"},
{"%g", 1.23456789e-3, "0.00123456789"},
{"%g", 1.23456789e20, "1.23456789e+20"},
{"%20e", math.Inf(1), " +Inf"},
+ {"% 20f", math.Inf(1), " Inf"},
+ {"%+20f", math.Inf(1), " +Inf"},
+ {"% +20f", math.Inf(1), " +Inf"},
{"%-20f", math.Inf(-1), "-Inf "},
{"%20g", math.NaN(), " NaN"},
+ {"%+20f", math.NaN(), " +NaN"},
+ {"% +20f", math.NaN(), " +NaN"},
+ {"% -20f", math.NaN(), " NaN "},
+ {"%+-20f", math.NaN(), "+NaN "},
// arrays
{"%v", array, "[1 2 3 4 5]"},
// The "<nil>" show up because maps are printed by
// first obtaining a list of keys and then looking up
- // each key. Since NaNs can be map keys but cannot
+ // each key. Since NaNs can be map keys but cannot
// be fetched directly, the lookup fails and returns a
// zero reflect.Value, which formats as <nil>.
// This test is just to check that it shows the two NaNs at all.
"[%7.2f]",
"[% 7.2f]",
"[%+7.2f]",
+ "[% +7.2f]",
"[%07.2f]",
"[% 07.2f]",
"[%+07.2f]",
+ "[% +07.2f]"
,
};
int main(void) {
int i;
- for(i = 0; i < 9; i++) {
+ for(i = 0; i < 11; i++) {
printf("%s: ", format[i]);
printf(format[i], 1.0);
printf(" ");
[%7.2f]: [ 1.00] [ -1.00]
[% 7.2f]: [ 1.00] [ -1.00]
[%+7.2f]: [ +1.00] [ -1.00]
+ [% +7.2f]: [ +1.00] [ -1.00]
[%07.2f]: [0001.00] [-001.00]
[% 07.2f]: [ 001.00] [-001.00]
[%+07.2f]: [+001.00] [-001.00]
+ [% +07.2f]: [+001.00] [-001.00]
+
*/
{"%.2f", 1.0, "1.00"},
{"%.2f", -1.0, "-1.00"},
{"% 7.2f", -1.0, " -1.00"},
{"%+7.2f", 1.0, " +1.00"},
{"%+7.2f", -1.0, " -1.00"},
+ {"% +7.2f", 1.0, " +1.00"},
+ {"% +7.2f", -1.0, " -1.00"},
{"%07.2f", 1.0, "0001.00"},
{"%07.2f", -1.0, "-001.00"},
{"% 07.2f", 1.0, " 001.00"},
{"% 07.2f", -1.0, "-001.00"},
{"%+07.2f", 1.0, "+001.00"},
{"%+07.2f", -1.0, "-001.00"},
+ {"% +07.2f", 1.0, "+001.00"},
+ {"% +07.2f", -1.0, "-001.00"},
// Complex numbers: exhaustively tested in TestComplexFormatting.
{"%7.2f", 1 + 2i, "( 1.00 +2.00i)"},
{"%+07.2f", -1 - 2i, "(-001.00-002.00i)"},
- // Zero padding does not apply to infinities.
+ // Zero padding does not apply to infinities and NaN.
{"%020f", math.Inf(-1), " -Inf"},
{"%020f", math.Inf(+1), " +Inf"},
+ {"%020f", math.NaN(), " NaN"},
{"% 020f", math.Inf(-1), " -Inf"},
{"% 020f", math.Inf(+1), " Inf"},
+ {"% 020f", math.NaN(), " NaN"},
{"%+020f", math.Inf(-1), " -Inf"},
{"%+020f", math.Inf(+1), " +Inf"},
+ {"%+020f", math.NaN(), " +NaN"},
+ {"%-020f", math.Inf(-1), "-Inf "},
+ {"%-020f", math.Inf(+1), "+Inf "},
+ {"%-020f", math.NaN(), "NaN "},
{"%20f", -1.0, " -1.000000"},
// Make sure we can handle very large widths.
{"%0100f", -1.0, zeroFill("-", 99, "1.000000")},
+ // Use spaces instead of zero if padding to the right.
+ {"%0-5s", "abc", "abc "},
+ {"%-05.1f", 1.0, "1.0 "},
+
// Complex fmt used to leave the plus flag set for future entries in the array
// causing +2+0i and +3+0i instead of 2+0i and 3+0i.
{"%v", []complex64{1, 2, 3}, "[(1+0i) (2+0i) (3+0i)]"},
}
}
+func BenchmarkSprintfPadding(b *testing.B) {
+ b.RunParallel(func(pb *testing.PB) {
+ for pb.Next() {
+ Sprintf("%16f", 1.0)
+ }
+ })
+}
+
func BenchmarkSprintfEmpty(b *testing.B) {
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
}
})
}
+func BenchmarkSprintfBoolean(b *testing.B) {
+ b.RunParallel(func(pb *testing.PB) {
+ for pb.Next() {
+ Sprintf("%t", true)
+ }
+ })
+}
func BenchmarkManyArgs(b *testing.B) {
b.RunParallel(func(pb *testing.PB) {
package fmt
import (
- "math"
"strconv"
"unicode/utf8"
)
unsigned = false
)
-var padZeroBytes = make([]byte, nByte)
-var padSpaceBytes = make([]byte, nByte)
-
-func init() {
- for i := 0; i < nByte; i++ {
- padZeroBytes[i] = '0'
- padSpaceBytes[i] = ' '
- }
-}
-
// flags placed in a separate struct for easy clearing.
type fmtFlags struct {
widPresent bool
f.clearflags()
}
-// computePadding computes left and right padding widths (only one will be non-zero).
-func (f *fmt) computePadding(width int) (padding []byte, leftWidth, rightWidth int) {
- left := !f.minus
- w := f.wid
- if w < 0 {
- left = false
- w = -w
- }
- w -= width
- if w > 0 {
- if left && f.zero {
- return padZeroBytes, w, 0
- }
- if left {
- return padSpaceBytes, w, 0
- } else {
- // can't be zero padding on the right
- return padSpaceBytes, 0, w
- }
- }
- return
-}
-
// writePadding generates n bytes of padding.
-func (f *fmt) writePadding(n int, padding []byte) {
- for n > 0 {
- m := n
- if m > nByte {
- m = nByte
- }
- f.buf.Write(padding[0:m])
- n -= m
+func (f *fmt) writePadding(n int) {
+ if n <= 0 { // No padding bytes needed.
+ return
}
+ buf := *f.buf
+ oldLen := len(buf)
+ newLen := oldLen + n
+ // Make enough room for padding.
+ if newLen > cap(buf) {
+ buf = make(buffer, cap(buf)*2+n)
+ copy(buf, *f.buf)
+ }
+ // Decide which byte the padding should be filled with.
+ padByte := byte(' ')
+ if f.zero {
+ padByte = byte('0')
+ }
+ // Fill padding with padByte.
+ padding := buf[oldLen:newLen]
+ for i := range padding {
+ padding[i] = padByte
+ }
+ *f.buf = buf[:newLen]
}
-// pad appends b to f.buf, padded on left (w > 0) or right (w < 0 or f.minus).
+// pad appends b to f.buf, padded on left (!f.minus) or right (f.minus).
func (f *fmt) pad(b []byte) {
if !f.widPresent || f.wid == 0 {
f.buf.Write(b)
return
}
- padding, left, right := f.computePadding(utf8.RuneCount(b))
- if left > 0 {
- f.writePadding(left, padding)
- }
- f.buf.Write(b)
- if right > 0 {
- f.writePadding(right, padding)
+ width := f.wid - utf8.RuneCount(b)
+ if !f.minus {
+ // left padding
+ f.writePadding(width)
+ f.buf.Write(b)
+ } else {
+ // right padding
+ f.buf.Write(b)
+ f.writePadding(width)
}
}
-// padString appends s to buf, padded on left (w > 0) or right (w < 0 or f.minus).
+// padString appends s to f.buf, padded on left (!f.minus) or right (f.minus).
func (f *fmt) padString(s string) {
if !f.widPresent || f.wid == 0 {
f.buf.WriteString(s)
return
}
- padding, left, right := f.computePadding(utf8.RuneCountInString(s))
- if left > 0 {
- f.writePadding(left, padding)
- }
- f.buf.WriteString(s)
- if right > 0 {
- f.writePadding(right, padding)
+ width := f.wid - utf8.RuneCountInString(s)
+ if !f.minus {
+ // left padding
+ f.writePadding(width)
+ f.buf.WriteString(s)
+ } else {
+ // right padding
+ f.buf.WriteString(s)
+ f.writePadding(width)
}
}
-var (
- trueBytes = []byte("true")
- falseBytes = []byte("false")
-)
-
// fmt_boolean formats a boolean.
func (f *fmt) fmt_boolean(v bool) {
if v {
- f.pad(trueBytes)
+ f.padString("true")
} else {
- f.pad(falseBytes)
+ f.padString("false")
}
}
-// integer; interprets prec but not wid. Once formatted, result is sent to pad()
+// integer; interprets prec but not wid. Once formatted, result is sent to pad()
// and then flags are cleared.
func (f *fmt) integer(a int64, base uint64, signedness bool, digits string) {
// precision of 0 and value of 0 means "print nothing"
// formatFloat formats a float64; it is an efficient equivalent to f.pad(strconv.FormatFloat()...).
func (f *fmt) formatFloat(v float64, verb byte, prec, n int) {
// Format number, reserving space for leading + sign if needed.
- num := strconv.AppendFloat(f.intbuf[0:1], v, verb, prec, n)
+ num := strconv.AppendFloat(f.intbuf[:1], v, verb, prec, n)
if num[1] == '-' || num[1] == '+' {
num = num[1:]
} else {
num[0] = '+'
}
- // Special handling for infinity, which doesn't look like a number so shouldn't be padded with zeros.
- if math.IsInf(v, 0) {
- if f.zero {
- defer func() { f.zero = true }()
- f.zero = false
- }
+ // f.space means to add a leading space instead of a "+" sign unless
+ // the sign is explicitly asked for by f.plus.
+ if f.space && num[0] == '+' && !f.plus {
+ num[0] = ' '
}
- // num is now a signed version of the number.
- // If we're zero padding, want the sign before the leading zeros.
- // Achieve this by writing the sign out and then padding the unsigned number.
- if f.zero && f.widPresent && f.wid > len(num) {
- if f.space && v >= 0 {
- f.buf.WriteByte(' ') // This is what C does: even with zero, f.space means space.
- f.wid--
- } else if f.plus || v < 0 {
- f.buf.WriteByte(num[0])
- f.wid--
+ // Special handling for infinities and NaN,
+ // which don't look like a number so shouldn't be padded with zeros.
+ if num[1] == 'I' || num[1] == 'N' {
+ oldZero := f.zero
+ f.zero = false
+ // Remove sign before NaN if not asked for.
+ if num[1] == 'N' && !f.space && !f.plus {
+ num = num[1:]
}
- f.pad(num[1:])
- return
- }
- // f.space says to replace a leading + with a space.
- if f.space && num[0] == '+' {
- num[0] = ' '
f.pad(num)
+ f.zero = oldZero
return
}
- // Now we know the sign is attached directly to the number, if present at all.
- // We want a sign if asked for, if it's negative, or if it's infinity (+Inf vs. -Inf).
- if f.plus || num[0] == '-' || math.IsInf(v, 0) {
+ // We want a sign if asked for and if the sign is not positive.
+ if f.plus || num[0] != '+' {
+ // If we're zero padding we want the sign before the leading zeros.
+ // Achieve this by writing the sign out and then padding the unsigned number.
+ if f.zero && f.widPresent && f.wid > len(num) {
+ f.buf.WriteByte(num[0])
+ f.wid--
+ num = num[1:]
+ }
f.pad(num)
return
}
f.space = oldSpace
f.plus = oldPlus
f.wid = oldWid
- f.buf.Write(irparenBytes)
+ f.buf.WriteString("i)")
}
"unicode/utf8"
)
-// Some constants in the form of bytes, to avoid string overhead.
-// Needlessly fastidious, I suppose.
-var (
- commaSpaceBytes = []byte(", ")
- nilAngleBytes = []byte("<nil>")
- nilParenBytes = []byte("(nil)")
- nilBytes = []byte("nil")
- mapBytes = []byte("map[")
- percentBangBytes = []byte("%!")
- missingBytes = []byte("(MISSING)")
- badIndexBytes = []byte("(BADINDEX)")
- panicBytes = []byte("(PANIC=")
- extraBytes = []byte("%!(EXTRA ")
- irparenBytes = []byte("i)")
- bytesBytes = []byte("[]byte{")
- badWidthBytes = []byte("%!(BADWIDTH)")
- badPrecBytes = []byte("%!(BADPREC)")
- noVerbBytes = []byte("%!(NOVERB)")
+// Strings for use with buffer.WriteString.
+// This is less overhead than using buffer.Write with byte arrays.
+const (
+ commaSpaceString = ", "
+ nilAngleString = "<nil>"
+ nilParenString = "(nil)"
+ nilString = "nil"
+ mapString = "map["
+ percentBangString = "%!"
+ missingString = "(MISSING)"
+ badIndexString = "(BADINDEX)"
+ panicString = "(PANIC="
+ extraString = "%!(EXTRA "
+ bytesString = "[]byte{"
+ badWidthString = "%!(BADWIDTH)"
+ badPrecString = "%!(BADPREC)"
+ noVerbString = "%!(NOVERB)"
+ invReflectString = "<invalid reflect.Value>"
)
// State represents the printer state passed to custom formatters.
// the flags and options for the operand's format specifier.
type State interface {
// Write is the function to call to emit formatted output to be printed.
- Write(b []byte) (ret int, err error)
+ Write(b []byte) (n int, err error)
// Width returns the value of the width option and whether it has been set.
Width() (wid int, ok bool)
// Precision returns the value of the precision option and whether it has been set.
// Use simple []byte instead of bytes.Buffer to avoid large dependency.
type buffer []byte
-func (b *buffer) Write(p []byte) (n int, err error) {
+func (b *buffer) Write(p []byte) {
*b = append(*b, p...)
- return len(p), nil
}
-func (b *buffer) WriteString(s string) (n int, err error) {
+func (b *buffer) WriteString(s string) {
*b = append(*b, s...)
- return len(s), nil
}
-func (b *buffer) WriteByte(c byte) error {
+func (b *buffer) WriteByte(c byte) {
*b = append(*b, c)
- return nil
}
-func (bp *buffer) WriteRune(r rune) error {
+func (bp *buffer) WriteRune(r rune) {
if r < utf8.RuneSelf {
*bp = append(*bp, byte(r))
- return nil
+ return
}
b := *bp
}
w := utf8.EncodeRune(b[n:n+utf8.UTFMax], r)
*bp = b[:n+w]
- return nil
}
type pp struct {
return false
}
-func (p *pp) add(c rune) {
- p.buf.WriteRune(c)
-}
-
// Implement Write so we can call Fprintf on a pp (through State), for
// recursive use in custom verbs.
func (p *pp) Write(b []byte) (ret int, err error) {
- return p.buf.Write(b)
+ p.buf.Write(b)
+ return len(b), nil
}
// These routines end in 'f' and take a format string.
func (p *pp) unknownType(v reflect.Value) {
if !v.IsValid() {
- p.buf.Write(nilAngleBytes)
+ p.buf.WriteString(nilAngleString)
return
}
p.buf.WriteByte('?')
func (p *pp) badVerb(verb rune) {
p.erroring = true
- p.add('%')
- p.add('!')
- p.add(verb)
- p.add('(')
+ p.buf.WriteString(percentBangString)
+ p.buf.WriteRune(verb)
+ p.buf.WriteByte('(')
switch {
case p.arg != nil:
p.buf.WriteString(reflect.TypeOf(p.arg).String())
- p.add('=')
+ p.buf.WriteByte('=')
p.printArg(p.arg, 'v', 0)
case p.value.IsValid():
p.buf.WriteString(p.value.Type().String())
- p.add('=')
+ p.buf.WriteByte('=')
p.printValue(p.value, 'v', 0)
default:
- p.buf.Write(nilAngleBytes)
+ p.buf.WriteString(nilAngleString)
}
- p.add(')')
+ p.buf.WriteByte(')')
p.erroring = false
}
p.buf.WriteString("[]byte(nil)")
} else {
p.buf.WriteString(typ.String())
- p.buf.Write(nilParenBytes)
+ p.buf.WriteString(nilParenString)
}
return
}
if typ == nil {
- p.buf.Write(bytesBytes)
+ p.buf.WriteString(bytesString)
} else {
p.buf.WriteString(typ.String())
p.buf.WriteByte('{')
for i, c := range v {
if i > 0 {
if p.fmt.sharpV {
- p.buf.Write(commaSpaceBytes)
+ p.buf.WriteString(commaSpaceString)
} else {
p.buf.WriteByte(' ')
}
}
if p.fmt.sharpV {
- p.add('(')
+ p.buf.WriteByte('(')
p.buf.WriteString(value.Type().String())
- p.add(')')
- p.add('(')
+ p.buf.WriteString(")(")
if u == 0 {
- p.buf.Write(nilBytes)
+ p.buf.WriteString(nilString)
} else {
p.fmt0x64(uint64(u), true)
}
- p.add(')')
+ p.buf.WriteByte(')')
} else if verb == 'v' && u == 0 {
- p.buf.Write(nilAngleBytes)
+ p.buf.WriteString(nilAngleString)
} else {
if use0x64 {
p.fmt0x64(uint64(u), !p.fmt.sharp)
// Stringer that fails to guard against nil or a nil pointer for a
// value receiver, and in either case, "<nil>" is a nice result.
if v := reflect.ValueOf(arg); v.Kind() == reflect.Ptr && v.IsNil() {
- p.buf.Write(nilAngleBytes)
+ p.buf.WriteString(nilAngleString)
return
}
// Otherwise print a concise panic message. Most of the time the panic
panic(err)
}
p.fmt.clearflags() // We are done, and for this output we want default behavior.
- p.buf.Write(percentBangBytes)
- p.add(verb)
- p.buf.Write(panicBytes)
+ p.buf.WriteString(percentBangString)
+ p.buf.WriteRune(verb)
+ p.buf.WriteString(panicString)
p.panicking = true
p.printArg(err, 'v', 0)
p.panicking = false
return false
}
-func (p *pp) printArg(arg interface{}, verb rune, depth int) (wasString bool) {
+func (p *pp) printArg(arg interface{}, verb rune, depth int) {
p.arg = arg
p.value = reflect.Value{}
if arg == nil {
- if verb == 'T' || verb == 'v' {
- p.fmt.pad(nilAngleBytes)
- } else {
+ switch verb {
+ case 'T', 'v':
+ p.fmt.padString(nilAngleString)
+ default:
p.badVerb(verb)
}
- return false
+ return
}
// Special processing considerations.
switch verb {
case 'T':
p.printArg(reflect.TypeOf(arg).String(), 's', 0)
- return false
+ return
case 'p':
p.fmtPointer(reflect.ValueOf(arg), verb)
- return false
+ return
}
// Some types can be done without reflection.
p.fmtUint64(uint64(f), verb)
case string:
p.fmtString(f, verb)
- wasString = verb == 's' || verb == 'v'
case []byte:
p.fmtBytes(f, verb, nil, depth)
- wasString = verb == 's'
case reflect.Value:
- return p.printReflectValue(f, verb, depth)
+ p.printReflectValue(f, verb, depth)
+ return
default:
// If the type is not simple, it might have methods.
- if handled := p.handleMethods(verb, depth); handled {
- return false
+ if p.handleMethods(verb, depth) {
+ return
}
// Need to use reflection
- return p.printReflectValue(reflect.ValueOf(arg), verb, depth)
+ p.printReflectValue(reflect.ValueOf(arg), verb, depth)
+ return
}
p.arg = nil
- return
}
// printValue is like printArg but starts with a reflect value, not an interface{} value.
-func (p *pp) printValue(value reflect.Value, verb rune, depth int) (wasString bool) {
+func (p *pp) printValue(value reflect.Value, verb rune, depth int) {
if !value.IsValid() {
- if verb == 'T' || verb == 'v' {
- p.buf.Write(nilAngleBytes)
- } else {
+ switch verb {
+ case 'T', 'v':
+ p.buf.WriteString(nilAngleString)
+ default:
p.badVerb(verb)
}
- return false
+ return
}
// Special processing considerations.
switch verb {
case 'T':
p.printArg(value.Type().String(), 's', 0)
- return false
+ return
case 'p':
p.fmtPointer(value, verb)
- return false
+ return
}
// Handle values with special methods.
if value.CanInterface() {
p.arg = value.Interface()
}
- if handled := p.handleMethods(verb, depth); handled {
- return false
+ if p.handleMethods(verb, depth) {
+ return
}
- return p.printReflectValue(value, verb, depth)
+ p.printReflectValue(value, verb, depth)
}
var byteType = reflect.TypeOf(byte(0))
// printReflectValue is the fallback for both printArg and printValue.
// It uses reflect to print the value.
-func (p *pp) printReflectValue(value reflect.Value, verb rune, depth int) (wasString bool) {
+func (p *pp) printReflectValue(value reflect.Value, verb rune, depth int) {
oldValue := p.value
p.value = value
BigSwitch:
switch f := value; f.Kind() {
case reflect.Invalid:
- p.buf.WriteString("<invalid reflect.Value>")
+ p.buf.WriteString(invReflectString)
case reflect.Bool:
p.fmtBool(f.Bool(), verb)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
if p.fmt.sharpV {
p.buf.WriteString(f.Type().String())
if f.IsNil() {
- p.buf.WriteString("(nil)")
+ p.buf.WriteString(nilParenString)
break
}
p.buf.WriteByte('{')
} else {
- p.buf.Write(mapBytes)
+ p.buf.WriteString(mapString)
}
keys := f.MapKeys()
for i, key := range keys {
if i > 0 {
if p.fmt.sharpV {
- p.buf.Write(commaSpaceBytes)
+ p.buf.WriteString(commaSpaceString)
} else {
p.buf.WriteByte(' ')
}
if p.fmt.sharpV {
p.buf.WriteString(value.Type().String())
}
- p.add('{')
+ p.buf.WriteByte('{')
v := f
t := v.Type()
for i := 0; i < v.NumField(); i++ {
if i > 0 {
if p.fmt.sharpV {
- p.buf.Write(commaSpaceBytes)
+ p.buf.WriteString(commaSpaceString)
} else {
p.buf.WriteByte(' ')
}
if !value.IsValid() {
if p.fmt.sharpV {
p.buf.WriteString(f.Type().String())
- p.buf.Write(nilParenBytes)
+ p.buf.WriteString(nilParenString)
} else {
- p.buf.Write(nilAngleBytes)
+ p.buf.WriteString(nilAngleString)
}
} else {
- wasString = p.printValue(value, verb, depth+1)
+ p.printValue(value, verb, depth+1)
}
case reflect.Array, reflect.Slice:
// Byte slices are special:
}
}
p.fmtBytes(bytes, verb, typ, depth)
- wasString = verb == 's'
break
}
if p.fmt.sharpV {
for i := 0; i < f.Len(); i++ {
if i > 0 {
if p.fmt.sharpV {
- p.buf.Write(commaSpaceBytes)
+ p.buf.WriteString(commaSpaceString)
} else {
p.buf.WriteByte(' ')
}
p.unknownType(f)
}
p.value = oldValue
- return wasString
}
// intFromArg gets the argNumth element of a. On return, isInt reports whether the argument has integer type.
p.fmt.wid, p.fmt.widPresent, argNum = intFromArg(a, argNum)
if !p.fmt.widPresent {
- p.buf.Write(badWidthBytes)
+ p.buf.WriteString(badWidthString)
}
// We have a negative width, so take its value and ensure
p.fmt.precPresent = false
}
if !p.fmt.precPresent {
- p.buf.Write(badPrecBytes)
+ p.buf.WriteString(badPrecString)
}
afterIndex = false
} else {
}
if i >= end {
- p.buf.Write(noVerbBytes)
+ p.buf.WriteString(noVerbString)
continue
}
c, w := utf8.DecodeRuneInString(format[i:])
continue
}
if !p.goodArgNum {
- p.buf.Write(percentBangBytes)
- p.add(c)
- p.buf.Write(badIndexBytes)
+ p.buf.WriteString(percentBangString)
+ p.buf.WriteRune(c)
+ p.buf.WriteString(badIndexString)
continue
} else if argNum >= len(a) { // out of operands
- p.buf.Write(percentBangBytes)
- p.add(c)
- p.buf.Write(missingBytes)
+ p.buf.WriteString(percentBangString)
+ p.buf.WriteRune(c)
+ p.buf.WriteString(missingString)
continue
}
arg := a[argNum]
p.fmt.plusV = true
}
}
+
+ // Use space padding instead of zero padding to the right.
+ if p.fmt.minus {
+ p.fmt.zero = false
+ }
+
p.printArg(arg, c, 0)
}
// out of order, in which case it's too expensive to detect if they've all
// been used and arguably OK if they're not.
if !p.reordered && argNum < len(a) {
- p.buf.Write(extraBytes)
+ p.buf.WriteString(extraString)
for ; argNum < len(a); argNum++ {
arg := a[argNum]
if arg != nil {
}
p.printArg(arg, 'v', 0)
if argNum+1 < len(a) {
- p.buf.Write(commaSpaceBytes)
+ p.buf.WriteString(commaSpaceString)
}
}
p.buf.WriteByte(')')
func (p *pp) doPrint(a []interface{}, addspace, addnewline bool) {
prevString := false
- for argNum := 0; argNum < len(a); argNum++ {
+ for argNum, arg := range a {
p.fmt.clearflags()
- // always add spaces if we're doing Println
- arg := a[argNum]
- if argNum > 0 {
- isString := arg != nil && reflect.TypeOf(arg).Kind() == reflect.String
- if addspace || !isString && !prevString {
- p.buf.WriteByte(' ')
- }
+ isString := arg != nil && reflect.TypeOf(arg).Kind() == reflect.String
+ // Add a space between two non-string arguments or if
+ // explicitly asked for by addspace.
+ if argNum > 0 && (addspace || (!isString && !prevString)) {
+ p.buf.WriteByte(' ')
}
- prevString = p.printArg(arg, 'v', 0)
+ p.printArg(arg, 'v', 0)
+ prevString = isString
}
if addnewline {
p.buf.WriteByte('\n')
"unicode/utf8"
)
-// runeUnreader is the interface to something that can unread runes.
-// If the object provided to Scan does not satisfy this interface,
-// a local buffer will be used to back up the input, but its contents
-// will be lost when Scan returns.
-type runeUnreader interface {
- UnreadRune() error
-}
-
// ScanState represents the scanner state passed to custom scanners.
// Scanners may do rune-at-a-time scanning or ask the ScanState
// to discover the next space-delimited token.
// Token skips space in the input if skipSpace is true, then returns the
// run of Unicode code points c satisfying f(c). If f is nil,
// !unicode.IsSpace(c) is used; that is, the token will hold non-space
- // characters. Newlines are treated appropriately for the operation being
+ // characters. Newlines are treated appropriately for the operation being
// performed; see the package documentation for more information.
// The returned slice points to shared data that may be overwritten
// by the next call to Token, a call to a Scan function using the ScanState
// Scanner is implemented by any value that has a Scan method, which scans
// the input for the representation of a value and stores the result in the
-// receiver, which must be a pointer to be useful. The Scan method is called
+// receiver, which must be a pointer to be useful. The Scan method is called
// for any argument to Scan, Scanf, or Scanln that implements it.
type Scanner interface {
Scan(state ScanState, verb rune) error
}
// Scan scans text read from standard input, storing successive
-// space-separated values into successive arguments. Newlines count
-// as space. It returns the number of items successfully scanned.
+// space-separated values into successive arguments. Newlines count
+// as space. It returns the number of items successfully scanned.
// If that is less than the number of arguments, err will report why.
func Scan(a ...interface{}) (n int, err error) {
return Fscan(os.Stdin, a...)
// Scanf scans text read from standard input, storing successive
// space-separated values into successive arguments as determined by
-// the format. It returns the number of items successfully scanned.
+// the format. It returns the number of items successfully scanned.
// If that is less than the number of arguments, err will report why.
// Newlines in the input must match newlines in the format.
// The one exception: the verb %c always scans the next rune in the
}
// Sscan scans the argument string, storing successive space-separated
-// values into successive arguments. Newlines count as space. It
-// returns the number of items successfully scanned. If that is less
+// values into successive arguments. Newlines count as space. It
+// returns the number of items successfully scanned. If that is less
// than the number of arguments, err will report why.
func Sscan(str string, a ...interface{}) (n int, err error) {
return Fscan((*stringReader)(&str), a...)
}
// Sscanf scans the argument string, storing successive space-separated
-// values into successive arguments as determined by the format. It
+// values into successive arguments as determined by the format. It
// returns the number of items successfully parsed.
// Newlines in the input must match newlines in the format.
func Sscanf(str string, format string, a ...interface{}) (n int, err error) {
}
// Fscan scans text read from r, storing successive space-separated
-// values into successive arguments. Newlines count as space. It
-// returns the number of items successfully scanned. If that is less
+// values into successive arguments. Newlines count as space. It
+// returns the number of items successfully scanned. If that is less
// than the number of arguments, err will report why.
func Fscan(r io.Reader, a ...interface{}) (n int, err error) {
s, old := newScanState(r, true, false)
}
// Fscanf scans text read from r, storing successive space-separated
-// values into successive arguments as determined by the format. It
+// values into successive arguments as determined by the format. It
// returns the number of items successfully parsed.
// Newlines in the input must match newlines in the format.
func Fscanf(r io.Reader, format string, a ...interface{}) (n int, err error) {
// ss is the internal implementation of ScanState.
type ss struct {
- rr io.RuneReader // where to read input
- buf buffer // token accumulator
- peekRune rune // one-rune lookahead
- prevRune rune // last rune returned by ReadRune
- count int // runes consumed so far.
- atEOF bool // already read EOF
+ rs io.RuneScanner // where to read input
+ buf buffer // token accumulator
+ count int // runes consumed so far.
+ atEOF bool // already read EOF
ssave
}
}
func (s *ss) ReadRune() (r rune, size int, err error) {
- if s.peekRune >= 0 {
- s.count++
- r = s.peekRune
- size = utf8.RuneLen(r)
- s.prevRune = r
- s.peekRune = -1
- return
- }
- if s.atEOF || s.nlIsEnd && s.prevRune == '\n' || s.count >= s.argLimit {
+ if s.atEOF || s.count >= s.argLimit {
err = io.EOF
return
}
- r, size, err = s.rr.ReadRune()
+ r, size, err = s.rs.ReadRune()
if err == nil {
s.count++
- s.prevRune = r
+ if s.nlIsEnd && r == '\n' {
+ s.atEOF = true
+ }
} else if err == io.EOF {
s.atEOF = true
}
}
func (s *ss) UnreadRune() error {
- if u, ok := s.rr.(runeUnreader); ok {
- u.UnreadRune()
- } else {
- s.peekRune = s.prevRune
- }
- s.prevRune = -1
+ s.rs.UnreadRune()
+ s.atEOF = false
s.count--
return nil
}
}
// readRune is a structure to enable reading UTF-8 encoded code points
-// from an io.Reader. It is used if the Reader given to the scanner does
-// not already implement io.RuneReader.
+// from an io.Reader. It is used if the Reader given to the scanner does
+// not already implement io.RuneScanner.
type readRune struct {
- reader io.Reader
- buf [utf8.UTFMax]byte // used only inside ReadRune
- pending int // number of bytes in pendBuf; only >0 for bad UTF-8
- pendBuf [utf8.UTFMax]byte // bytes left over
+ reader io.Reader
+ buf [utf8.UTFMax]byte // used only inside ReadRune
+ pending int // number of bytes in pendBuf; only >0 for bad UTF-8
+ pendBuf [utf8.UTFMax]byte // bytes left over
+ peekRune rune // if >=0 next rune; when <0 is ^(previous Rune)
}
// readByte returns the next byte from the input, which may be
r.pending--
return
}
- n, err := io.ReadFull(r.reader, r.pendBuf[0:1])
- if n != 1 {
- return 0, err
+ _, err = r.reader.Read(r.pendBuf[:1])
+ if err != nil {
+ return
}
return r.pendBuf[0], err
}
-// unread saves the bytes for the next read.
-func (r *readRune) unread(buf []byte) {
- copy(r.pendBuf[r.pending:], buf)
- r.pending += len(buf)
-}
-
// ReadRune returns the next UTF-8 encoded code point from the
// io.Reader inside r.
func (r *readRune) ReadRune() (rr rune, size int, err error) {
+ if r.peekRune >= 0 {
+ rr = r.peekRune
+ r.peekRune = ^r.peekRune
+ size = utf8.RuneLen(rr)
+ return
+ }
r.buf[0], err = r.readByte()
if err != nil {
- return 0, 0, err
+ return
}
if r.buf[0] < utf8.RuneSelf { // fast check for common ASCII case
rr = rune(r.buf[0])
size = 1 // Known to be 1.
+ // Flip the bits of the rune so it's available to UnreadRune.
+ r.peekRune = ^rr
return
}
var n int
- for n = 1; !utf8.FullRune(r.buf[0:n]); n++ {
+ for n = 1; !utf8.FullRune(r.buf[:n]); n++ {
r.buf[n], err = r.readByte()
if err != nil {
if err == io.EOF {
return
}
}
- rr, size = utf8.DecodeRune(r.buf[0:n])
- if size < n { // an error
- r.unread(r.buf[size:n])
+ rr, size = utf8.DecodeRune(r.buf[:n])
+ if size < n { // an error, save the bytes for the next read
+ copy(r.pendBuf[r.pending:], r.buf[size:n])
+ r.pending += n - size
}
+ // Flip the bits of the rune so it's available to UnreadRune.
+ r.peekRune = ^rr
return
}
+func (r *readRune) UnreadRune() error {
+ if r.peekRune >= 0 {
+ return errors.New("fmt: scanning called UnreadRune with no rune available")
+ }
+ // Reverse bit flip of previously read rune to obtain valid >=0 state.
+ r.peekRune = ^r.peekRune
+ return nil
+}
+
var ssFree = sync.Pool{
New: func() interface{} { return new(ss) },
}
// newScanState allocates a new ss struct or grabs a cached one.
func newScanState(r io.Reader, nlIsSpace, nlIsEnd bool) (s *ss, old ssave) {
s = ssFree.Get().(*ss)
- if rr, ok := r.(io.RuneReader); ok {
- s.rr = rr
+ if rs, ok := r.(io.RuneScanner); ok {
+ s.rs = rs
} else {
- s.rr = &readRune{reader: r}
+ s.rs = &readRune{reader: r, peekRune: -1}
}
s.nlIsSpace = nlIsSpace
s.nlIsEnd = nlIsEnd
- s.prevRune = -1
- s.peekRune = -1
s.atEOF = false
s.limit = hugeWid
s.argLimit = hugeWid
return
}
s.buf = s.buf[:0]
- s.rr = nil
+ s.rs = nil
ssFree.Put(s)
}
}
}
-// token returns the next space-delimited string from the input. It
-// skips white space. For Scanln, it stops at newlines. For Scan,
+// token returns the next space-delimited string from the input. It
+// skips white space. For Scanln, it stops at newlines. For Scan,
// newlines are treated as spaces.
func (s *ss) token(skipSpace bool, f func(rune) bool) []byte {
if skipSpace {
s.UnreadRune()
}
-// accept checks the next rune in the input. If it's a byte (sic) in the string, it puts it in the
+// accept checks the next rune in the input. If it's a byte (sic) in the string, it puts it in the
// buffer and returns true. Otherwise it returns false.
func (s *ss) accept(ok string) bool {
return s.consume(ok, true)
if !s.okVerb(verb, "tv", "boolean") {
return false
}
- // Syntax-checking a boolean is annoying. We're not fastidious about case.
+ // Syntax-checking a boolean is annoying. We're not fastidious about case.
switch s.getRune() {
case '0':
return false
}
// scanInt returns the value of the integer represented by the next
-// token, checking for overflow. Any error is stored in s.err.
+// token, checking for overflow. Any error is stored in s.err.
func (s *ss) scanInt(verb rune, bitSize int) int64 {
if verb == 'c' {
return s.scanRune(bitSize)
}
// scanUint returns the value of the unsigned integer represented
-// by the next token, checking for overflow. Any error is stored in s.err.
+// by the next token, checking for overflow. Any error is stored in s.err.
func (s *ss) scanUint(verb rune, bitSize int) uint64 {
if verb == 'c' {
return uint64(s.scanRune(bitSize))
return string(s.buf)
case '"':
// Double-quoted: Include the quotes and let strconv.Unquote do the backslash escapes.
- s.buf.WriteRune(quote)
+ s.buf.WriteByte('"')
for {
r := s.mustReadRune()
s.buf.WriteRune(r)
}
// doScanf does the real work when scanning with a format string.
-// At the moment, it handles only pointers to basic types.
+// At the moment, it handles only pointers to basic types.
func (s *ss) doScanf(format string, a []interface{}) (numProcessed int, err error) {
defer errorHandler(&err)
end := len(format) - 1
}
// Either we failed to advance, we have a percent character, or we ran out of input.
if format[i] != '%' {
- // Can't advance format. Why not?
+ // Can't advance format. Why not?
if w < 0 {
s.errorString("input does not match format")
}
if err != nil {
if test.err == "" {
t.Errorf("got error scanning (%q, %q): %q", test.format, test.text, err)
- } else if strings.Index(err.Error(), test.err) < 0 {
+ } else if !strings.Contains(err.Error(), test.err) {
t.Errorf("got wrong error scanning (%q, %q): %q; expected %q", test.format, test.text, err, test.err)
}
continue
_, err := Fscan(r, a)
if err == nil {
t.Error("expected error scanning non-pointer")
- } else if strings.Index(err.Error(), "pointer") < 0 {
+ } else if !strings.Contains(err.Error(), "pointer") {
t.Errorf("expected pointer error scanning non-pointer, got: %s", err)
}
}
_, err := Sscanln("1 x\n", &a)
if err == nil {
t.Error("expected error scanning string missing newline")
- } else if strings.Index(err.Error(), "newline") < 0 {
+ } else if !strings.Contains(err.Error(), "newline") {
t.Errorf("expected newline error scanning string missing newline, got: %s", err)
}
}
_, err := Fscanln(r, &a, &b)
if err == nil {
t.Error("expected error scanning string with extra newline")
- } else if strings.Index(err.Error(), "newline") < 0 {
+ } else if !strings.Contains(err.Error(), "newline") {
t.Errorf("expected newline error scanning string with extra newline, got: %s", err)
}
}
type TwoLines string
-// Scan attempts to read two lines into the object. Scanln should prevent this
+// Scan attempts to read two lines into the object. Scanln should prevent this
// because it stops at newline; Scan and Scanf should be fine.
func (t *TwoLines) Scan(state ScanState, verb rune) error {
chars := make([]rune, 0, 100)
}
}
+func BenchmarkScanRecursiveIntReaderWrapper(b *testing.B) {
+ b.ResetTimer()
+ ints := makeInts(intCount)
+ var r RecursiveInt
+ for i := b.N - 1; i >= 0; i-- {
+ buf := newReader(string(ints))
+ b.StartTimer()
+ Fscan(buf, &r)
+ b.StopTimer()
+ }
+}
+
// Issue 9124.
// %x on bytes couldn't handle non-space bytes terminating the scan.
func TestHexBytes(t *testing.T) {
specs := d.Specs[:0]
for j, s := range d.Specs {
if j > i && fset.Position(s.Pos()).Line > 1+fset.Position(d.Specs[j-1].End()).Line {
- // j begins a new run. End this one.
+ // j begins a new run. End this one.
specs = append(specs, sortSpecs(fset, f, d.Specs[i:j])...)
i = j
}
// struct fields for which f(fieldname, fieldvalue) is true are
// printed; all others are filtered from the output. Unexported
// struct fields are never printed.
-//
-func Fprint(w io.Writer, fset *token.FileSet, x interface{}, f FieldFilter) (err error) {
+func Fprint(w io.Writer, fset *token.FileSet, x interface{}, f FieldFilter) error {
+ return fprint(w, fset, x, f)
+}
+
+func fprint(w io.Writer, fset *token.FileSet, x interface{}, f FieldFilter) (err error) {
// setup printer
p := printer{
output: w,
// indexed by package id (canonical import path).
// An Importer must determine the canonical import path and
// check the map to see if it is already present in the imports map.
-// If so, the Importer can return the map entry. Otherwise, the
+// If so, the Importer can return the map entry. Otherwise, the
// Importer should load the package data for the given path into
// a new *Object (pkg), record pkg in the imports map, and then
// return pkg.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
InstallSuffix string
// By default, Import uses the operating system's file system calls
- // to read directories and files. To read from other sources,
- // callers can set the following functions. They all have default
+ // to read directories and files. To read from other sources,
+ // callers can set the following functions. They all have default
// behaviors that use the local file system, so clients need only set
// the functions whose behaviors they wish to change.
// ReadDir returns a slice of os.FileInfo, sorted by Name,
// describing the content of the named directory.
// If ReadDir is nil, Import uses ioutil.ReadDir.
- ReadDir func(dir string) (fi []os.FileInfo, err error)
+ ReadDir func(dir string) ([]os.FileInfo, error)
// OpenFile opens a file (not a directory) for reading.
// If OpenFile is nil, Import uses os.Open.
- OpenFile func(path string) (r io.ReadCloser, err error)
+ OpenFile func(path string) (io.ReadCloser, error)
}
// joinPath calls ctxt.JoinPath (if not nil) or else filepath.Join.
// if set, or else the compiled code's GOARCH, GOOS, and GOROOT.
var Default Context = defaultContext()
-// Also known to cmd/dist/build.go.
-var cgoEnabled = map[string]bool{
- "darwin/386": true,
- "darwin/amd64": true,
- "darwin/arm": true,
- "darwin/arm64": true,
- "dragonfly/amd64": true,
- "freebsd/386": true,
- "freebsd/amd64": true,
- "freebsd/arm": true,
- "linux/386": true,
- "linux/amd64": true,
- "linux/arm": true,
- "linux/arm64": true,
- "linux/ppc64le": true,
- "android/386": true,
- "android/amd64": true,
- "android/arm": true,
- "netbsd/386": true,
- "netbsd/amd64": true,
- "netbsd/arm": true,
- "openbsd/386": true,
- "openbsd/amd64": true,
- "solaris/amd64": true,
- "windows/386": true,
- "windows/amd64": true,
-}
-
func defaultContext() Context {
var c Context
const (
// If FindOnly is set, Import stops after locating the directory
- // that should contain the sources for a package. It does not
+ // that should contain the sources for a package. It does not
// read any files in the directory.
FindOnly ImportMode = 1 << iota
CXXFiles []string // .cc, .cpp and .cxx source files
MFiles []string // .m (Objective-C) source files
HFiles []string // .h, .hh, .hpp and .hxx source files
+ FFiles []string // .f, .F, .for and .f90 Fortran source files
SFiles []string // .s source files
SwigFiles []string // .swig files
SwigCXXFiles []string // .swigcxx files
CgoCFLAGS []string // Cgo CFLAGS directives
CgoCPPFLAGS []string // Cgo CPPFLAGS directives
CgoCXXFLAGS []string // Cgo CXXFLAGS directives
+ CgoFFLAGS []string // Cgo FFLAGS directives
CgoLDFLAGS []string // Cgo LDFLAGS directives
CgoPkgConfig []string // Cgo pkg-config directives
continue
}
- // Going to save the file. For non-Go files, can stop here.
+ // Going to save the file. For non-Go files, can stop here.
switch ext {
case ".c":
p.CFiles = append(p.CFiles, name)
case ".h", ".hh", ".hpp", ".hxx":
p.HFiles = append(p.HFiles, name)
continue
+ case ".f", ".F", ".for", ".f90":
+ p.FFiles = append(p.FFiles, name)
+ continue
case ".s":
p.SFiles = append(p.SFiles, name)
continue
}
switch ext {
- case ".go", ".c", ".cc", ".cxx", ".cpp", ".m", ".s", ".h", ".hh", ".hpp", ".hxx", ".S", ".swig", ".swigcxx":
+ case ".go", ".c", ".cc", ".cxx", ".cpp", ".m", ".s", ".h", ".hh", ".hpp", ".hxx", ".f", ".F", ".f90", ".S", ".swig", ".swigcxx":
// tentatively okay - read to make sure
case ".syso":
// binary, no reading
// lines beginning with '// +build' are taken as build directives.
//
// The file is accepted only if each such line lists something
-// matching the file. For example:
+// matching the file. For example:
//
// // +build windows linux
//
di.CgoCPPFLAGS = append(di.CgoCPPFLAGS, args...)
case "CXXFLAGS":
di.CgoCXXFLAGS = append(di.CgoCXXFLAGS, args...)
+ case "FFLAGS":
+ di.CgoFFLAGS = append(di.CgoFFLAGS, args...)
case "LDFLAGS":
di.CgoLDFLAGS = append(di.CgoLDFLAGS, args...)
case "pkg-config":
// the result is safe for the shell.
func expandSrcDir(str string, srcdir string) (string, bool) {
// "\" delimited paths cause safeCgoName to fail
- // so convert native paths with a different delimeter
+ // so convert native paths with a different delimiter
// to "/" before starting (eg: on windows).
srcdir = filepath.ToSlash(srcdir)
// Single quotes and double quotes are recognized to prevent splitting within the
// quoted region, and are removed from the resulting substrings. If a quote in s
// isn't closed err will be set and r will have the unclosed argument as the
-// last element. The backslash is used for escaping.
+// last element. The backslash is used for escaping.
//
// For example, the following string:
//
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
t.Errorf("shouldBuild(file1) = false, want true")
}
if !reflect.DeepEqual(m, want1) {
- t.Errorf("shoudBuild(file1) tags = %v, want %v", m, want1)
+ t.Errorf("shouldBuild(file1) tags = %v, want %v", m, want1)
}
m = map[string]bool{}
if ctx.shouldBuild([]byte(file2), m) {
- t.Errorf("shouldBuild(file2) = true, want fakse")
+ t.Errorf("shouldBuild(file2) = true, want false")
}
if !reflect.DeepEqual(m, want2) {
- t.Errorf("shoudBuild(file2) tags = %v, want %v", m, want2)
+ t.Errorf("shouldBuild(file2) tags = %v, want %v", m, want2)
}
m = map[string]bool{}
t.Errorf("shouldBuild(file3) = false, want true")
}
if !reflect.DeepEqual(m, want3) {
- t.Errorf("shoudBuild(file3) tags = %v, want %v", m, want3)
+ t.Errorf("shouldBuild(file3) tags = %v, want %v", m, want3)
}
}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
)
// pkgDeps defines the expected dependencies between packages in
-// the Go source tree. It is a statement of policy.
+// the Go source tree. It is a statement of policy.
// Changes should not be made to this map without prior discussion.
//
// The map contains two kinds of entries:
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// The Go path is a list of directory trees containing Go source code.
// It is consulted to resolve imports that cannot be found in the standard
-// Go tree. The default path is the value of the GOPATH environment
+// Go tree. The default path is the value of the GOPATH environment
// variable, interpreted as a path list appropriate to the operating system
// (on Unix, the variable is a colon-separated string;
// on Windows, a semicolon-separated string;
//
// Each directory listed in the Go path must have a prescribed structure:
//
-// The src/ directory holds source code. The path below 'src' determines
+// The src/ directory holds source code. The path below 'src' determines
// the import path or executable name.
//
// The pkg/ directory holds installed package objects.
//
// The bin/ directory holds compiled commands.
// Each command is named for its source directory, but only
-// using the final element, not the entire path. That is, the
+// using the final element, not the entire path. That is, the
// command with source in DIR/src/foo/quux is installed into
-// DIR/bin/quux, not DIR/bin/foo/quux. The foo/ is stripped
+// DIR/bin/quux, not DIR/bin/foo/quux. The foo/ is stripped
// so that you can add DIR/bin to your PATH to get at the
// installed commands.
//
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// only the first maxLen-3 runes; then add "...".
i := 0
for n := 0; n < maxLen-3; n++ {
- _, size := utf8.DecodeRuneInString(s)
+ _, size := utf8.DecodeRuneInString(s[i:])
i += size
}
s = s[:i] + "..."
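The one-character fix above (`s[i:]` instead of `s`) matters for multi-byte UTF-8: decoding always at the start of `s` advances `i` by the width of the first rune on every iteration, which can cut the string in the middle of a rune. A minimal sketch of the corrected pattern (illustrative, not the library code itself):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// truncate shortens s to at most maxLen runes, appending "..." when it
// truncates. Note the decode at the current offset s[i:], matching the
// fix above; decoding s instead would measure the first rune repeatedly.
func truncate(s string, maxLen int) string {
	if utf8.RuneCountInString(s) <= maxLen {
		return s
	}
	i := 0
	for n := 0; n < maxLen-3; n++ {
		_, size := utf8.DecodeRuneInString(s[i:]) // width of the rune at offset i
		i += size
	}
	return s[:i] + "..."
}

func main() {
	fmt.Println(truncate("héllo wörld, this is long", 10)) // héllo w...
}
```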
// MakeUnknown returns the Unknown value.
func MakeUnknown() Value { return unknownVal{} }
-// MakeBool returns the Bool value for x.
+// MakeBool returns the Bool value for b.
func MakeBool(b bool) Value { return boolVal(b) }
-// MakeString returns the String value for x.
+// MakeString returns the String value for s.
func MakeString(s string) Value { return stringVal(s) }
// MakeInt64 returns the Int value for x.
// MakeFromLiteral returns the corresponding integer, floating-point,
// imaginary, character, or string value for a Go literal string. The
-// tok value must be one of token.INT, token.FLOAT, toke.IMAG, token.
-// CHAR, or token.STRING. The final argument must be zero.
+// tok value must be one of token.INT, token.FLOAT, token.IMAG,
+// token.CHAR, or token.STRING. The final argument must be zero.
// If the literal string syntax is invalid, the result is an Unknown.
func MakeFromLiteral(lit string, tok token.Token, zero uint) Value {
if zero != 0 {
// String tests
var xxx = strings.Repeat("x", 68)
+var issue14262 = `"بموجب الشروط التالية نسب المصنف — يجب عليك أن تنسب العمل بالطريقة التي تحددها المؤلف أو المرخص (ولكن ليس بأي حال من الأحوال أن توحي وتقترح بتحول أو استخدامك للعمل). المشاركة على قدم المساواة — إذا كنت يعدل ، والتغيير ، أو الاستفادة من هذا العمل ، قد ينتج عن توزيع العمل إلا في ظل تشابه او تطابق فى واحد لهذا الترخيص."`
var stringTests = []struct {
input, short, exact string
{`"` + xxx + `xx"`, `"` + xxx + `xx"`, `"` + xxx + `xx"`},
{`"` + xxx + `xxx"`, `"` + xxx + `...`, `"` + xxx + `xxx"`},
{`"` + xxx + xxx + `xxx"`, `"` + xxx + `...`, `"` + xxx + xxx + `xxx"`},
+ {issue14262, `"بموجب الشروط التالية نسب المصنف — يجب عليك أن تنسب العمل بالطريقة ال...`, issue14262},
// Int
{"0", "0", "0"},
}
// exclude lines with illegal characters
- if strings.IndexAny(line, ",.;:!?+*/=()[]{}_^°&§~%#@<\">\\") >= 0 {
+ if strings.ContainsAny(line, ",.;:!?+*/=()[]{}_^°&§~%#@<\">\\") {
return ""
}
// ToText prepares comment text for presentation in textual output.
// It wraps paragraphs of text to width or fewer Unicode code points
-// and then prefixes each line with the indent. In preformatted sections
+// and then prefixes each line with the indent. In preformatted sections
// (such as program text), it prefixes each non-blank line with preIndent.
func ToText(w io.Writer, text string, indent, preIndent string, width int) {
l := lineWrapper{
result BenchmarkResult
}
-// StartTimer starts timing a test. This function is called automatically
+// StartTimer starts timing a test. This function is called automatically
// before a benchmark starts, but it can also be used to resume timing after
// a call to StopTimer.
func (b *B) StartTimer() {
}
}
-// StopTimer stops timing a test. This can be used to pause the timer
+// StopTimer stops timing a test. This can be used to pause the timer
// while performing complex initialization that you don't
// want to measure.
func (b *B) StopTimer() {
return b.result
}
-// launch launches the benchmark function. It gradually increases the number
+// launch launches the benchmark function. It gradually increases the number
// of benchmark iterations until the benchmark runs for a second in order
-// to get a reasonable measurement. It prints timing information in this form
+// to get a reasonable measurement. It prints timing information in this form
// testing.BenchmarkHello 100000 19 ns/op
// launch is run by the run function as a separate goroutine.
func (b *B) launch() {
// SetBytes records the number of bytes processed in a single ...
func (b *B) SetBytes(n int64)
- // StartTimer starts timing a test. This function is called ...
+ // StartTimer starts timing a test. This function is called ...
func (b *B) StartTimer()
- // StopTimer stops timing a test. This can be used to pause the ...
+ // StopTimer stops timing a test. This can be used to pause the ...
func (b *B) StopTimer()
// The results of a benchmark run.
//
var (
// The short flag requests that tests run more quickly, but its functionality
- // is provided by test writers themselves. The testing package is just its
- // home. The all.bash installation script sets it to make installation more
+ // is provided by test writers themselves. The testing package is just its
+ // home. The all.bash installation script sets it to make installation more
// efficient, but by default the flag is off so a plain "go test" will do a
// full test of the package.
short = flag.Bool("test.short", false, "run smaller test suite to save time")
// SetBytes records the number of bytes processed in a single ...
func (b *B) SetBytes(n int64)
- // StartTimer starts timing a test. This function is called ...
+ // StartTimer starts timing a test. This function is called ...
func (b *B) StartTimer()
- // StopTimer stops timing a test. This can be used to pause the ...
+ // StopTimer stops timing a test. This can be used to pause the ...
func (b *B) StopTimer()
- // launch launches the benchmark function. It gradually increases ...
+ // launch launches the benchmark function. It gradually increases ...
func (b *B) launch()
// log generates the output. It's always at the same stack depth.
// SetBytes records the number of bytes processed in a single ...
func (b *B) SetBytes(n int64)
- // StartTimer starts timing a test. This function is called ...
+ // StartTimer starts timing a test. This function is called ...
func (b *B) StartTimer()
- // StopTimer stops timing a test. This can be used to pause the ...
+ // StopTimer stops timing a test. This can be used to pause the ...
func (b *B) StopTimer()
// The results of a benchmark run.
// }
// }
// The benchmark package will vary b.N until the benchmark function lasts
-// long enough to be timed reliably. The output
+// long enough to be timed reliably. The output
// testing.BenchmarkHello 10000000 282 ns/op
// means that the loop ran 10000000 times at a speed of 282 ns per loop.
//
var (
// The short flag requests that tests run more quickly, but its functionality
- // is provided by test writers themselves. The testing package is just its
- // home. The all.bash installation script sets it to make installation more
+ // is provided by test writers themselves. The testing package is just its
+ // home. The all.bash installation script sets it to make installation more
// efficient, but by default the flag is off so a plain "go test" will do a
// full test of the package.
short = flag.Bool("test.short", false, "run smaller test suite to save time")
// This previous version duplicated code (those lines are in
// tRunner no matter what), but worse the goroutine teardown
// implicit in runtime.Goexit was not guaranteed to complete
- // before the test exited. If a test deferred an important cleanup
+ // before the test exited. If a test deferred an important cleanup
// function (like removing temporary files), there was no guarantee
- // it would run on a test failure. Because we send on c.signal during
+ // it would run on a test failure. Because we send on c.signal during
// a top-of-stack deferred function now, we know that the send
// only happens after any other stacked defers have completed.
runtime.Goexit()
) {
// Try as whole source file.
file, err = parser.ParseFile(fset, filename, src, parserMode)
- // If there's no error, return. If the error is that the source file didn't begin with a
+ // If there's no error, return. If the error is that the source file didn't begin with a
// package line and source fragments are ok, fall through to
- // try as a source fragment. Stop and return on any other error.
+ // try as a source fragment. Stop and return on any other error.
if err == nil || !fragmentOk || !strings.Contains(err.Error(), "expected 'package'") {
return
}
// If this is a statement list, make it a source file
// by inserting a package clause and turning the list
- // into a function body. This handles expressions too.
+ // into a function body. This handles expressions too.
// Insert using a ;, not a newline, so that the line numbers
// in fsrc match the ones in src. Add an extra '\n' before the '}'
// to make sure comments are flushed before the '}'.
// package unsafe
types.Typ[types.UnsafePointer],
+
+ // any type, for builtin export data
+ // TODO(mdempsky): Provide an actual Type value to represent "any"?
+ types.Typ[types.Invalid],
}
if string(line) == "!<arch>\n" {
// Archive file. Scan to __.PKGDEF.
var name string
- var size int
- if name, size, err = readGopackHeader(r); err != nil {
+ if name, _, err = readGopackHeader(r); err != nil {
return
}
- // Optional leading __.GOSYMDEF or __.SYMDEF.
- // Read and discard.
- if name == "__.SYMDEF" || name == "__.GOSYMDEF" {
- const block = 4096
- tmp := make([]byte, block)
- for size > 0 {
- n := size
- if n > block {
- n = block
- }
- if _, err = io.ReadFull(r, tmp[:n]); err != nil {
- return
- }
- size -= n
- }
-
- if name, _, err = readGopackHeader(r); err != nil {
- return
- }
- }
-
- // First real entry should be __.PKGDEF.
+ // First entry should be __.PKGDEF.
if name != "__.PKGDEF" {
err = errors.New("go archive is missing __.PKGDEF")
return
// FindPkg returns the filename and unique package id for an import
// path based on package information provided by build.Import (using
-// the build.Default build.Context).
+// the build.Default build.Context). A relative srcDir is interpreted
+// relative to the current working directory.
// If no file was found, an empty filename is returned.
//
func FindPkg(path, srcDir string) (filename, id string) {
default:
// "x" -> "$GOPATH/pkg/$GOOS_$GOARCH/x.ext", "x"
// Don't require the source files to be present.
+ if abs, err := filepath.Abs(srcDir); err == nil { // see issue 14282
+ srcDir = abs
+ }
bp, _ := build.Import(path, srcDir, build.FindOnly|build.AllowBinary)
if bp.PkgObj == "" {
return
if pname := pkg.Name(); pname == "" {
pkg.SetName(name)
} else if pname != name {
- p.errorf("%s package name mismatch: %s (given) vs %s (expected)", pname, name)
+ p.errorf("%s package name mismatch: %s (given) vs %s (expected)", id, pname, name)
}
}
return pkg
// For qualified names, the returned package is nil (and not created if
// it doesn't exist yet) unless materializePkg is set (which creates an
// unnamed package with valid package path). In the latter case, a
-// subequent import clause is expected to provide a name for the package.
+// subsequent import clause is expected to provide a name for the package.
//
func (p *parser) parseName(parent *types.Package, materializePkg bool) (pkg *types.Package, name string) {
pkg = parent
}
x := p.parseUnaryExpr(lhs)
- for _, prec := p.tokPrec(); prec >= prec1; prec-- {
- for {
- op, oprec := p.tokPrec()
- if oprec != prec {
- break
- }
- pos := p.expect(op)
- if lhs {
- p.resolve(x)
- lhs = false
- }
- y := p.parseBinaryExpr(false, prec+1)
- x = &ast.BinaryExpr{X: p.checkExpr(x), OpPos: pos, Op: op, Y: p.checkExpr(y)}
+ for {
+ op, oprec := p.tokPrec()
+ if oprec < prec1 {
+ return x
+ }
+ pos := p.expect(op)
+ if lhs {
+ p.resolve(x)
+ lhs = false
}
+ y := p.parseBinaryExpr(false, oprec+1)
+ x = &ast.BinaryExpr{X: p.checkExpr(x), OpPos: pos, Op: op, Y: p.checkExpr(y)}
}
-
- return x
}
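The rewritten parseBinaryExpr above is classic precedence climbing: loop while the next operator binds at least as tightly as prec1, and parse the right operand at oprec+1 so operators of equal precedence associate to the left. A standalone toy sketch of the same pattern for integer arithmetic (hypothetical code, not the go/parser implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// prec is a toy analogue of the parser's tokPrec: it returns the binding
// power of an operator token, or 0 for a non-operator.
func prec(op string) int {
	switch op {
	case "+", "-":
		return 1
	case "*", "/":
		return 2
	}
	return 0
}

func parsePrimary(toks []string, pos *int) int {
	var v int
	fmt.Sscan(toks[*pos], &v)
	*pos++
	return v
}

// parseBinary mirrors the structure of the new parseBinaryExpr: a single
// loop that returns once the next operator binds more loosely than prec1,
// recursing at oprec+1 for the right operand (left associativity).
func parseBinary(toks []string, pos *int, prec1 int) int {
	x := parsePrimary(toks, pos)
	for {
		if *pos >= len(toks) {
			return x
		}
		op := toks[*pos]
		oprec := prec(op)
		if oprec < prec1 {
			return x
		}
		*pos++
		y := parseBinary(toks, pos, oprec+1)
		switch op {
		case "+":
			x += y
		case "-":
			x -= y
		case "*":
			x *= y
		case "/":
			x /= y
		}
	}
}

func eval(expr string) int {
	toks := strings.Fields(expr)
	pos := 0
	return parseBinary(toks, &pos, 1)
}

func main() {
	fmt.Println(eval("2 + 3 * 4 - 5")) // 9
}
```

Evaluating instead of building ast.BinaryExpr nodes keeps the sketch short; the control flow is the point.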
// If lhs is set and the result is an identifier, it is not resolved.
type ErrorHandler func(pos token.Position, msg string)
// A Scanner holds the scanner's internal state while processing
-// a given text. It can be allocated as part of another data
+// a given text. It can be allocated as part of another data
// structure but must be initialized via Init before use.
//
type Scanner struct {
} else {
// an untyped non-constant argument may appear if
// it contains a (yet untyped non-constant) shift
- // epression: convert it to complex128 which will
+ // expression: convert it to complex128 which will
// result in an error (shift of complex value)
check.convertUntyped(x, Typ[Complex128])
// x should be invalid now, but be conservative and check
}
// Files checks the provided files as part of the checker's package.
-func (check *Checker) Files(files []*ast.File) (err error) {
+func (check *Checker) Files(files []*ast.File) error { return check.checkFiles(files) }
+
+func (check *Checker) checkFiles(files []*ast.File) (err error) {
defer check.handleBailout(&err)
check.initFiles(files)
// level untyped constants will return an untyped type rather than the
// respective context-specific type.
//
-func Eval(fset *token.FileSet, pkg *Package, pos token.Pos, expr string) (tv TypeAndValue, err error) {
+func Eval(fset *token.FileSet, pkg *Package, pos token.Pos, expr string) (TypeAndValue, error) {
// determine scope
var scope *Scope
if pkg == nil {
scope = pkg.scope
} else {
// The package scope extent (position information) may be
- // incorrect (files spread accross a wide range of fset
+ // incorrect (files spread across a wide range of fset
// positions) - ignore it and just consider its children
// (file scopes).
for _, fscope := range pkg.scope.children {
// provided (only needed for int/uint sizes).
//
// If rounded != nil, *rounded is set to the rounded value of x for
-// representable floating-point values; it is left alone otherwise.
+// representable floating-point and complex values, and to an Int
+// value for integer values; it is left alone otherwise.
// It is ok to provide the address of the first argument for rounded.
func representableConst(x constant.Value, conf *Config, typ *Basic, rounded *constant.Value) bool {
if x.Kind() == constant.Unknown {
if x.Kind() != constant.Int {
return false
}
+ if rounded != nil {
+ *rounded = x
+ }
if x, ok := constant.Int64Val(x); ok {
switch typ.kind {
case Int:
return
}
// rhs must be within reasonable bounds
- const stupidShift = 1023 - 1 + 52 // so we can express smallestFloat64
+ const shiftBound = 1023 - 1 + 52 // so we can express smallestFloat64
s, ok := constant.Uint64Val(yval)
- if !ok || s > stupidShift {
- check.invalidOp(y.pos(), "stupid shift count %s", y)
+ if !ok || s > shiftBound {
+ check.invalidOp(y.pos(), "invalid shift count %s", y)
x.mode = invalid
return
}
typ := x.typ.Underlying().(*Basic)
// force integer division of integer operands
if op == token.QUO && isInteger(typ) {
- xval = constant.ToInt(xval)
- yval = constant.ToInt(yval)
op = token.QUO_ASSIGN
}
x.val = constant.BinaryOp(xval, op, yval)
// initOrder computes the Info.InitOrder for package variables.
func (check *Checker) initOrder() {
// An InitOrder may already have been computed if a package is
- // built from several calls to (*Checker).Files. Clear it.
+ // built from several calls to (*Checker).Files. Clear it.
check.Info.InitOrder = check.Info.InitOrder[:0]
// compute the object dependency graph and
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func (obj *Var) IsField() bool { return obj.isField }
// A Func represents a declared function, concrete method, or abstract
-// (interface) method. Its Type() is always a *Signature.
+// (interface) method. Its Type() is always a *Signature.
// An abstract method may belong to many interfaces due to embedding.
type Func struct {
object
// pkg; the list is in source order. Package unsafe is excluded.
//
// If pkg was loaded from export data, Imports includes packages that
-// provide package-level objects referenced by pkg. This may be more or
+// provide package-level objects referenced by pkg. This may be more or
// less than the set of packages directly imported by pkg's source code.
func (pkg *Package) Imports() []*Package { return pkg.imports }
// (Per the go/build package dependency tests, we cannot import
// path/filepath and simply use filepath.Dir.)
func dir(path string) string {
- if i := strings.LastIndexAny(path, "/\\"); i >= 0 {
- path = path[:i]
+ if i := strings.LastIndexAny(path, `/\`); i > 0 {
+ return path[:i]
}
- if path == "" {
- path = "."
- }
- return path
+ // i <= 0
+ return "."
}
}
// NewScope returns a new, empty scope contained in the given parent
-// scope, if any. The comment is for debugging only.
+// scope, if any. The comment is for debugging only.
func NewScope(parent *Scope, pos, end token.Pos, comment string) *Scope {
s := &Scope{parent, nil, nil, pos, end, comment}
// don't add children to Universe scope!
}
// typecheck package in directory
- files, err := pkgFilenames(dir)
- if err != nil {
- t.Error(err)
- return
- }
- if files != nil {
- typecheck(t, dir, files)
+ // but ignore files directly under $GOROOT/src (might be temporary test files).
+ if dir != filepath.Join(runtime.GOROOT(), "src") {
+ files, err := pkgFilenames(dir)
+ if err != nil {
+ t.Error(err)
+ return
+ }
+ if files != nil {
+ typecheck(t, dir, files)
+ }
}
// traverse subdirectories, but don't walk into testdata
type A [iota /* ERROR "cannot use iota" */ ]int
-// constant expressions with operands accross different
+// constant expressions with operands across different
// constant declarations must use the right iota values
const (
_c0 = iota
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
make(chan I1) <- i0 /* ERROR cannot use .* in send: missing method foo */
make(chan I1) <- i2 /* ERROR cannot use .* in send: wrong type for method foo */
}
+
+// Check that constants representable as integers are in integer form
+// before being used in operations that are only defined on integers.
+func issue14229() {
+ // from the issue
+ const _ = int64(-1<<63) % 1e6
+
+ // related
+ const (
+ a int = 3
+ b = 4.0
+ _ = a / b
+ _ = a % b
+ _ = b / a
+ _ = b % a
+ )
+}
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
s = 10
_ = 0<<0
_ = 1<<s
- _ = 1<<- /* ERROR "stupid shift" */ 1
- _ = 1<<1075 /* ERROR "stupid shift" */
+ _ = 1<<- /* ERROR "invalid shift" */ 1
+ _ = 1<<1075 /* ERROR "invalid shift" */
_ = 2.0<<1
_ int = 2<<s
_ = float64 /* ERROR "must be integer" */ (0) >> 2
_ = complex64 /* ERROR "must be integer" */ (0) << 3
_ = complex64 /* ERROR "must be integer" */ (0) >> 4
-}
\ No newline at end of file
+}
// function.
//
// For an abstract method, Recv returns the enclosing interface either
-// as a *Named or an *Interface. Due to embedding, an interface may
+// as a *Named or an *Interface. Due to embedding, an interface may
// contain methods whose receiver type is a different interface.
func (s *Signature) Recv() *Var { return s.recv }
//
func def(obj Object) {
name := obj.Name()
- if strings.Index(name, " ") >= 0 {
+ if strings.Contains(name, " ") {
return // nothing to do
}
// fix Obj link for named types
// different context than an earlier pass, there is no single context.
// In the example, there is missing a quote, so it is not clear
// whether {{.}} is meant to be inside a JS string or in a JS value
- // context. The second iteration would produce something like
+ // context. The second iteration would produce something like
//
// <script>var x = ['firstValue,'secondValue]</script>
ErrRangeLoopReentry
// escapeTemplate rewrites the named template, which must be
// associated with t, to guarantee that the output of any of the named
-// templates is properly escaped. If no error is returned, then the named templates have
-// been modified. Otherwise the named templates have been rendered
+// templates is properly escaped. If no error is returned, then the named templates have
+// been modified. Otherwise the named templates have been rendered
// unusable.
func escapeTemplate(tmpl *Template, node parse.Node, name string) error {
e := newEscaper(tmpl)
t.Fatal(err)
}
if len(clone.Templates()) != len(orig.Templates()) {
- t.Fatalf("Invalid lenth of t.Clone().Templates()")
+ t.Fatalf("Invalid length of t.Clone().Templates()")
}
const want = "stuff"
func ModelFunc(f func(Color) Color) Model {
// Note: using *modelFunc as the implementation
// means that callers can still use comparisons
- // like m == RGBAModel. This is not possible if
+ // like m == RGBAModel. This is not possible if
// we use the func value directly, because funcs
// are no longer comparable.
return &modelFunc{f}
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// blockReader parses the block structure of GIF image data, which
// comprises (n, (n bytes)) blocks, with 1 <= n <= 255. It is the
// reader given to the LZW decoder, which is thus immune to the
-// blocking. After the LZW decoder completes, there will be a 0-byte
+// blocking. After the LZW decoder completes, there will be a 0-byte
// block remaining (0, ()), which is consumed when checking that the
// blockReader is exhausted.
type blockReader struct {
// for an image". In practice, though, giflib (a widely used C
// library) does not enforce this, so we also accept lzwr returning
// io.ErrUnexpectedEOF (meaning that the encoded stream hit io.EOF
- // before the LZW decoder saw an explict end code), provided that
+ // before the LZW decoder saw an explicit end code), provided that
// the io.ReadFull call above successfully read len(m.Pix) bytes.
// See https://golang.org/issue/9856 for an example GIF.
if n, err := lzwr.Read(d.tmp[:1]); n != 0 || (err != io.EOF && err != io.ErrUnexpectedEOF) {
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package unix
+
+import (
+ "syscall"
+ "unsafe"
+)
+
+// getentropy(2)'s syscall number, from /usr/src/sys/kern/syscalls.master
+const entropyTrap uintptr = 7
+
+// GetEntropy calls the OpenBSD getentropy system call.
+func GetEntropy(p []byte) error {
+ _, _, errno := syscall.Syscall(entropyTrap,
+ uintptr(unsafe.Pointer(&p[0])),
+ uintptr(len(p)),
+ 0)
+ if errno != 0 {
+ return errno
+ }
+ return nil
+}
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
}
for n, v := range haveNames {
- t.Errorf("value %s (%v) is found while enumerating, but has not been cretaed", n, v)
+ t.Errorf("value %s (%v) is found while enumerating, but has not been created", n, v)
}
}
// read data with short buffer
gotsize, gottype, err = k.GetValue(test.Name, make([]byte, size-1))
if err == nil {
- t.Errorf("GetValue(%s, [%d]byte) should fail, but suceeded", test.Name, size-1)
+ t.Errorf("GetValue(%s, [%d]byte) should fail, but succeeded", test.Name, size-1)
return
}
if err != registry.ErrShortBuffer {
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Reader is the interface that wraps the basic Read method.
//
-// Read reads up to len(p) bytes into p. It returns the number of bytes
-// read (0 <= n <= len(p)) and any error encountered. Even if Read
+// Read reads up to len(p) bytes into p. It returns the number of bytes
+// read (0 <= n <= len(p)) and any error encountered. Even if Read
// returns n < len(p), it may use all of p as scratch space during the call.
// If some data is available but not len(p) bytes, Read conventionally
// returns what is available instead of waiting for more.
//
// When Read encounters an error or end-of-file condition after
// successfully reading n > 0 bytes, it returns the number of
-// bytes read. It may return the (non-nil) error from the same call
+// bytes read. It may return the (non-nil) error from the same call
// or return the error (and n == 0) from a subsequent call.
// An instance of this general case is that a Reader returning
// a non-zero number of bytes at the end of the input stream may
-// return either err == EOF or err == nil. The next Read should
+// return either err == EOF or err == nil. The next Read should
// return 0, EOF.
//
// Callers should always process the n > 0 bytes returned before
-// considering the error err. Doing so correctly handles I/O errors
+// considering the error err. Doing so correctly handles I/O errors
// that happen after reading some bytes and also both of the
// allowed EOF behaviors.
//
// ReaderAt is the interface that wraps the basic ReadAt method.
//
// ReadAt reads len(p) bytes into p starting at offset off in the
-// underlying input source. It returns the number of bytes
+// underlying input source. It returns the number of bytes
// read (0 <= n <= len(p)) and any error encountered.
//
// When ReadAt returns n < len(p), it returns a non-nil error
-// explaining why more bytes were not returned. In this respect,
+// explaining why more bytes were not returned. In this respect,
// ReadAt is stricter than Read.
//
// Even if ReadAt returns n < len(p), it may use all of p as scratch
-// space during the call. If some data is available but not len(p) bytes,
+// space during the call. If some data is available but not len(p) bytes,
// ReadAt blocks until either all the data is available or an error occurs.
// In this respect ReadAt is different from Read.
//
// WriterAt is the interface that wraps the basic WriteAt method.
//
// WriteAt writes len(p) bytes from p to the underlying data stream
-// at offset off. It returns the number of bytes written from p (0 <= n <= len(p))
+// at offset off. It returns the number of bytes written from p (0 <= n <= len(p))
// and any error encountered that caused the write to stop early.
// WriteAt must return a non-nil error if it returns n < len(p).
//
//
// ReadByte reads and returns the next byte from the input.
type ByteReader interface {
- ReadByte() (c byte, err error)
+ ReadByte() (byte, error)
}
// ByteScanner is the interface that adds the UnreadByte method to the
}
// Copy copies from src to dst until either EOF is reached
-// on src or an error occurs. It returns the number of bytes
+// on src or an error occurs. It returns the number of bytes
// copied and the first error encountered while copying, if any.
//
// A successful Copy returns err == nil, not err == EOF.
// TeeReader returns a Reader that writes to w what it reads from r.
// All reads from r performed through it are matched with
-// corresponding writes to w. There is no internal buffering -
+// corresponding writes to w. There is no internal buffering -
// the write must complete before the read completes.
// Any error encountered while writing is reported as a read error.
func TeeReader(r Reader, w Writer) Reader {
}
}
// As initial capacity for readAll, use n + a little extra in case Size is zero,
- // and to avoid another allocation after Read has filled the buffer. The readAll
- // call will read into its allocated internal buffer cheaply. If the size was
+ // and to avoid another allocation after Read has filled the buffer. The readAll
+ // call will read into its allocated internal buffer cheaply. If the size was
// wrong, we'll either waste some space off the end or reallocate as needed, but
// in the overwhelmingly common case we'll get it just right.
return readAll(f, n+bytes.MinRead)
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// If dir is the empty string, TempFile uses the default directory
// for temporary files (see os.TempDir).
// Multiple programs calling TempFile simultaneously
-// will not choose the same file. The caller can use f.Name()
-// to find the pathname of the file. It is the caller's responsibility
+// will not choose the same file. The caller can use f.Name()
+// to find the pathname of the file. It is the caller's responsibility
// to remove the file when no longer needed.
func TempFile(dir, prefix string) (f *os.File, err error) {
if dir == "" {
// TempDir creates a new temporary directory in the directory dir
// with a name beginning with prefix and returns the path of the
-// new directory. If dir is the empty string, TempDir uses the
+// new directory. If dir is the empty string, TempDir uses the
// default directory for temporary files (see os.TempDir).
// Multiple programs calling TempDir simultaneously
-// will not choose the same directory. It is the caller's responsibility
+// will not choose the same directory. It is the caller's responsibility
// to remove the directory when no longer needed.
func TempDir(dir, prefix string) (name string, err error) {
if dir == "" {
}
// MultiReader returns a Reader that's the logical concatenation of
-// the provided input readers. They're read sequentially. Once all
+// the provided input readers. They're read sequentially. Once all
// inputs have returned EOF, Read will return EOF. If any of the readers
// return a non-nil, non-EOF error, Read will return that error.
func MultiReader(readers ...Reader) Reader {
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
)
// A Logger represents an active logging object that generates lines of
-// output to an io.Writer. Each logging operation makes a single call to
-// the Writer's Write method. A Logger can be used simultaneously from
+// output to an io.Writer. Each logging operation makes a single call to
+// the Writer's Write method. A Logger can be used simultaneously from
// multiple goroutines; it guarantees to serialize access to the Writer.
type Logger struct {
mu sync.Mutex // ensures atomic writes; protects the following fields
buf []byte // for accumulating text to write
}
-// New creates a new Logger. The out variable sets the
+// New creates a new Logger. The out variable sets the
// destination to which log data will be written.
// The prefix appears at the beginning of each generated log line.
// The flag argument defines the logging properties.
}
}
-// Output writes the output for a logging event. The string s contains
+// Output writes the output for a logging event. The string s contains
// the text to print after the prefix specified by the flags of the
-// Logger. A newline is appended if the last character of s is not
-// already a newline. Calldepth is used to recover the PC and is
+// Logger. A newline is appended if the last character of s is not
+// already a newline. Calldepth is used to recover the PC and is
// provided for generality, although at the moment on all pre-defined
// paths it will be 2.
func (l *Logger) Output(calldepth int, s string) error {
panic(s)
}
-// Output writes the output for a logging event. The string s contains
+// Output writes the output for a logging event. The string s contains
// the text to print after the prefix specified by the flags of the
-// Logger. A newline is appended if the last character of s is not
-// already a newline. Calldepth is the count of the number of
+// Logger. A newline is appended if the last character of s is not
+// already a newline. Calldepth is the count of the number of
// frames to skip when computing the file name and line number
// if Llongfile or Lshortfile is set; a value of 1 will print the details
// for the caller of Output.
}
// This interface and the separate syslog_unix.go file exist for
-// Solaris support as implemented by gccgo. On Solaris you can not
-// simply open a TCP connection to the syslog daemon. The gccgo
+// Solaris support as implemented by gccgo. On Solaris you cannot
+// simply open a TCP connection to the syslog daemon. The gccgo
// sources have a syslog_solaris.go file that implements unixSyslog to
// return a type that satisfies this interface and simply calls the C
// library syslog function.
conn net.Conn
}
-// New establishes a new connection to the system log daemon. Each
+// New establishes a new connection to the system log daemon. Each
// write to the returned writer sends a log message with the given
// priority and prefix.
-func New(priority Priority, tag string) (w *Writer, err error) {
+func New(priority Priority, tag string) (*Writer, error) {
return Dial("", "", priority, tag)
}
// Dial establishes a connection to a log daemon by connecting to
-// address raddr on the specified network. Each write to the returned
+// address raddr on the specified network. Each write to the returned
// writer sends a log message with the given facility, severity and
// tag.
// If network is empty, Dial will connect to the local syslog server.
// Emerg logs a message with severity LOG_EMERG, ignoring the severity
// passed to New.
-func (w *Writer) Emerg(m string) (err error) {
- _, err = w.writeAndRetry(LOG_EMERG, m)
+func (w *Writer) Emerg(m string) error {
+ _, err := w.writeAndRetry(LOG_EMERG, m)
return err
}
// Alert logs a message with severity LOG_ALERT, ignoring the severity
// passed to New.
-func (w *Writer) Alert(m string) (err error) {
- _, err = w.writeAndRetry(LOG_ALERT, m)
+func (w *Writer) Alert(m string) error {
+ _, err := w.writeAndRetry(LOG_ALERT, m)
return err
}
// Crit logs a message with severity LOG_CRIT, ignoring the severity
// passed to New.
-func (w *Writer) Crit(m string) (err error) {
- _, err = w.writeAndRetry(LOG_CRIT, m)
+func (w *Writer) Crit(m string) error {
+ _, err := w.writeAndRetry(LOG_CRIT, m)
return err
}
// Err logs a message with severity LOG_ERR, ignoring the severity
// passed to New.
-func (w *Writer) Err(m string) (err error) {
- _, err = w.writeAndRetry(LOG_ERR, m)
+func (w *Writer) Err(m string) error {
+ _, err := w.writeAndRetry(LOG_ERR, m)
return err
}
// Warning logs a message with severity LOG_WARNING, ignoring the
// severity passed to New.
-func (w *Writer) Warning(m string) (err error) {
- _, err = w.writeAndRetry(LOG_WARNING, m)
+func (w *Writer) Warning(m string) error {
+ _, err := w.writeAndRetry(LOG_WARNING, m)
return err
}
// Notice logs a message with severity LOG_NOTICE, ignoring the
// severity passed to New.
-func (w *Writer) Notice(m string) (err error) {
- _, err = w.writeAndRetry(LOG_NOTICE, m)
+func (w *Writer) Notice(m string) error {
+ _, err := w.writeAndRetry(LOG_NOTICE, m)
return err
}
// Info logs a message with severity LOG_INFO, ignoring the severity
// passed to New.
-func (w *Writer) Info(m string) (err error) {
- _, err = w.writeAndRetry(LOG_INFO, m)
+func (w *Writer) Info(m string) error {
+ _, err := w.writeAndRetry(LOG_INFO, m)
return err
}
// Debug logs a message with severity LOG_DEBUG, ignoring the severity
// passed to New.
-func (w *Writer) Debug(m string) (err error) {
- _, err = w.writeAndRetry(LOG_DEBUG, m)
+func (w *Writer) Debug(m string) error {
+ _, err := w.writeAndRetry(LOG_DEBUG, m)
return err
}
# This is used by cgo. Default is CXX, or, if that is not set,
# "g++" or "clang++".
#
+# FC: Command line to run to compile Fortran code for GOARCH.
+# This is used by cgo. Default is "gfortran".
+#
# GO_DISTFLAGS: extra flags to provide to "dist bootstrap".
set -e
::
:: CC_FOR_TARGET: Command line to run compile C code for GOARCH.
:: This is used by cgo. Default is CC.
+::
+:: FC: Command line to run to compile Fortran code.
+:: This is used by cgo. Default is "gfortran".
@echo off
// The original C code, the long comment, and the constants
// below are from FreeBSD's /usr/src/lib/msun/src/e_acosh.c
-// and came with this notice. The go code is a simplified
+// and came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The original C code, the long comment, and the constants
// below are from FreeBSD's /usr/src/lib/msun/src/s_asinh.c
-// and came with this notice. The go code is a simplified
+// and came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The original C code, the long comment, and the constants
// below are from FreeBSD's /usr/src/lib/msun/src/e_atanh.c
-// and came with this notice. The go code is a simplified
+// and came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
}
-// Individual bitLen tests. Numbers chosen to examine both sides
+// Individual bitLen tests. Numbers chosen to examine both sides
// of powers-of-two boundaries.
func BenchmarkBitLen0(b *testing.B) { benchmarkBitLenN(b, 0) }
func BenchmarkBitLen1(b *testing.B) { benchmarkBitLenN(b, 1) }
}
if x.form == finite && y.form == finite {
- // x + y (commom case)
+ // x + y (common case)
z.neg = x.neg
if x.neg == y.neg {
// x + y == x + y
if fcount < 0 {
// The mantissa has a "decimal" point ddd.dddd; and
// -fcount is the number of digits to the right of '.'.
- // Adjust relevant exponent accodingly.
+ // Adjust relevant exponent accordingly.
d := int64(fcount)
switch b {
case 10:
// x & -x leaves only the right-most bit set in the word. Let k be the
// index of that bit. Since only a single bit is set, the value is two
// to the power of k. Multiplying by a power of two is equivalent to
- // left shifting, in this case by k bits. The de Bruijn constant is
+ // left shifting, in this case by k bits. The de Bruijn constant is
// such that all six bit, consecutive substrings are distinct.
// Therefore, if we have a left shifted version of this constant we can
// find by how many bits it was shifted by looking at which six bit
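The de Bruijn trick described in the comment above can be sketched in a standalone 32-bit form (constant and table are the standard published values, not taken from this file):

```go
package main

import "fmt"

// deBruijn32's 32 five-bit windows are all distinct, so
// (x&-x)*deBruijn32 >> 27 uniquely identifies the lone set bit.
const deBruijn32 = 0x077CB531

var deBruijn32Lookup = [32]byte{
	0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
	31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9,
}

// trailingZeros32 returns the index of the lowest set bit of x (x != 0).
func trailingZeros32(x uint32) int {
	// x & -x isolates the right-most set bit, i.e. the value 1<<k.
	// Multiplying left-shifts the de Bruijn constant by k; the top
	// five bits of the product then index the precomputed table.
	return int(deBruijn32Lookup[(x&-x)*deBruijn32>>27])
}

func main() {
	fmt.Println(trailingZeros32(8), trailingZeros32(1), trailingZeros32(0x80000000))
}
```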
for j := 0; j < _W; j += n {
if i != len(y)-1 || j != 0 {
// Unrolled loop for significant performance
- // gain. Use go test -bench=".*" in crypto/rsa
+ // gain. Use go test -bench=".*" in crypto/rsa
// to check performance before making changes.
zz = zz.mul(z, z)
zz, z = z, zz
// quotToFloat32 returns the non-negative float32 value
// nearest to the quotient a/b, using round-to-even in
-// halfway cases. It does not mutate its arguments.
+// halfway cases. It does not mutate its arguments.
// Preconditions: b is non-zero; a and b have no common factors.
func quotToFloat32(a, b nat) (f float32, exact bool) {
const (
// quotToFloat64 returns the non-negative float64 value
// nearest to the quotient a/b, using round-to-even in
-// halfway cases. It does not mutate its arguments.
+// halfway cases. It does not mutate its arguments.
// Preconditions: b is non-zero; a and b have no common factors.
func quotToFloat64(a, b nat) (f float64, exact bool) {
const (
)
func ratTok(ch rune) bool {
- return strings.IndexRune("+-/0123456789.eE", ch) >= 0
+ return strings.ContainsRune("+-/0123456789.eE", ch)
}
// Scan is a support routine for fmt.Scanner. It accepts the formats
if err != nil {
return err
}
- if strings.IndexRune("efgEFGv", ch) < 0 {
+ if !strings.ContainsRune("efgEFGv", ch) {
return errors.New("Rat.Scan: invalid verb")
}
if _, ok := z.SetString(string(tok)); !ok {
}
}
-// Test inputs to Rat.SetString. The prefix "long:" causes the test
+// Test inputs to Rat.SetString. The prefix "long:" causes the test
// to be skipped in --test.short mode. (The threshold is about 500us.)
var float64inputs = []string{
// Constants plundered from strconv/testfp.txt.
// Cancelation error in r-x or r+x is avoided by using the
// identity 2 Re w Im w = y.
//
-// Note that -w is also a square root of z. The root chosen
+// Note that -w is also a square root of z. The root chosen
// is always in the right half plane and Im w has the same sign as y.
//
// ACCURACY:
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The original C code and the long comment below are
// from FreeBSD's /usr/src/lib/msun/src/s_erf.c and
-// came with this notice. The go code is a simplified
+// came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
// The original C code, the long comment, and the constants
// below are from FreeBSD's /usr/src/lib/msun/src/e_exp.c
-// and came with this notice. The go code is a simplified
+// and came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The original C code, the long comment, and the constants
// below are from FreeBSD's /usr/src/lib/msun/src/s_expm1.c
-// and came with this notice. The go code is a simplified
+// and came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// DESCRIPTION:
//
-// Returns gamma function of the argument. The result is
+// Returns gamma function of the argument. The result is
// correctly signed, and the sign (+1 or -1) is also
// returned in a global (extern) variable named signgam.
// This variable is also filled in by the logarithmic gamma
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The original C code and the long comment below are
// from FreeBSD's /usr/src/lib/msun/src/e_j0.c and
-// came with this notice. The go code is a simplified
+// came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
// The original C code and the long comment below are
// from FreeBSD's /usr/src/lib/msun/src/e_j1.c and
-// came with this notice. The go code is a simplified
+// came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
// The original C code and the long comment below are
// from FreeBSD's /usr/src/lib/msun/src/e_jn.c and
-// came with this notice. The go code is a simplified
+// came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The original C code and the long comment below are
// from FreeBSD's /usr/src/lib/msun/src/e_lgamma_r.c and
-// came with this notice. The go code is a simplified
+// came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
// The original C code, the long comment, and the constants
// below are from FreeBSD's /usr/src/lib/msun/src/e_log.c
-// and came with this notice. The go code is a simpler
+// and came with this notice. The go code is a simpler
// version of the original C.
//
// ====================================================
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The original C code, the long comment, and the constants
// below are from FreeBSD's /usr/src/lib/msun/src/s_log1p.c
-// and came with this notice. The go code is a simplified
+// and came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package math
// Modf returns integer and fractional floating-point numbers
-// that sum to f. Both values have the same sign as f.
+// that sum to f. Both values have the same sign as f.
//
// Special cases are:
// Modf(±Inf) = ±Inf, NaN
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The original C code and the comment below are from
// FreeBSD's /usr/src/lib/msun/src/e_remainder.c and came
-// with this notice. The go code is a simplified version of
+// with this notice. The go code is a simplified version of
// the original C.
//
// ====================================================
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The original C code and the long comment below are
// from FreeBSD's /usr/src/lib/msun/src/e_sqrt.c and
-// came with this notice. The go code is a simplified
+// came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
// equal to huge for some floating point number "huge" and "tiny".
//
//
-// Notes: Rounding mode detection omitted. The constants "mask", "shift",
+// Notes: Rounding mode detection omitted. The constants "mask", "shift",
// and "bias" are found in src/math/bits.go
// Sqrt returns the square root of x.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
var currentLen, last, runeLen int
for i := 0; i < len(s); i += runeLen {
- // Multi-byte characters must not be split accross encoded-words.
+ // Multi-byte characters must not be split across encoded-words.
// See RFC 2047, section 5.3.
_, runeLen = utf8.DecodeRuneInString(s[i:])
var currentLen, runeLen int
for i := 0; i < len(s); i += runeLen {
b := s[i]
- // Multi-byte characters must not be split accross encoded-words.
+ // Multi-byte characters must not be split across encoded-words.
// See RFC 2047, section 5.3.
var encLen int
if b >= ' ' && b <= '~' && b != '=' && b != '?' && b != '_' {
dec := new(WordDecoder)
for i := 0; i < b.N; i++ {
- dec.Decode("=?utf-8?q?=C2=A1Hola,_se=C3=B1or!?=")
+ dec.DecodeHeader("=?utf-8?q?=C2=A1Hola,_se=C3=B1or!?=")
}
}
// isTSpecial reports whether rune is in 'tspecials' as defined by RFC
// 1521 and RFC 2045.
func isTSpecial(r rune) bool {
- return strings.IndexRune(`()<>@,;:\"/[]?=`, r) != -1
+ return strings.ContainsRune(`()<>@,;:\"/[]?=`, r)
}
// isTokenChar reports whether rune is in 'token' as defined by RFC
// consumeToken consumes a token from the beginning of provided
// string, per RFC 2045 section 5.1 (referenced from 2183), and return
-// the token consumed and the rest of the string. Returns ("", v) on
+// the token consumed and the rest of the string. Returns ("", v) on
// failure to consume at least one character.
func consumeToken(v string) (token, rest string) {
notPos := strings.IndexFunc(v, isNotTokenChar)
// consumeValue consumes a "value" per RFC 2045, where a value is
// either a 'token' or a 'quoted-string'. On success, consumeValue
// returns the value consumed (and de-quoted/escaped, if a
-// quoted-string) and the rest of the string. On failure, returns
+// quoted-string) and the rest of the string. On failure, returns
// ("", v).
func consumeValue(v string) (value, rest string) {
if v == "" {
// a Content-Disposition of "form-data".
// It stores up to maxMemory bytes of the file parts in memory
// and the remainder on disk in temporary files.
-func (r *Reader) ReadForm(maxMemory int64) (f *Form, err error) {
+func (r *Reader) ReadForm(maxMemory int64) (*Form, error) {
+ return r.readForm(maxMemory)
+}
+
+func (r *Reader) readForm(maxMemory int64) (_ *Form, err error) {
form := &Form{make(map[string][]string), make(map[string][]*FileHeader)}
defer func() {
if err != nil {
}()
if p.buffer.Len() >= len(d) {
// Internal buffer of unconsumed data is large enough for
- // the read request. No need to parse more at the moment.
+ // the read request. No need to parse more at the moment.
return p.buffer.Read(d)
}
peek, err := p.mr.bufReader.Peek(peekBufferSize) // TODO(bradfitz): add buffer size accessor
// Look for an immediate empty part without a leading \r\n
- // before the boundary separator. Some MIME code makes empty
+ // before the boundary separator. Some MIME code makes empty
// parts like this. Most browsers, however, write the \r\n
// before the subsequent boundary even for empty parts and
// won't hit this path.
}
// Reader is an iterator over parts in a MIME multipart body.
-// Reader's underlying parser consumes its input as needed. Seeking
+// Reader's underlying parser consumes its input as needed. Seeking
// isn't supported.
type Reader struct {
bufReader *bufio.Reader
rest = skipLWSPChar(rest)
	// On the first part, see if our lines are ending in \n instead of \r\n
- // and switch into that mode if so. This is a violation of the spec,
+ // and switch into that mode if so. This is a violation of the spec,
// but occurs in practice.
if mr.partsRead == 0 && len(rest) == 1 && rest[0] == '\n' {
mr.nl = mr.nl[1:]
var parseTests = []parseTest{
// Actual body from App Engine on a blob upload. The final part (the
// Content-Type: message/external-body) is what App Engine replaces
- // the uploaded file with. The other form fields (prefixed with
- // "other" in their form-data name) are unchanged. A bug was
+ // the uploaded file with. The other form fields (prefixed with
+ // "other" in their form-data name) are unchanged. A bug was
// reported with blob uploads failing when the other fields were
// empty. This was the MIME POST body that previously failed.
{
// (e.g., https://golang.org/issue/13283). Glibc instead only
// uses CommonPrefixLen for IPv4 when the source and destination
// addresses are on the same subnet, but that requires extra
- // work to find the netmask for our source addresses. As a
+ // work to find the netmask for our source addresses. As a
// simpler heuristic, we limit its use to when the source and
// destination belong to the same special purpose block.
if da4 {
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
hostname = hostname[:len(hostname)-1]
}
if stringsHasSuffixFold(hostname, ".local") {
- // Per RFC 6762, the ".local" TLD is special. And
+ // Per RFC 6762, the ".local" TLD is special. And
// because Go's native resolver doesn't do mDNS or
// similar local resolution mechanisms, assume that
// libc might (via Avahi, etc) and use cgo.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
import (
"errors"
+ "runtime"
"time"
)
finalDeadline: finalDeadline,
}
+ // DualStack mode requires that dialTCP support cancelation. This is
+ // not available on plan9 (golang.org/issue/11225), so we ignore it.
var primaries, fallbacks addrList
- if d.DualStack && network == "tcp" {
+ if d.DualStack && network == "tcp" && runtime.GOOS != "plan9" {
primaries, fallbacks = addrs.partition(isIPv4)
} else {
primaries = addrs
if len(fallbacks) == 0 {
// dialParallel can accept an empty fallbacks list,
// but this shortcut avoids the goroutine/channel overhead.
- c, err = dialSerial(ctx, primaries, nil)
+ c, err = dialSerial(ctx, primaries, ctx.Cancel)
} else {
- c, err = dialParallel(ctx, primaries, fallbacks)
+ c, err = dialParallel(ctx, primaries, fallbacks, ctx.Cancel)
}
if d.KeepAlive > 0 && err == nil {
// head start. It returns the first established connection and
// closes the others. Otherwise it returns an error from the first
// primary address.
-func dialParallel(ctx *dialContext, primaries, fallbacks addrList) (Conn, error) {
- results := make(chan dialResult) // unbuffered, so dialSerialAsync can detect race loss & cleanup
+func dialParallel(ctx *dialContext, primaries, fallbacks addrList, userCancel <-chan struct{}) (Conn, error) {
+ results := make(chan dialResult, 2)
cancel := make(chan struct{})
- defer close(cancel)
// Spawn the primary racer.
go dialSerialAsync(ctx, primaries, nil, cancel, results)
fallbackTimer := time.NewTimer(ctx.fallbackDelay())
go dialSerialAsync(ctx, fallbacks, fallbackTimer, cancel, results)
- var primaryErr error
- for nracers := 2; nracers > 0; nracers-- {
- res := <-results
- // If we're still waiting for a connection, then hasten the delay.
- // Otherwise, disable the Timer and let cancel take over.
- if fallbackTimer.Stop() && res.error != nil {
- fallbackTimer.Reset(0)
- }
- if res.error == nil {
- return res.Conn, nil
+ // Wait for both racers to succeed or fail.
+ var primaryResult, fallbackResult dialResult
+ for !primaryResult.done || !fallbackResult.done {
+ select {
+ case <-userCancel:
+ // Forward an external cancelation request.
+ if cancel != nil {
+ close(cancel)
+ cancel = nil
+ }
+ userCancel = nil
+ case res := <-results:
+ // Drop the result into its assigned bucket.
+ if res.primary {
+ primaryResult = res
+ } else {
+ fallbackResult = res
+ }
+ // On success, cancel the other racer (if one exists.)
+ if res.error == nil && cancel != nil {
+ close(cancel)
+ cancel = nil
+ }
+ // If the fallbackTimer was pending, then either we've canceled the
+ // fallback because we no longer want it, or we haven't canceled yet
+ // and therefore want it to wake up immediately.
+ if fallbackTimer.Stop() && cancel != nil {
+ fallbackTimer.Reset(0)
+ }
}
- if res.primary {
- primaryErr = res.error
+ }
+
+ // Return, in order of preference:
+ // 1. The primary connection (but close the other if we got both.)
+ // 2. The fallback connection.
+ // 3. The primary error.
+ if primaryResult.error == nil {
+ if fallbackResult.error == nil {
+ fallbackResult.Conn.Close()
}
+ return primaryResult.Conn, nil
+ } else if fallbackResult.error == nil {
+ return fallbackResult.Conn, nil
+ } else {
+ return nil, primaryResult.error
}
- return nil, primaryErr
}
type dialResult struct {
Conn
error
primary bool
+ done bool
}
// dialSerialAsync runs dialSerial after some delay, and returns the
select {
case <-timer.C:
case <-cancel:
- return
+ // dialSerial will immediately return errCanceled in this case.
}
}
c, err := dialSerial(ctx, ras, cancel)
- select {
- case results <- dialResult{c, err, timer == nil}:
- // We won the race.
- case <-cancel:
- // The other goroutine won the race.
- if c != nil {
- c.Close()
- }
- }
+ results <- dialResult{Conn: c, error: err, primary: timer == nil, done: true}
}
// dialSerial connects to a list of addresses in sequence, returning
break
}
- // dialTCP does not support cancelation (see golang.org/issue/11225),
- // so if cancel fires, we'll continue trying to connect until the next
- // timeout, or return a spurious connection for the caller to close.
+ // If this dial is canceled, the implementation is expected to complete
+ // quickly, but it's still possible that we could return a spurious Conn,
+ // which the caller must Close.
dialer := func(d time.Time) (Conn, error) {
- return dialSingle(ctx, ra, d)
+ return dialSingle(ctx, ra, d, cancel)
}
c, err := dial(ctx.network, ra, dialer, partialDeadline)
if err == nil {
// dialSingle attempts to establish and returns a single connection to
// the destination address. This must be called through the OS-specific
// dial function, because some OSes don't implement the deadline feature.
-func dialSingle(ctx *dialContext, ra Addr, deadline time.Time) (c Conn, err error) {
+func dialSingle(ctx *dialContext, ra Addr, deadline time.Time, cancel <-chan struct{}) (c Conn, err error) {
la := ctx.LocalAddr
if la != nil && la.Network() != ra.Network() {
return nil, &OpError{Op: "dial", Net: ctx.network, Source: la, Addr: ra, Err: errors.New("mismatched local address type " + la.Network())}
switch ra := ra.(type) {
case *TCPAddr:
la, _ := la.(*TCPAddr)
- c, err = testHookDialTCP(ctx.network, la, ra, deadline, ctx.Cancel)
+ c, err = testHookDialTCP(ctx.network, la, ra, deadline, cancel)
case *UDPAddr:
la, _ := la.(*UDPAddr)
c, err = dialUDP(ctx.network, la, ra, deadline)
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
t.Skip("both IPv4 and IPv6 are required")
}
+ before := sw.Sockets()
origTestHookLookupIP := testHookLookupIP
defer func() { testHookLookupIP = origTestHookLookupIP }()
testHookLookupIP = lookupLocalhost
if err != nil {
t.Fatal(err)
}
- defer dss.teardown()
if err := dss.buildup(handler); err != nil {
+ dss.teardown()
t.Fatal(err)
}
- before := sw.Sockets()
- const T = 100 * time.Millisecond
const N = 10
var wg sync.WaitGroup
wg.Add(N)
- d := &Dialer{DualStack: true, Timeout: T}
+ d := &Dialer{DualStack: true, Timeout: 100 * time.Millisecond}
for i := 0; i < N; i++ {
go func() {
defer wg.Done()
}()
}
wg.Wait()
- time.Sleep(2 * T) // wait for the dial racers to stop
+ dss.teardown()
after := sw.Sockets()
if len(after) != len(before) {
t.Errorf("got %d; want %d", len(after), len(before))
// expected to hang until the timeout elapses. These addresses are reserved
// for benchmarking by RFC 6890.
const (
- slowDst4 = "192.18.0.254"
- slowDst6 = "2001:2::254"
- slowTimeout = 1 * time.Second
+ slowDst4 = "198.18.0.254"
+ slowDst6 = "2001:2::254"
)
// In some environments, the slow IPs may be explicitly unreachable, and fail
func slowDialTCP(net string, laddr, raddr *TCPAddr, deadline time.Time, cancel <-chan struct{}) (*TCPConn, error) {
c, err := dialTCP(net, laddr, raddr, deadline, cancel)
if ParseIP(slowDst4).Equal(raddr.IP) || ParseIP(slowDst6).Equal(raddr.IP) {
- time.Sleep(deadline.Sub(time.Now()))
+ select {
+ case <-cancel:
+ case <-time.After(deadline.Sub(time.Now())):
+ }
}
return c, err
}
if !supportsIPv4 || !supportsIPv6 {
t.Skip("both IPv4 and IPv6 are required")
}
+ if runtime.GOOS == "plan9" {
+ t.Skip("skipping on plan9; cannot cancel dialTCP, golang.org/issue/11225")
+ }
closedPortDelay, expectClosedPortDelay := dialClosedPort()
if closedPortDelay > expectClosedPortDelay {
const fallbackDelay = 200 * time.Millisecond
// Some cases will run quickly when "connection refused" is fast,
- // or trigger the fallbackDelay on Windows. This value holds the
+ // or trigger the fallbackDelay on Windows. This value holds the
// lesser of the two delays.
var closedPortOrFallbackDelay time.Duration
if closedPortDelay < fallbackDelay {
fallbacks := makeAddrs(tt.fallbacks, dss.port)
d := Dialer{
FallbackDelay: fallbackDelay,
- Timeout: slowTimeout,
}
ctx := &dialContext{
Dialer: d,
finalDeadline: d.deadline(time.Now()),
}
startTime := time.Now()
- c, err := dialParallel(ctx, primaries, fallbacks)
+ c, err := dialParallel(ctx, primaries, fallbacks, nil)
elapsed := time.Now().Sub(startTime)
if c != nil {
} else if !(elapsed <= expectElapsedMax) {
t.Errorf("#%d: got %v; want <= %v", i, elapsed, expectElapsedMax)
}
+
+ // Repeat each case, ensuring that it can be canceled quickly.
+ cancel := make(chan struct{})
+ var wg sync.WaitGroup
+ wg.Add(1)
+ go func() {
+ time.Sleep(5 * time.Millisecond)
+ close(cancel)
+ wg.Done()
+ }()
+ startTime = time.Now()
+ c, err = dialParallel(ctx, primaries, fallbacks, cancel)
+ if c != nil {
+ c.Close()
+ }
+ elapsed = time.Now().Sub(startTime)
+ if elapsed > 100*time.Millisecond {
+ t.Errorf("#%d (cancel): got %v; want <= 100ms", i, elapsed)
+ }
+ wg.Wait()
}
- // Wait for any slowDst4/slowDst6 connections to timeout.
- time.Sleep(slowTimeout * 3 / 2)
}
func lookupSlowFast(fn func(string) ([]IPAddr, error), host string) ([]IPAddr, error) {
{true, 200 * time.Millisecond, 200 * time.Millisecond},
// The default is 300ms.
{true, 0, 300 * time.Millisecond},
- // This case is last, in order to wait for hanging slowDst6 connections.
- {false, 0, slowTimeout},
}
handler := func(dss *dualStackServer, ln Listener) {
}
for i, tt := range testCases {
- d := &Dialer{DualStack: tt.dualstack, FallbackDelay: tt.delay, Timeout: slowTimeout}
+ d := &Dialer{DualStack: tt.dualstack, FallbackDelay: tt.delay}
startTime := time.Now()
c, err := d.Dial("tcp", JoinHostPort("slow6loopback4", dss.port))
}
}
-func TestDialSerialAsyncSpuriousConnection(t *testing.T) {
+func TestDialParallelSpuriousConnection(t *testing.T) {
+ if !supportsIPv4 || !supportsIPv6 {
+ t.Skip("both IPv4 and IPv6 are required")
+ }
if runtime.GOOS == "plan9" {
- t.Skip("skipping on plan9; no deadline support, golang.org/issue/11932")
+ t.Skip("skipping on plan9; cannot cancel dialTCP, golang.org/issue/11225")
+ }
+
+ var wg sync.WaitGroup
+ wg.Add(2)
+ handler := func(dss *dualStackServer, ln Listener) {
+ // Accept one connection per address.
+ c, err := ln.Accept()
+ if err != nil {
+ t.Fatal(err)
+ }
+ // The client should close itself, without sending data.
+ c.SetReadDeadline(time.Now().Add(1 * time.Second))
+ var b [1]byte
+ if _, err := c.Read(b[:]); err != io.EOF {
+ t.Errorf("got %v; want %v", err, io.EOF)
+ }
+ c.Close()
+ wg.Done()
}
- ln, err := newLocalListener("tcp")
+ dss, err := newDualStackServer([]streamListener{
+ {network: "tcp4", address: "127.0.0.1"},
+ {network: "tcp6", address: "::1"},
+ })
if err != nil {
t.Fatal(err)
}
- defer ln.Close()
+ defer dss.teardown()
+ if err := dss.buildup(handler); err != nil {
+ t.Fatal(err)
+ }
+
+ const fallbackDelay = 100 * time.Millisecond
- d := Dialer{}
+ origTestHookDialTCP := testHookDialTCP
+ defer func() { testHookDialTCP = origTestHookDialTCP }()
+ testHookDialTCP = func(net string, laddr, raddr *TCPAddr, deadline time.Time, cancel <-chan struct{}) (*TCPConn, error) {
+ // Sleep long enough for Happy Eyeballs to kick in, and inhibit cancelation.
+ // This forces dialParallel to juggle two successful connections.
+ time.Sleep(fallbackDelay * 2)
+ cancel = nil
+ return dialTCP(net, laddr, raddr, deadline, cancel)
+ }
+
+ d := Dialer{
+ FallbackDelay: fallbackDelay,
+ }
ctx := &dialContext{
Dialer: d,
network: "tcp",
finalDeadline: d.deadline(time.Now()),
}
- results := make(chan dialResult)
- cancel := make(chan struct{})
-
- // Spawn a connection in the background.
- go dialSerialAsync(ctx, addrList{ln.Addr()}, nil, cancel, results)
+ makeAddr := func(ip string) addrList {
+ addr, err := ResolveTCPAddr("tcp", JoinHostPort(ip, dss.port))
+ if err != nil {
+ t.Fatal(err)
+ }
+ return addrList{addr}
+ }
- // Receive it at the server.
- c, err := ln.Accept()
+ // dialParallel returns one connection (and closes the other.)
+ c, err := dialParallel(ctx, makeAddr("127.0.0.1"), makeAddr("::1"), nil)
if err != nil {
t.Fatal(err)
}
- defer c.Close()
-
- // Tell dialSerialAsync that someone else won the race.
- close(cancel)
+ c.Close()
- // The connection should close itself, without sending data.
- c.SetReadDeadline(time.Now().Add(1 * time.Second))
- var b [1]byte
- if _, err := c.Read(b[:]); err != io.EOF {
- t.Errorf("got %v; want %v", err, io.EOF)
- }
+ // The server should've seen both connections.
+ wg.Wait()
}
func TestDialerPartialDeadline(t *testing.T) {
c.Close()
}
}
- time.Sleep(timeout * 3 / 2) // wait for the dial racers to stop
}
func TestDialerKeepAlive(t *testing.T) {
}
if dns.rcode != dnsRcodeSuccess {
// None of the error codes make sense
- // for the query we sent. If we didn't get
+ // for the query we sent. If we didn't get
// a name error and we didn't get success,
// the server is behaving incorrectly or
// having temporary trouble.
return ok
}
-// absDomainName returns an absoulte domain name which ends with a
+// absDomainName returns an absolute domain name which ends with a
// trailing dot to match pure Go reverse resolver and all other lookup
// routines.
// See golang.org/issue/12189.
// time to recheck resolv.conf.
ch chan struct{} // guards lastChecked and modTime
lastChecked time.Time // last time resolv.conf was checked
- modTime time.Time // time of resolv.conf modification
mu sync.RWMutex // protects dnsConfig
dnsConfig *dnsConfig // parsed resolv.conf structure used in lookups
// init initializes conf and is only called via conf.initOnce.
func (conf *resolverConfig) init() {
- // Set dnsConfig, modTime, and lastChecked so we don't parse
+ // Set dnsConfig and lastChecked so we don't parse
// resolv.conf twice the first time.
conf.dnsConfig = systemConf().resolv
if conf.dnsConfig == nil {
conf.dnsConfig = dnsReadConfig("/etc/resolv.conf")
}
-
- if fi, err := os.Stat("/etc/resolv.conf"); err == nil {
- conf.modTime = fi.ModTime()
- }
conf.lastChecked = time.Now()
// Prepare ch so that only one update of resolverConfig may
}
conf.lastChecked = now
+ var mtime time.Time
if fi, err := os.Stat(name); err == nil {
- if fi.ModTime().Equal(conf.modTime) {
- return
- }
- conf.modTime = fi.ModTime()
- } else {
- // If modTime wasn't set prior, assume nothing has changed.
- if conf.modTime.IsZero() {
- return
- }
- conf.modTime = time.Time{}
+ mtime = fi.ModTime()
+ }
+ if mtime.Equal(conf.dnsConfig.mtime) {
+ return
}
dnsConf := dnsReadConfig(name)
// Name resolution APIs and libraries should recognize the
// followings as special and should not send any queries.
- // Though, we test those names here for verifying nagative
+ // Though, we test those names here for verifying negative
// answers at DNS query-response interaction level.
{"localhost.", dnsTypeALL, dnsRcodeNameError},
{"invalid.", dnsTypeALL, dnsRcodeNameError},
return err
}
f.Close()
- if err := conf.forceUpdate(conf.path); err != nil {
+ if err := conf.forceUpdate(conf.path, time.Now().Add(time.Hour)); err != nil {
return err
}
return nil
}
-func (conf *resolvConfTest) forceUpdate(name string) error {
+func (conf *resolvConfTest) forceUpdate(name string, lastChecked time.Time) error {
dnsConf := dnsReadConfig(name)
conf.mu.Lock()
conf.dnsConfig = dnsConf
conf.mu.Unlock()
for i := 0; i < 5; i++ {
if conf.tryAcquireSema() {
- conf.lastChecked = time.Time{}
+ conf.lastChecked = lastChecked
conf.releaseSema()
return nil
}
}
func (conf *resolvConfTest) teardown() error {
- err := conf.forceUpdate("/etc/resolv.conf")
+ err := conf.forceUpdate("/etc/resolv.conf", time.Time{})
os.RemoveAll(conf.dir)
return err
}
t.Error(err)
continue
}
- conf.tryUpdate(conf.path)
addrs, err := goLookupIP(tt.name)
if err != nil {
- if err, ok := err.(*DNSError); !ok || (err.Name != tt.error.(*DNSError).Name || err.Server != tt.error.(*DNSError).Server || err.IsTimeout != tt.error.(*DNSError).IsTimeout) {
+ // This test uses external network connectivity.
+ // We need to take care with errors on both
+ // DNS message exchange layer and DNS
+ // transport layer because goLookupIP may fail
+ // when the IP connectivity on the node under test
+ // gets lost during its run.
+ if err, ok := err.(*DNSError); !ok || tt.error != nil && (err.Name != tt.error.(*DNSError).Name || err.Server != tt.error.(*DNSError).Server || err.IsTimeout != tt.error.(*DNSError).IsTimeout) {
t.Errorf("got %v; want %v", err, tt.error)
}
continue
if err := conf.writeAndUpdate([]string{}); err != nil {
t.Fatal(err)
}
- conf.tryUpdate(conf.path)
// Redirect host file lookups.
defer func(orig string) { testHookHostsPath = orig }(testHookHostsPath)
testHookHostsPath = "testdata/hosts"
for _, order := range []hostLookupOrder{hostLookupFilesDNS, hostLookupDNSFiles} {
name := fmt.Sprintf("order %v", order)
- // First ensure that we get an error when contacting a non-existant host.
+ // First ensure that we get an error when contacting a non-existent host.
_, err := goLookupIPOrder("notarealhost", order)
if err == nil {
t.Errorf("%s: expected error while looking up name not in hosts file", name)
package net
+import "time"
+
var defaultNS = []string{"127.0.0.1", "::1"}
type dnsConfig struct {
- servers []string // servers to use
- search []string // suffixes to append to local name
- ndots int // number of dots in name to trigger absolute lookup
- timeout int // seconds before giving up on packet
- attempts int // lost packets before giving up on server
- rotate bool // round robin among servers
- unknownOpt bool // anything unknown was encountered
- lookup []string // OpenBSD top-level database "lookup" order
- err error // any error that occurs during open of resolv.conf
+ servers []string // servers to use
+ search []string // suffixes to append to local name
+ ndots int // number of dots in name to trigger absolute lookup
+ timeout int // seconds before giving up on packet
+ attempts int // lost packets before giving up on server
+ rotate bool // round robin among servers
+ unknownOpt bool // anything unknown was encountered
+ lookup []string // OpenBSD top-level database "lookup" order
+ err error // any error that occurs during open of resolv.conf
+ mtime time.Time // time of resolv.conf modification
}
// See resolv.conf(5) on a Linux machine.
return conf
}
defer file.close()
+ if fi, err := file.file.Stat(); err == nil {
+ conf.mtime = fi.ModTime()
+ } else {
+ conf.servers = defaultNS
+ conf.err = err
+ return conf
+ }
for line, ok := file.readLine(); ok; line, ok = file.readLine() {
if len(line) > 0 && (line[0] == ';' || line[0] == '#') {
// comment.
case "nameserver": // add one name server
if len(f) > 1 && len(conf.servers) < 3 { // small, but the standard limit
// One more check: make sure server name is
- // just an IP address. Otherwise we need DNS
+ // just an IP address. Otherwise we need DNS
// to look it up.
if parseIPv4(f[1]) != nil {
conf.servers = append(conf.servers, f[1])
"os"
"reflect"
"testing"
+ "time"
)
var dnsReadConfigTests = []struct {
if conf.err != nil {
t.Fatal(conf.err)
}
+ conf.mtime = time.Time{}
if !reflect.DeepEqual(conf, tt.want) {
t.Errorf("%s:\ngot: %+v\nwant: %+v", tt.name, conf, tt.want)
}
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// DNS packet assembly. See RFC 1035.
+// DNS packet assembly. See RFC 1035.
//
// This is intended to support name resolution during Dial.
// It doesn't have to be blazing fast.
// generic pack/unpack routines.
//
// TODO(rsc): There are enough names defined in this file that they're all
-// prefixed with dns. Perhaps put this in its own package later.
+// prefixed with dns. Perhaps put this in its own package later.
package net
if !f(&txt, "Txt", "") {
return false
}
- // more bytes than rr.Hdr.Rdlength said there woudld be
+ // more bytes than rr.Hdr.Rdlength said there would be
if rr.Hdr.Rdlength-n < uint16(len(txt))+1 {
return false
}
// All the packers and unpackers take a (msg []byte, off int)
// and return (off1 int, ok bool). If they return ok==false, they
// also return off1==len(msg), so that the next unpacker will
-// also fail. This lets us avoid checks of ok until the end of a
+// also fail. This lets us avoid checks of ok until the end of a
// packing sequence.
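The ok/off1 convention described above can be sketched in isolation. The snippet below is a minimal illustration, not the actual net package code; `packByte` is a hypothetical helper:

```go
package main

import "fmt"

// packByte appends one byte to msg, following the same convention as
// the DNS packers: on failure it returns off1 == len(msg), so every
// subsequent packer also fails, and ok need only be checked once at
// the end of a packing sequence.
func packByte(b byte, msg []byte, off int) (off1 int, ok bool) {
	if off >= len(msg) {
		return len(msg), false
	}
	msg[off] = b
	return off + 1, true
}

func main() {
	msg := make([]byte, 2)
	off, ok := packByte(1, msg, 0)
	off, ok = packByte(2, msg, off)
	off, ok = packByte(3, msg, off) // out of room: off stays pinned at len(msg)
	fmt.Println(off, ok)
}
```

Because a failed packer pins the offset to `len(msg)`, the single `ok` check after the last call is sufficient to detect a failure anywhere in the chain.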
// Map of constructors for each RR wire type.
// Pack a domain name s into msg[off:].
// Domain names are a sequence of counted strings
-// split at the dots. They end with a zero-length string.
+// split at the dots. They end with a zero-length string.
func packDomainName(s string, msg []byte, off int) (off1 int, ok bool) {
// Add trailing dot to canonicalize name.
if n := len(s); n == 0 || s[n-1] != '.' {
s += "."
}
+ // Allow root domain.
+ if s == "." {
+ msg[off] = 0
+ off++
+ return off, true
+ }
+
// Each dot ends a segment of the name.
// We trade each dot byte for a length byte.
// There is also a trailing zero.
if i-begin >= 1<<6 { // top two bits of length must be clear
return len(msg), false
}
+ if i-begin == 0 {
+ return len(msg), false
+ }
+
msg[off] = byte(i - begin)
off++
+
for j := begin; j < i; j++ {
msg[off] = s[j]
off++
// In addition to the simple sequences of counted strings above,
// domain names are allowed to refer to strings elsewhere in the
// packet, to avoid repeating common suffixes when returning
-// many entries in a single domain. The pointers are marked
-// by a length byte with the top two bits set. Ignoring those
+// many entries in a single domain. The pointers are marked
+// by a length byte with the top two bits set. Ignoring those
// two bits, that byte and the next give a 14 bit offset from msg[0]
// where we should pick up the trail.
// Note that if we jump elsewhere in the packet,
return "", len(msg), false
}
}
+ if len(s) == 0 {
+ s = "."
+ }
if ptr == 0 {
off1 = off
}
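The compression-pointer format described in the comment above (a length byte with the top two bits set, whose remaining 14 bits, together with the next byte, give an offset from msg[0]) can be sketched as a standalone encoder. `makePointer` is a hypothetical name for illustration only:

```go
package main

import "fmt"

// makePointer encodes a DNS name-compression pointer to offset off,
// per RFC 1035 section 4.1.4: two bytes whose top two bits are 11
// and whose remaining 14 bits hold the offset from the start of the
// DNS message.
func makePointer(off int) [2]byte {
	return [2]byte{0xC0 | byte(off>>8), byte(off)}
}

func main() {
	// Point at the first name in the message, which starts right
	// after the fixed 12-byte DNS header.
	p := makePointer(12)
	fmt.Printf("%#x %#x\n", p[0], p[1])
}
```

The 14-bit width caps pointer targets at offset 16383, which is why compression only helps within a single message.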
// Pack it in: header and then the pieces.
off := 0
off, ok = packStruct(&dh, msg, off)
+ if !ok {
+ return nil, false
+ }
for i := 0; i < len(question); i++ {
off, ok = packStruct(&question[i], msg, off)
+ if !ok {
+ return nil, false
+ }
}
for i := 0; i < len(answer); i++ {
off, ok = packRR(answer[i], msg, off)
+ if !ok {
+ return nil, false
+ }
}
for i := 0; i < len(ns); i++ {
off, ok = packRR(ns[i], msg, off)
+ if !ok {
+ return nil, false
+ }
}
for i := 0; i < len(extra); i++ {
off, ok = packRR(extra[i], msg, off)
- }
- if !ok {
- return nil, false
+ if !ok {
+ return nil, false
+ }
}
return msg[0:off], true
}
for i := 0; i < len(dns.question); i++ {
off, ok = unpackStruct(&dns.question[i], msg, off)
+ if !ok {
+ return false
+ }
}
for i := 0; i < int(dh.Ancount); i++ {
rec, off, ok = unpackRR(msg, off)
"testing"
)
+func TestStructPackUnpack(t *testing.T) {
+ want := dnsQuestion{
+ Name: ".",
+ Qtype: dnsTypeA,
+ Qclass: dnsClassINET,
+ }
+ buf := make([]byte, 50)
+ n, ok := packStruct(&want, buf, 0)
+ if !ok {
+ t.Fatal("packing failed")
+ }
+ buf = buf[:n]
+ got := dnsQuestion{}
+ n, ok = unpackStruct(&got, buf, 0)
+ if !ok {
+ t.Fatal("unpacking failed")
+ }
+ if n != len(buf) {
+		t.Errorf("unpacked different amount than packed: got n = %d, want = %d", n, len(buf))
+ }
+ if !reflect.DeepEqual(got, want) {
+ t.Errorf("got = %+v, want = %+v", got, want)
+ }
+}
+
+func TestDomainNamePackUnpack(t *testing.T) {
+ tests := []struct {
+ in string
+ want string
+ ok bool
+ }{
+ {"", ".", true},
+ {".", ".", true},
+ {"google..com", "", false},
+ {"google.com", "google.com.", true},
+ {"google..com.", "", false},
+ {"google.com.", "google.com.", true},
+ {".google.com.", "", false},
+ {"www..google.com.", "", false},
+ {"www.google.com.", "www.google.com.", true},
+ }
+
+ for _, test := range tests {
+ buf := make([]byte, 30)
+ n, ok := packDomainName(test.in, buf, 0)
+ if ok != test.ok {
+ t.Errorf("packing of %s: got ok = %t, want = %t", test.in, ok, test.ok)
+ continue
+ }
+ if !test.ok {
+ continue
+ }
+ buf = buf[:n]
+ got, n, ok := unpackDomainName(buf, 0)
+ if !ok {
+ t.Errorf("unpacking for %s failed", test.in)
+ continue
+ }
+ if n != len(buf) {
+			t.Errorf(
+ "unpacked different amount than packed for %s: got n = %d, want = %d",
+ test.in,
+ n,
+ len(buf),
+ )
+ }
+ if got != test.want {
+ t.Errorf("unpacking packing of %s: got = %s, want = %s", test.in, got, test.want)
+ }
+ }
+}
+
+func TestDNSPackUnpack(t *testing.T) {
+ want := dnsMsg{
+ question: []dnsQuestion{{
+ Name: ".",
+ Qtype: dnsTypeAAAA,
+ Qclass: dnsClassINET,
+ }},
+ answer: []dnsRR{},
+ ns: []dnsRR{},
+ extra: []dnsRR{},
+ }
+ b, ok := want.Pack()
+ if !ok {
+ t.Fatal("packing failed")
+ }
+ var got dnsMsg
+ ok = got.Unpack(b)
+ if !ok {
+ t.Fatal("unpacking failed")
+ }
+ if !reflect.DeepEqual(got, want) {
+ t.Errorf("got = %+v, want = %+v", got, want)
+ }
+}
+
func TestDNSParseSRVReply(t *testing.T) {
data, err := hex.DecodeString(dnsSRVReply)
if err != nil {
var (
errTimedout = syscall.ETIMEDOUT
errOpNotSupported = syscall.EPLAN9
+
+ abortedConnRequestErrors []error
)
func isPlatformError(err error) bool {
"testing"
)
-var (
- errTimedout = syscall.ETIMEDOUT
- errOpNotSupported = syscall.EOPNOTSUPP
-)
-
-func isPlatformError(err error) bool {
- _, ok := err.(syscall.Errno)
- return ok
-}
-
func TestSpuriousENOTAVAIL(t *testing.T) {
for _, tt := range []struct {
error
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build !plan9,!windows
+
+package net
+
+import "syscall"
+
+var (
+ errTimedout = syscall.ETIMEDOUT
+ errOpNotSupported = syscall.EOPNOTSUPP
+
+ abortedConnRequestErrors = []error{syscall.ECONNABORTED} // see accept in fd_unix.go
+)
+
+func isPlatformError(err error) bool {
+ _, ok := err.(syscall.Errno)
+ return ok
+}
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package net
+
+import "syscall"
+
+var (
+ errTimedout = syscall.ETIMEDOUT
+ errOpNotSupported = syscall.EOPNOTSUPP
+
+ abortedConnRequestErrors = []error{syscall.ERROR_NETNAME_DELETED, syscall.WSAECONNRESET} // see accept in fd_windows.go
+)
+
+func isPlatformError(err error) bool {
+ _, ok := err.(syscall.Errno)
+ return ok
+}
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
func setDeadlineImpl(fd *netFD, t time.Time, mode int) error {
- d := runtimeNano() + int64(t.Sub(time.Now()))
+ diff := int64(t.Sub(time.Now()))
+ d := runtimeNano() + diff
+ if d <= 0 && diff > 0 {
+ // If the user has a deadline in the future, but the delay calculation
+ // overflows, then set the deadline to the maximum possible value.
+ d = 1<<63 - 1
+ }
if t.IsZero() {
d = 0
}
}
// Unblock any I/O. Once it all unblocks and returns,
// so that it cannot be referring to fd.sysfd anymore,
- // the final decref will close fd.sysfd. This should happen
+ // the final decref will close fd.sysfd. This should happen
// fairly quickly, since all the I/O is non-blocking, and any
// attempts to block in the pollDesc will return errClosing.
fd.pd.Evict()
// and fcntl there falls back (undocumented)
// to doing an ioctl instead, returning EBADF
// in this case because fd is not of the
- // expected device fd type. Treat it as
+ // expected device fd type. Treat it as
// EINVAL instead, so we fall back to the
// normal dup path.
// TODO: only do this on 10.6 if we can detect 10.6
o.handle = s
o.rsan = int32(unsafe.Sizeof(rawsa[0]))
_, err = rsrv.ExecIO(o, "AcceptEx", func(o *operation) error {
- return syscall.AcceptEx(o.fd.sysfd, o.handle, (*byte)(unsafe.Pointer(&rawsa[0])), 0, uint32(o.rsan), uint32(o.rsan), &o.qty, &o.o)
+ return acceptFunc(o.fd.sysfd, o.handle, (*byte)(unsafe.Pointer(&rawsa[0])), 0, uint32(o.rsan), uint32(o.rsan), &o.qty, &o.o)
})
if err != nil {
netfd.Close()
testHookDialChannel = func() { time.Sleep(time.Millisecond) } // see golang.org/issue/5349
// Placeholders for socket system calls.
- socketFunc func(int, int, int) (syscall.Handle, error) = syscall.Socket
- closeFunc func(syscall.Handle) error = syscall.Closesocket
- connectFunc func(syscall.Handle, syscall.Sockaddr) error = syscall.Connect
- connectExFunc func(syscall.Handle, syscall.Sockaddr, *byte, uint32, *uint32, *syscall.Overlapped) error = syscall.ConnectEx
- listenFunc func(syscall.Handle, int) error = syscall.Listen
+ socketFunc func(int, int, int) (syscall.Handle, error) = syscall.Socket
+ closeFunc func(syscall.Handle) error = syscall.Closesocket
+ connectFunc func(syscall.Handle, syscall.Sockaddr) error = syscall.Connect
+ connectExFunc func(syscall.Handle, syscall.Sockaddr, *byte, uint32, *uint32, *syscall.Overlapped) error = syscall.ConnectEx
+ listenFunc func(syscall.Handle, int) error = syscall.Listen
+ acceptFunc func(syscall.Handle, syscall.Handle, *byte, uint32, uint32, uint32, *uint32, *syscall.Overlapped) error = syscall.AcceptEx
)
lowerHost := []byte(host)
lowerASCIIBytes(lowerHost)
if ips, ok := hosts.byName[absDomainName(lowerHost)]; ok {
- return ips
+ ipsCp := make([]string, len(ips))
+ copy(ipsCp, ips)
+ return ipsCp
}
}
return nil
}
if len(hosts.byAddr) != 0 {
if hosts, ok := hosts.byAddr[addr]; ok {
- return hosts
+ hostsCp := make([]string, len(hosts))
+ copy(hostsCp, hosts)
+ return hostsCp
}
}
return nil
for _, tt := range lookupStaticHostTests {
testHookHostsPath = tt.name
for _, ent := range tt.ents {
- ins := []string{ent.in, absDomainName([]byte(ent.in)), strings.ToLower(ent.in), strings.ToUpper(ent.in)}
- for _, in := range ins {
- addrs := lookupStaticHost(in)
- if !reflect.DeepEqual(addrs, ent.out) {
- t.Errorf("%s, lookupStaticHost(%s) = %v; want %v", tt.name, in, addrs, ent.out)
- }
- }
+ testStaticHost(t, tt.name, ent)
+ }
+ }
+}
+
+func testStaticHost(t *testing.T, hostsPath string, ent staticHostEntry) {
+ ins := []string{ent.in, absDomainName([]byte(ent.in)), strings.ToLower(ent.in), strings.ToUpper(ent.in)}
+ for _, in := range ins {
+ addrs := lookupStaticHost(in)
+ if !reflect.DeepEqual(addrs, ent.out) {
+ t.Errorf("%s, lookupStaticHost(%s) = %v; want %v", hostsPath, in, addrs, ent.out)
}
}
}
for _, tt := range lookupStaticAddrTests {
testHookHostsPath = tt.name
for _, ent := range tt.ents {
- hosts := lookupStaticAddr(ent.in)
- for i := range ent.out {
- ent.out[i] = absDomainName([]byte(ent.out[i]))
- }
- if !reflect.DeepEqual(hosts, ent.out) {
- t.Errorf("%s, lookupStaticAddr(%s) = %v; want %v", tt.name, ent.in, hosts, ent.out)
- }
+ testStaticAddr(t, tt.name, ent)
}
}
}
+
+func testStaticAddr(t *testing.T, hostsPath string, ent staticHostEntry) {
+ hosts := lookupStaticAddr(ent.in)
+ for i := range ent.out {
+ ent.out[i] = absDomainName([]byte(ent.out[i]))
+ }
+ if !reflect.DeepEqual(hosts, ent.out) {
+ t.Errorf("%s, lookupStaticAddr(%s) = %v; want %v", hostsPath, ent.in, hosts, ent.out)
+ }
+}
+
+func TestHostCacheModification(t *testing.T) {
+ // Ensure that programs can't modify the internals of the host cache.
+ // See https://github.com/golang/go/issues/14212.
+ defer func(orig string) { testHookHostsPath = orig }(testHookHostsPath)
+
+ testHookHostsPath = "testdata/ipv4-hosts"
+ ent := staticHostEntry{"localhost", []string{"127.0.0.1", "127.0.0.2", "127.0.0.3"}}
+ testStaticHost(t, testHookHostsPath, ent)
+	// Modify the addresses returned by lookupStaticHost.
+ addrs := lookupStaticHost(ent.in)
+ for i := range addrs {
+ addrs[i] += "junk"
+ }
+ testStaticHost(t, testHookHostsPath, ent)
+
+ testHookHostsPath = "testdata/ipv6-hosts"
+ ent = staticHostEntry{"::1", []string{"localhost"}}
+ testStaticAddr(t, testHookHostsPath, ent)
+	// Modify the hosts returned by lookupStaticAddr.
+ hosts := lookupStaticAddr(ent.in)
+ for i := range hosts {
+ hosts[i] += "junk"
+ }
+ testStaticAddr(t, testHookHostsPath, ent)
+}
//
// Note that using CGI means starting a new process to handle each
// request, which is typically less efficient than using a
-// long-running server. This package is intended primarily for
+// long-running server. This package is intended primarily for
// compatibility with existing systems.
package cgi
}
// Most the callers of send (Get, Post, et al) don't need
- // Headers, leaving it uninitialized. We guarantee to the
+ // Headers, leaving it uninitialized. We guarantee to the
// Transport that this has been initialized, though.
if req.Header == nil {
forkReq()
t.Fatal("didn't see redirect")
}
if lastReq.Cancel != cancel {
- t.Errorf("expected lastReq to have the cancel channel set on the inital req")
+ t.Errorf("expected lastReq to have the cancel channel set on the initial req")
}
checkErr = errors.New("no redirects allowed")
}
// Tests that the HTTP/1 and HTTP/2 servers prevent handlers from
-// writing more than they declared. This test does not test whether
+// writing more than they declared. This test does not test whether
// the transport deals with too much data, though, since the server
// doesn't make it possible to send bogus data. For those tests, see
// transport_test.go (for HTTP/1) or x/net/http2/transport_test.go
return cookies
}
-// validCookieDomain returns wheter v is a valid cookie domain-value.
+// validCookieDomain returns whether v is a valid cookie domain-value.
func validCookieDomain(v string) bool {
if isCookieDomainName(v) {
return true
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func (t fileTransport) RoundTrip(req *Request) (resp *Response, err error) {
// We start ServeHTTP in a goroutine, which may take a long
- // time if the file is large. The newPopulateResponseWriter
+ // time if the file is large. The newPopulateResponseWriter
// call returns a channel which either ServeHTTP or finish()
// sends our *Response on, once the *Response itself has been
// populated (even if the body itself is still being
type Dir string
func (d Dir) Open(name string) (File, error) {
- if filepath.Separator != '/' && strings.IndexRune(name, filepath.Separator) >= 0 ||
+ if filepath.Separator != '/' && strings.ContainsRune(name, filepath.Separator) ||
strings.Contains(name, "\x00") {
return nil, errors.New("http: invalid character in file path")
}
}
// ServeContent replies to the request using the content in the
-// provided ReadSeeker. The main benefit of ServeContent over io.Copy
+// provided ReadSeeker. The main benefit of ServeContent over io.Copy
// is that it handles Range requests properly, sets the MIME type, and
// handles If-Modified-Since requests.
//
// never sent in the response.
//
// If modtime is not the zero time or Unix epoch, ServeContent
-// includes it in a Last-Modified header in the response. If the
+// includes it in a Last-Modified header in the response. If the
// request includes an If-Modified-Since header, ServeContent uses
// modtime to decide whether the content needs to be sent at all.
//
// The total number of bytes in all the ranges
// is larger than the size of the file by
// itself, so this is probably an attack, or a
- // dumb client. Ignore the range request.
+ // dumb client. Ignore the range request.
ranges = nil
}
switch {
// checkETag implements If-None-Match and If-Range checks.
//
// The ETag or modtime must have been previously set in the
-// ResponseWriter's headers. The modtime is only compared at second
+// ResponseWriter's headers. The modtime is only compared at second
// granularity and may be the zero value to mean unknown.
//
// The return value is the effective request "Range" header to use and
}
// TODO(bradfitz): deal with comma-separated or multiple-valued
- // list of If-None-match values. For now just handle the common
+ // list of If-None-match values. For now just handle the common
// case of a single item.
if inm == etag || inm == "*" {
h := w.Header()
// ServeFile replies to the request with the contents of the named
// file or directory.
//
-// If the provided file or direcory name is a relative path, it is
+// If the provided file or directory name is a relative path, it is
// interpreted relative to the current directory and may ascend to parent
// directories. If the provided name is constructed from user input, it
// should be sanitized before calling ServeFile. As a precaution, ServeFile
-// Code generated by golang.org/x/tools/cmd/bundle command:
-// $ bundle golang.org/x/net/http2 net/http http2
+// Code generated by golang.org/x/tools/cmd/bundle.
+//go:generate bundle -o h2_bundle.go -prefix http2 -import golang.org/x/net/http2/hpack=internal/golang.org/x/net/http2/hpack golang.org/x/net/http2
// Package http2 implements the HTTP/2 protocol.
//
}
// registerHTTPSProtocol calls Transport.RegisterProtocol but
-// convering panics into errors.
+// converting panics into errors.
func http2registerHTTPSProtocol(t *Transport, rt RoundTripper) (err error) {
defer func() {
if e := recover(); e != nil {
}
// noDialClientConnPool is an implementation of http2.ClientConnPool
-// which never dials. We let the HTTP/1.1 client dial and use its TLS
+// which never dials. We let the HTTP/1.1 client dial and use its TLS
// connection instead.
type http2noDialClientConnPool struct{ *http2clientConnPool }
http2PriorityParam
}
-// PriorityParam are the stream prioritzation parameters.
+// PriorityParam are the stream prioritization parameters.
type http2PriorityParam struct {
// StreamDep is a 31-bit stream identifier for the
// stream that this stream depends on. Zero means no
Exclusive bool
// Weight is the stream's zero-indexed weight. It should be
- // set together with StreamDep, or neither should be set. Per
+ // set together with StreamDep, or neither should be set. Per
// the spec, "Add one to the value to obtain a weight between
// 1 and 256."
Weight uint8
'~': true,
}
-// pipe is a goroutine-safe io.Reader/io.Writer pair. It's like
+type http2connectionStater interface {
+ ConnectionState() tls.ConnectionState
+}
+
+// pipe is a goroutine-safe io.Reader/io.Writer pair. It's like
// io.Pipe except there are no PipeReader/PipeWriter halves, and the
// underlying buffer is an interface. (io.Pipe is always unbuffered)
type http2pipe struct {
if http2testHookOnConn != nil {
http2testHookOnConn()
}
- conf.handleConn(hs, c, h)
+ conf.ServeConn(c, &http2ServeConnOpts{
+ Handler: h,
+ BaseConfig: hs,
+ })
}
s.TLSNextProto[http2NextProtoTLS] = protoHandler
s.TLSNextProto["h2-14"] = protoHandler
return nil
}
-func (srv *http2Server) handleConn(hs *Server, c net.Conn, h Handler) {
+// ServeConnOpts are options for the Server.ServeConn method.
+type http2ServeConnOpts struct {
+ // BaseConfig optionally sets the base configuration
+ // for values. If nil, defaults are used.
+ BaseConfig *Server
+
+ // Handler specifies which handler to use for processing
+ // requests. If nil, BaseConfig.Handler is used. If BaseConfig
+ // or BaseConfig.Handler is nil, http.DefaultServeMux is used.
+ Handler Handler
+}
+
+func (o *http2ServeConnOpts) baseConfig() *Server {
+ if o != nil && o.BaseConfig != nil {
+ return o.BaseConfig
+ }
+ return new(Server)
+}
+
+func (o *http2ServeConnOpts) handler() Handler {
+ if o != nil {
+ if o.Handler != nil {
+ return o.Handler
+ }
+ if o.BaseConfig != nil && o.BaseConfig.Handler != nil {
+ return o.BaseConfig.Handler
+ }
+ }
+ return DefaultServeMux
+}
+
+// ServeConn serves HTTP/2 requests on the provided connection and
+// blocks until the connection is no longer readable.
+//
+// ServeConn starts speaking HTTP/2 assuming that c has not had any
+// reads or writes. It writes its initial settings frame and expects
+// to be able to read the preface and settings frame from the
+// client. If c has a ConnectionState method like a *tls.Conn, the
+// ConnectionState is used to verify the TLS ciphersuite and to set
+// the Request.TLS field in Handlers.
+//
+// ServeConn does not support h2c by itself. Any h2c support must be
+// implemented in terms of providing a suitably-behaving net.Conn.
+//
+// The opts parameter is optional. If nil, default values are used.
+func (s *http2Server) ServeConn(c net.Conn, opts *http2ServeConnOpts) {
sc := &http2serverConn{
- srv: srv,
- hs: hs,
+ srv: s,
+ hs: opts.baseConfig(),
conn: c,
remoteAddrStr: c.RemoteAddr().String(),
bw: http2newBufferedWriter(c),
- handler: h,
+ handler: opts.handler(),
streams: make(map[uint32]*http2stream),
readFrameCh: make(chan http2readFrameResult),
wantWriteFrameCh: make(chan http2frameWriteMsg, 8),
wroteFrameCh: make(chan http2frameWriteResult, 1),
bodyReadCh: make(chan http2bodyReadMsg),
doneServing: make(chan struct{}),
- advMaxStreams: srv.maxConcurrentStreams(),
+ advMaxStreams: s.maxConcurrentStreams(),
writeSched: http2writeScheduler{
maxFrameSize: http2initialMaxFrameSize,
},
sc.hpackDecoder.SetMaxStringLength(sc.maxHeaderStringLen())
fr := http2NewFramer(sc.bw, c)
- fr.SetMaxReadFrameSize(srv.maxReadFrameSize())
+ fr.SetMaxReadFrameSize(s.maxReadFrameSize())
sc.framer = fr
- if tc, ok := c.(*tls.Conn); ok {
+ if tc, ok := c.(http2connectionStater); ok {
sc.tlsState = new(tls.ConnectionState)
*sc.tlsState = tc.ConnectionState()
}
- if !srv.PermitProhibitedCipherSuites && http2isBadCipher(sc.tlsState.CipherSuite) {
+ if !s.PermitProhibitedCipherSuites && http2isBadCipher(sc.tlsState.CipherSuite) {
sc.rejectConn(http2ErrCodeInadequateSecurity, fmt.Sprintf("Prohibited TLS 1.2 Cipher Suite: %x", sc.tlsState.CipherSuite))
return
weight uint8
state http2streamState
sentReset bool // only true once detached from streams map
- gotReset bool // only true once detacted from streams map
+ gotReset bool // only true once detached from streams map
gotTrailerHeader bool // HEADER frame for trailers was seen
trailer Header // accumulated trailers
return
}
-// responseWriter is the http.ResponseWriter implementation. It's
-// intentionally small (1 pointer wide) to minimize garbage. The
+// responseWriter is the http.ResponseWriter implementation. It's
+// intentionally small (1 pointer wide) to minimize garbage. The
// responseWriterState pointer inside is zeroed at the end of a
// request (in handlerDone) and calls on the responseWriter thereafter
// simply crash (caller's mistake), but the much larger responseWriterState
// says you SHOULD (but not must) predeclare any trailers in the
// header, the official ResponseWriter rules said trailers in Go must
// be predeclared, and then we reuse the same ResponseWriter.Header()
-// map to mean both Headers and Trailers. When it's time to write the
+// map to mean both Headers and Trailers. When it's time to write the
// Trailers, we pick out the fields of Headers that were declared as
// trailers. That worked for a while, until we found the first major
// user of Trailers in the wild: gRPC (using them only over http2),
// and gRPC libraries permit setting trailers mid-stream without
-// predeclarnig them. So: change of plans. We still permit the old
+// predeclaring them. So: change of plans. We still permit the old
// way, but we also permit this hack: if a Header() key begins with
// "Trailer:", the suffix of that key is a Trailer. Because ':' is an
// invalid token byte anyway, there is no ambiguity. (And it's already
// send in the initial settings frame. It is how many bytes
// of response headers are allow. Unlike the http2 spec, zero here
// means to use a default limit (currently 10MB). If you actually
- // want to advertise an ulimited value to the peer, Transport
+ // want to advertise an unlimited value to the peer, Transport
// interprets the highest possible value here (0xffffffff or 1<<32-1)
// to mean no limit.
MaxHeaderListSize uint32
cc.henc = hpack.NewEncoder(&cc.hbuf)
- type connectionStater interface {
- ConnectionState() tls.ConnectionState
- }
- if cs, ok := c.(connectionStater); ok {
+ if cs, ok := c.(http2connectionStater); ok {
state := cs.ConnectionState()
cc.tlsState = &state
}
// frameBuffer returns a scratch buffer suitable for writing DATA frames.
// They're capped at the min of the peer's max frame size or 512KB
// (kinda arbitrarily), but definitely capped so we don't allocate 4GB
-// bufers.
+// buffers.
func (cc *http2ClientConn) frameScratchBuffer() []byte {
cc.mu.Lock()
size := cc.maxFrameSize
return 0
}
+// checkConnHeaders checks whether req has any invalid connection-level headers,
+// per RFC 7540 Section 8.1.2.2 (Connection-Specific Header Fields).
+// Certain headers are special-cased as okay but not transmitted later.
+func http2checkConnHeaders(req *Request) error {
+ if v := req.Header.Get("Upgrade"); v != "" {
+ return errors.New("http2: invalid Upgrade request header")
+ }
+ if v := req.Header.Get("Transfer-Encoding"); (v != "" && v != "chunked") || len(req.Header["Transfer-Encoding"]) > 1 {
+ return errors.New("http2: invalid Transfer-Encoding request header")
+ }
+ if v := req.Header.Get("Connection"); (v != "" && v != "close" && v != "keep-alive") || len(req.Header["Connection"]) > 1 {
+ return errors.New("http2: invalid Connection request header")
+ }
+ return nil
+}
+
func (cc *http2ClientConn) RoundTrip(req *Request) (*Response, error) {
+ if err := http2checkConnHeaders(req); err != nil {
+ return nil, err
+ }
+
trailers, err := http2commaSeparatedTrailers(req)
if err != nil {
return nil, err
var didUA bool
for k, vv := range req.Header {
lowKey := strings.ToLower(k)
- if lowKey == "host" || lowKey == "content-length" {
+ switch lowKey {
+ case "host", "content-length":
+
continue
- }
- if lowKey == "user-agent" {
+ case "connection", "proxy-connection", "transfer-encoding", "upgrade":
+
+ continue
+ case "user-agent":
didUA = true
if len(vv) < 1 {
// clientConnReadLoop is the state owned by the clientConn's frame-reading readLoop.
type http2clientConnReadLoop struct {
- cc *http2ClientConn
- activeRes map[uint32]*http2clientStream // keyed by streamID
+ cc *http2ClientConn
+ activeRes map[uint32]*http2clientStream // keyed by streamID
+ closeWhenIdle bool
hdec *hpack.Decoder
func (rl *http2clientConnReadLoop) run() error {
cc := rl.cc
- closeWhenIdle := cc.t.disableKeepAlives()
+ rl.closeWhenIdle = cc.t.disableKeepAlives()
gotReply := false
for {
f, err := cc.fr.ReadFrame()
if err != nil {
return err
}
- if closeWhenIdle && gotReply && maybeIdle && len(rl.activeRes) == 0 {
+ if rl.closeWhenIdle && gotReply && maybeIdle && len(rl.activeRes) == 0 {
cc.closeIfIdle()
}
}
}
cs.bufPipe.closeWithErrorAndCode(err, code)
delete(rl.activeRes, cs.ID)
+ if cs.req.Close || cs.req.Header.Get("Connection") == "close" {
+ rl.closeWhenIdle = true
+ }
}
func (cs *http2clientStream) copyTrailers() {
// call gzip.NewReader on the first call to Read
type http2gzipReader struct {
body io.ReadCloser // underlying Response.Body
- zr io.Reader // lazily-initialized gzip reader
+ zr *gzip.Reader // lazily-initialized gzip reader
+ zerr error // sticky error
}
func (gz *http2gzipReader) Read(p []byte) (n int, err error) {
+ if gz.zerr != nil {
+ return 0, gz.zerr
+ }
if gz.zr == nil {
gz.zr, err = gzip.NewReader(gz.body)
if err != nil {
+ gz.zerr = err
return 0, err
}
}
return ws.takeFrom(q.streamID(), q)
}
-// zeroCanSend is defered from take.
+// zeroCanSend is deferred from take.
func (ws *http2writeScheduler) zeroCanSend() {
for i := range ws.canSend {
ws.canSend[i] = nil
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// Set sets the header entries associated with key to
-// the single element value. It replaces any existing
+// the single element value. It replaces any existing
// values associated with key.
func (h Header) Set(key, value string) {
textproto.MIMEHeader(h).Set(key, value)
}
// CanonicalHeaderKey returns the canonical format of the
-// header key s. The canonicalization converts the first
+// header key s. The canonicalization converts the first
// letter and any letter following a hyphen to upper case;
-// the rest are converted to lowercase. For example, the
+// the rest are converted to lowercase. For example, the
// canonical key for "accept-encoding" is "Accept-Encoding".
// If s contains a space or invalid header field bytes, it is
// returned without modifications.
for sp := 0; sp <= len(v)-len(token); sp++ {
// Check that first character is good.
// The token is ASCII, so checking only a single byte
- // is sufficient. We skip this potential starting
+ // is sufficient. We skip this potential starting
// position if both the first byte and its potential
// ASCII uppercase equivalent (b|0x20) don't match.
// False positives ('^' => '~') are caught by EqualFold.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// previously-flaky tests) in the case of
// socket-late-binding races from the http Client
// dialing this server and then getting an idle
- // connection before the dial completed. There is thus
+ // connection before the dial completed. There is thus
// a connected connection in StateNew with no
// associated Request. We only close StateIdle and
// StateNew because they're not doing anything. It's
// few milliseconds wasn't liked (early versions of
// https://golang.org/cl/15151) so now we just
// forcefully close StateNew. The docs for Server.Close say
- // we wait for "oustanding requests", so we don't close things
+ // we wait for "outstanding requests", so we don't close things
// in StateActive.
if st == http.StateIdle || st == http.StateNew {
s.closeConn(c)
// CloseClientConnections closes any open HTTP connections to the test Server.
func (s *Server) CloseClientConnections() {
+ var conns int
+ ch := make(chan bool)
+
s.mu.Lock()
- defer s.mu.Unlock()
for c := range s.conns {
- s.closeConn(c)
+ conns++
+ s.closeConnChan(c, ch)
+ }
+ s.mu.Unlock()
+
+ // Wait for outstanding closes to finish.
+ //
+ // Out of paranoia for making a late change in Go 1.6, we
+ // bound how long this can wait, since golang.org/issue/14291
+ // isn't fully understood yet. At least this should only be used
+ // in tests.
+ timer := time.NewTimer(5 * time.Second)
+ defer timer.Stop()
+ for i := 0; i < conns; i++ {
+ select {
+ case <-ch:
+ case <-timer.C:
+ // Too slow. Give up.
+ return
+ }
}
}
}
}
-// closeConn closes c. Except on plan9, which is special. See comment below.
+// closeConn closes c.
// s.mu must be held.
-func (s *Server) closeConn(c net.Conn) {
+func (s *Server) closeConn(c net.Conn) { s.closeConnChan(c, nil) }
+
+// closeConnChan is like closeConn, but takes an optional channel to receive a value
+// when the goroutine closing c is done.
+func (s *Server) closeConnChan(c net.Conn, done chan<- bool) {
if runtime.GOOS == "plan9" {
// Go's Plan 9 net package isn't great at unblocking reads when
- // their underlying TCP connections are closed. Don't trust
+ // their underlying TCP connections are closed. Don't trust
		// that the ConnState state machine will get to
// StateClosed. Instead, just go there directly. Plan 9 may leak
// resources if the syscall doesn't end up returning. Oh well.
s.forgetConn(c)
}
- go c.Close()
+
+ // Somewhere in the chaos of https://golang.org/cl/15151 we found that
+ // some types of conns were blocking in Close too long (or deadlocking?)
+ // and we had to call Close in a goroutine. I (bradfitz) forget what
+ // that was at this point, but I suspect it was *tls.Conns, which
+ // were later fixed in https://golang.org/cl/18572, so this goroutine
+	// is _probably_ unnecessary now. But it's too late in Go 1.6 to remove
+ // it with confidence.
+ // TODO(bradfitz): try to remove it for Go 1.7. (golang.org/issue/14291)
+ go func() {
+ c.Close()
+ if done != nil {
+ done <- true
+ }
+ }()
}
// forgetConn removes c from the set of tracked conns and decrements it from the
res, err = http.Get(ts.URL)
if err == nil {
body, _ := ioutil.ReadAll(res.Body)
- t.Fatalf("Unexected response after close: %v, %v, %s", res.Status, res.Header, body)
+ t.Fatalf("Unexpected response after close: %v, %v, %s", res.Status, res.Header, body)
}
}
ts.Close() // test we don't hang here forever.
}
+
+// Issue 14290
+func TestServerCloseClientConnections(t *testing.T) {
+ var s *Server
+ s = NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ s.CloseClientConnections()
+ }))
+ defer s.Close()
+ res, err := http.Get(s.URL)
+ if err == nil {
+ res.Body.Close()
+		t.Fatalf("Unexpected response: %#v", res)
+ }
+}
// If we used a dummy body above, remove it now.
// TODO: if the req.ContentLength is large, we allocate memory
- // unnecessarily just to slice it off here. But this is just
+ // unnecessarily just to slice it off here. But this is just
// a debug function, so this is acceptable for now. We could
// discard the body earlier if this matters.
if dummyBody {
return
}
-// errNoBody is a sentinel error value used by failureToReadBody so we can detect
-// that the lack of body was intentional.
+// errNoBody is a sentinel error value used by failureToReadBody so we
+// can detect that the lack of body was intentional.
var errNoBody = errors.New("sentinel error value")
// failureToReadBody is an io.ReadCloser that just returns errNoBody on
-// Read. It's swapped in when we don't actually want to consume the
-// body, but need a non-nil one, and want to distinguish the error
-// from reading the dummy body.
+// Read. It's swapped in when we don't actually want to consume
+// the body, but need a non-nil one, and want to distinguish the
+// error from reading the dummy body.
type failureToReadBody struct{}
func (failureToReadBody) Read([]byte) (int, error) { return 0, errNoBody }
func (failureToReadBody) Close() error { return nil }
+// emptyBody is an instance of an empty reader.
var emptyBody = ioutil.NopCloser(strings.NewReader(""))
// DumpResponse is like DumpRequest but dumps a response.
savecl := resp.ContentLength
if !body {
- resp.Body = failureToReadBody{}
+		// For content length of zero, make sure the body is an empty
+		// reader instead of returning an error through failureToReadBody{}.
+ if resp.ContentLength == 0 {
+ resp.Body = emptyBody
+ } else {
+ resp.Body = failureToReadBody{}
+ }
} else if resp.Body == nil {
resp.Body = emptyBody
} else {
foo
0`,
},
+ {
+ res: &http.Response{
+ Status: "200 OK",
+ StatusCode: 200,
+ Proto: "HTTP/1.1",
+ ProtoMajor: 1,
+ ProtoMinor: 1,
+ ContentLength: 0,
+ Header: http.Header{
+				// To verify that headers are not filtered out.
+ "Foo1": []string{"Bar1"},
+ "Foo2": []string{"Bar2"},
+ },
+ Body: nil,
+ },
+ body: false, // to verify we see 0, not empty.
+ want: `HTTP/1.1 200 OK
+Foo1: Bar1
+Foo2: Bar2
+Content-Length: 0`,
+ },
}
func TestDumpResponse(t *testing.T) {
writeReq func(*http.Request, io.Writer) error
}
-// NewClientConn returns a new ClientConn reading and writing c. If r is not
+// NewClientConn returns a new ClientConn reading and writing c. If r is not
// nil, it is the buffer to use when reading c.
//
// ClientConn is low-level and old. Applications should use Client or
outreq.ProtoMinor = 1
outreq.Close = false
- // Remove hop-by-hop headers to the backend. Especially
+ // Remove hop-by-hop headers to the backend. Especially
// important is "Connection" because we want a persistent
- // connection, regardless of what the client sent to us. This
+ // connection, regardless of what the client sent to us. This
// is modifying the same underlying map from req (shallow
// copied above) so we only copy it if necessary.
copiedHeaders := false
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// import _ "net/http/pprof"
//
// If your application is not already running an http server, you
-// need to start one. Add "net/http" and "log" to your imports and
+// need to start one. Add "net/http" and "log" to your imports and
// the following code to your main function:
//
// go func() {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
// We have to read the whole POST body before
- // writing any output. Buffer the output here.
+ // writing any output. Buffer the output here.
var buf bytes.Buffer
// We don't know how many symbols we have, but we
- // do have symbol information. Pprof only cares whether
+ // do have symbol information. Pprof only cares whether
// this number is 0 (no symbols available) or > 0.
fmt.Fprintf(&buf, "num_symbols: 1\n")
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
ProtoMajor int // 1
ProtoMinor int // 0
- // A header maps request lines to their values.
- // If the header says
+ // Header contains the request header fields either received
+ // by the server or to be sent by the client.
//
+ // If a server received a request with header lines,
+ //
+ // Host: example.com
// accept-encoding: gzip, deflate
// Accept-Language: en-us
- // Connection: keep-alive
+ // fOO: Bar
+ // foo: two
//
// then
//
// Header = map[string][]string{
// "Accept-Encoding": {"gzip, deflate"},
// "Accept-Language": {"en-us"},
- // "Connection": {"keep-alive"},
+ // "Foo": {"Bar", "two"},
// }
//
- // HTTP defines that header names are case-insensitive.
- // The request parser implements this by canonicalizing the
- // name, making the first character and any characters
- // following a hyphen uppercase and the rest lowercase.
+ // For incoming requests, the Host header is promoted to the
+ // Request.Host field and removed from the Header map.
//
- // For client requests certain headers are automatically
- // added and may override values in Header.
+ // HTTP defines that header names are case-insensitive. The
+ // request parser implements this by using CanonicalHeaderKey,
+ // making the first character and any characters following a
+ // hyphen uppercase and the rest lowercase.
//
- // See the documentation for the Request.Write method.
+ // For client requests, certain headers such as Content-Length
+ // and Connection are automatically written when needed and
+ // values in Header may be ignored. See the documentation
+ // for the Request.Write method.
Header Header
// Body is the request's body.
TransferEncoding []string
// Close indicates whether to close the connection after
- // replying to this request (for servers) or after sending
- // the request (for clients).
+ // replying to this request (for servers) or after sending this
+ // request and reading its response (for clients).
+ //
+ // For server requests, the HTTP server handles this automatically
+ // and this field is not needed by Handlers.
+ //
+ // For client requests, setting this field prevents re-use of
+ // TCP connections between requests to the same hosts, as if
+ // Transport.DisableKeepAlives were set.
Close bool
// For server requests Host specifies the host on which the
return nil, ErrNoCookie
}
-// AddCookie adds a cookie to the request. Per RFC 6265 section 5.4,
-// AddCookie does not attach more than one Cookie header field. That
+// AddCookie adds a cookie to the request. Per RFC 6265 section 5.4,
+// AddCookie does not attach more than one Cookie header field. That
// means all cookies, if any, are written into the same line,
// separated by semicolon.
func (r *Request) AddCookie(c *Cookie) {
}
// WriteProxy is like Write but writes the request in the form
-// expected by an HTTP proxy. In particular, WriteProxy writes the
+// expected by an HTTP proxy. In particular, WriteProxy writes the
// initial Request-URI line of the request with an absolute URI, per
// section 5.1.2 of RFC 2616, including the scheme and host.
// In either case, WriteProxy also writes a Host header, using
return in
}
-// removeZone removes IPv6 zone identifer from host.
+// removeZone removes IPv6 zone identifier from host.
// E.g., "[fe80::1%en0]:8080" to "[fe80::1]:8080"
func removeZone(host string) string {
if !strings.HasPrefix(host, "[") {
}
// ReadRequest reads and parses an incoming request from b.
-func ReadRequest(b *bufio.Reader) (req *Request, err error) { return readRequest(b, deleteHostHeader) }
+func ReadRequest(b *bufio.Reader) (*Request, error) {
+ return readRequest(b, deleteHostHeader)
+}
// Constants for readRequest's deleteHostHeader parameter.
const (
// and
// GET http://www.google.com/index.html HTTP/1.1
// Host: doesntmatter
- // the same. In the second case, any Host line is ignored.
+ // the same. In the second case, any Host line is ignored.
req.Host = req.URL.Host
if req.Host == "" {
req.Host = req.Header.get("Host")
var parseContentTypeTests = []parseContentTypeTest{
{false, stringMap{"Content-Type": {"text/plain"}}},
- // Empty content type is legal - shoult be treated as
+ // Empty content type is legal - should be treated as
// application/octet-stream (RFC 2616, section 7.2.1)
{false, stringMap{}},
{true, stringMap{"Content-Type": {"text/plain; boundary="}}},
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
"Transfer-Encoding: chunked\r\n\r\n" +
// TODO: currently we don't buffer before chunking, so we get a
// single "m" chunk before the other chunks, as this was the 1-byte
- // read from our MultiReader where we stiched the Body back together
+ // read from our MultiReader where we stitched the Body back together
// after sniffing whether the Body was 0 bytes or not.
chunk("m") +
chunk("y body") +
failAfter, writeCount := 0, 0
errFail := errors.New("fake write failure")
- // w is the buffered io.Writer to write the request to. It
+ // w is the buffered io.Writer to write the request to. It
// fails exactly once on its Nth Write call, as controlled by
// failAfter. It also tracks the number of calls in
// writeCount.
ProtoMajor int // e.g. 1
ProtoMinor int // e.g. 0
- // Header maps header keys to values. If the response had multiple
+ // Header maps header keys to values. If the response had multiple
// headers with the same key, they may be concatenated, with comma
// delimiters. (Section 4.2 of RFC 2616 requires that multiple headers
// be semantically equivalent to a comma-delimited sequence.) Values
// with a "chunked" Transfer-Encoding.
Body io.ReadCloser
- // ContentLength records the length of the associated content. The
- // value -1 indicates that the length is unknown. Unless Request.Method
+ // ContentLength records the length of the associated content. The
+ // value -1 indicates that the length is unknown. Unless Request.Method
// is "HEAD", values >= 0 indicate that the given number of bytes may
// be read from Body.
ContentLength int64
TransferEncoding []string
// Close records whether the header directed that the connection be
- // closed after reading Body. The value is advice for clients: neither
+ // closed after reading Body. The value is advice for clients: neither
// ReadResponse nor Response.Write ever closes a connection.
Close bool
var ErrNoLocation = errors.New("http: no Location header in response")
// Location returns the URL of the response's "Location" header,
-// if present. Relative redirects are resolved relative to
-// the Response's Request. ErrNoLocation is returned if no
+// if present. Relative redirects are resolved relative to
+// the Response's Request. ErrNoLocation is returned if no
// Location header is present.
func (r *Response) Location() (*url.URL, error) {
lv := r.Header.Get("Location")
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
defer ts.Close()
// Note: this relies on the assumption (which is true) that
- // Get sends HTTP/1.1 or greater requests. Otherwise the
+ // Get sends HTTP/1.1 or greater requests. Otherwise the
// server wouldn't have the choice to send back chunked
// responses.
for _, te := range []string{"", "identity"} {
defer ts.Close()
// Connect an idle TCP connection to this server before we run
- // our real tests. This idle connection used to block forever
+ // our real tests. This idle connection used to block forever
// in the TLS handshake, preventing future connections from
// being accepted. It may prevent future accidental blocking
// in newConn.
}
func TestAutomaticHTTP2_ListenAndServe(t *testing.T) {
- defer afterTest(t)
- defer SetTestHookServerServe(nil)
cert, err := tls.X509KeyPair(internal.LocalhostCert, internal.LocalhostKey)
if err != nil {
t.Fatal(err)
}
+ testAutomaticHTTP2_ListenAndServe(t, &tls.Config{
+ Certificates: []tls.Certificate{cert},
+ })
+}
+
+func TestAutomaticHTTP2_ListenAndServe_GetCertificate(t *testing.T) {
+ cert, err := tls.X509KeyPair(internal.LocalhostCert, internal.LocalhostKey)
+ if err != nil {
+ t.Fatal(err)
+ }
+ testAutomaticHTTP2_ListenAndServe(t, &tls.Config{
+ GetCertificate: func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
+ return &cert, nil
+ },
+ })
+}
+
+func testAutomaticHTTP2_ListenAndServe(t *testing.T, tlsConf *tls.Config) {
+ defer afterTest(t)
+ defer SetTestHookServerServe(nil)
var ok bool
var s *Server
const maxTries = 5
lnc <- ln
})
s = &Server{
- Addr: addr,
- TLSConfig: &tls.Config{
- Certificates: []tls.Certificate{cert},
- },
+ Addr: addr,
+ TLSConfig: tlsConf,
}
errc := make(chan error, 1)
go func() { errc <- s.ListenAndServeTLS("", "") }()
func testHandlerPanic(t *testing.T, withHijack, h2 bool, panicValue interface{}) {
defer afterTest(t)
// Unlike the other tests that set the log output to ioutil.Discard
- // to quiet the output, this test uses a pipe. The pipe serves three
+ // to quiet the output, this test uses a pipe. The pipe serves three
// purposes:
//
// 1) The log.Print from the http server (generated by the caught
defer cst.close()
// Do a blocking read on the log output pipe so its logging
- // doesn't bleed into the next test. But wait only 5 seconds
+ // doesn't bleed into the next test. But wait only 5 seconds
// for it.
done := make(chan bool, 1)
go func() {
nWritten := new(int64)
req, _ := NewRequest("POST", cst.ts.URL, io.LimitReader(countReader{neverEnding('a'), nWritten}, limit*200))
- // Send the POST, but don't care it succeeds or not. The
+ // Send the POST, but don't care it succeeds or not. The
// remote side is going to reply and then close the TCP
// connection, and HTTP doesn't really define if that's
- // allowed or not. Some HTTP clients will get the response
+ // allowed or not. Some HTTP clients will get the response
// and some (like ours, currently) will complain that the
// request write failed, without reading the response.
//
if err != nil {
t.Fatalf("error dialing: %v", err)
}
- diec := make(chan bool, 2)
+ diec := make(chan bool, 1)
go func() {
const req = "GET / HTTP/1.1\r\nConnection: keep-alive\r\nHost: foo\r\n\r\n"
_, err = io.WriteString(conn, req+req) // two requests
<-diec
conn.Close()
}()
+ reqs := 0
+ closes := 0
For:
for {
select {
case <-gotReq:
- diec <- true
+ reqs++
+ if reqs > 2 {
+ t.Fatal("too many requests")
+ } else if reqs > 1 {
+ diec <- true
+ }
case <-sawClose:
- break For
+ closes++
+ if closes > 1 {
+ break For
+ }
case <-time.After(5 * time.Second):
ts.CloseClientConnections()
t.Fatal("timeout")
}
// Tests regarding the ordering of Write, WriteHeader, Header, and
-// Flush calls. In Go 1.0, rw.WriteHeader immediately flushed the
+// Flush calls. In Go 1.0, rw.WriteHeader immediately flushed the
// (*response).header to the wire. In Go 1.1, the actual wire flush is
// delayed, so we could maybe tack on a Content-Length and better
// Content-Type after we see more (or all) of the output. To preserve
const bodySize = 1 << 20
- // errorf is like t.Errorf, but also writes to println. When
+ // errorf is like t.Errorf, but also writes to println. When
// this test fails, it hangs. This helps debugging and I've
// added this enough times "temporarily". It now gets added
// full time.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// HTTP server. See RFC 2616.
+// HTTP server. See RFC 2616.
package http
// Write writes the data to the connection as part of an HTTP reply.
// If WriteHeader has not yet been called, Write calls WriteHeader(http.StatusOK)
- // before writing the data. If the Header does not contain a
+ // before writing the data. If the Header does not contain a
// Content-Type line, Write adds a Content-Type set to the result of passing
// the initial 512 bytes of written data to DetectContentType.
Write([]byte) (int, error)
requestBodyLimitHit bool
// trailers are the headers to be sent after the handler
- // finishes writing the body. This field is initialized from
+ // finishes writing the body. This field is initialized from
// the Trailer response header when the response header is
// written.
trailers []string
// maxPostHandlerReadBytes is the max number of Request.Body bytes not
// consumed by a handler that the server will read from the client
-// in order to keep a connection alive. If there are more bytes than
+// in order to keep a connection alive. If there are more bytes than
// this, then the server, to be paranoid, instead sends a "Connection:
// close" response.
//
// to cw.res.conn.bufw.
//
// p is not written by writeHeader, but is the first chunk of the body
-// that will be written. It is sniffed for a Content-Type if none is
-// set explicitly. It's also used to set the Content-Length, if the
+// that will be written. It is sniffed for a Content-Type if none is
+// set explicitly. It's also used to set the Content-Length, if the
// total body size was small and the handler has already finished
// running.
func (cw *chunkWriter) writeHeader(p []byte) {
// Exceptions: 304/204/1xx responses never get Content-Length, and if
// it was a HEAD request, we don't know the difference between
// 0 actual bytes and 0 bytes because the handler noticed it
- // was a HEAD request and chose not to write anything. So for
+ // was a HEAD request and chose not to write anything. So for
// HEAD, the handler should either write the Content-Length or
- // write non-zero bytes. If it's actually 0 bytes and the
+ // write non-zero bytes. If it's actually 0 bytes and the
// handler never looked at the Request.Method, we just don't
// send a Content-Length header.
// Further, we don't send an automatic Content-Length if they
}
// Per RFC 2616, we should consume the request body before
- // replying, if the handler hasn't already done so. But we
+ // replying, if the handler hasn't already done so. But we
// don't want to do an unbounded amount of reading here for
// DoS reasons, so we only try up to a threshold.
if w.req.ContentLength != 0 && !w.closeAfterReply {
w.closeAfterReply = true
}
default:
- // Some other kind of error occured, like a read timeout, or
+ // Some other kind of error occurred, like a read timeout, or
// corrupt chunked encoding. In any case, whatever remains
// on the wire must not be parsed as another HTTP request.
w.closeAfterReply = true
// The Life Of A Write is like this:
//
// Handler starts. No header has been sent. The handler can either
-// write a header, or just start writing. Writing before sending a header
+// write a header, or just start writing. Writing before sending a header
// sends an implicitly empty 200 OK header.
//
// If the handler didn't declare a Content-Length up front, we either
// initial header contains both a Content-Type and Content-Length.
// Also short-circuit in (1) when the header's been sent and not in
// chunking mode, writing directly to (4) instead, if (2) has no
-// buffered data. More generally, we could short-circuit from (1) to
+// buffered data. More generally, we could short-circuit from (1) to
// (3) even in chunking mode if the write size from (1) is over some
// threshold and nothing is in (2). The answer might be mostly making
// bufferBeforeChunkingSize smaller and having bufio's fast-paths deal
var _ closeWriter = (*net.TCPConn)(nil)
// closeWrite flushes any outstanding data and sends a FIN packet (if
-// client is connected via TCP), signalling that we're done. We then
+// client is connected via TCP), signalling that we're done. We then
// pause for a bit, hoping the client processes it before any
// subsequent RST.
//
}
// validNPN reports whether the proto is not a blacklisted Next
-// Protocol Negotiation protocol. Empty and built-in protocol types
+// Protocol Negotiation protocol. Empty and built-in protocol types
// are blacklisted and can't be overridden with alternate
// implementations.
func validNPN(proto string) bool {
// badRequestError is a literal string (used in the server in HTML,
// unescaped) to tell the user why their request was bad. It should
-// be plain text without user info or other embeddded errors.
+// be plain text without user info or other embedded errors.
type badRequestError string
func (e badRequestError) Error() string { return "Bad Request: " + string(e) }
// able to read this if we're
// responding to them and hanging up
// while they're still writing their
- // request. Undefined behavior.
+ // request. Undefined behavior.
io.WriteString(c.rwc, "HTTP/1.1 431 Request Header Fields Too Large\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\n431 Request Header Fields Too Large")
c.closeWriteAndWait()
return
// HTTP cannot have multiple simultaneous active requests.[*]
// Until the server replies to this request, it can't read another,
// so we might as well run the handler in this goroutine.
- // [*] Not strictly true: HTTP pipelining. We could let them all process
+ // [*] Not strictly true: HTTP pipelining. We could let them all process
// in parallel even if their responses need to be serialized.
serverHandler{c.server}.ServeHTTP(w, w.req)
if c.hijacked() {
// TODO(bradfitz): let ServeHTTP handlers handle
// requests with non-standard expectation[s]? Seems
// theoretical at best, and doesn't fit into the
- // current ServeHTTP model anyway. We'd need to
+ // current ServeHTTP model anyway. We'd need to
// make the ResponseWriter an optional
// "ExpectReplier" interface or something.
//
}
// The HandlerFunc type is an adapter to allow the use of
-// ordinary functions as HTTP handlers. If f is a function
+// ordinary functions as HTTP handlers. If f is a function
// with the appropriate signature, HandlerFunc(f) is a
// Handler that calls f.
type HandlerFunc func(ResponseWriter, *Request)
// been registered separately.
//
// Patterns may optionally begin with a host name, restricting matches to
-// URLs on that host only. Host-specific patterns take precedence over
+// URLs on that host only. Host-specific patterns take precedence over
// general patterns, so that a handler might register for the two patterns
// "/codesearch" and "codesearch.google.com/" without also taking over
// requests for "http://www.google.com/".
}
// Serve accepts incoming HTTP connections on the listener l,
-// creating a new service goroutine for each. The service goroutines
+// creating a new service goroutine for each. The service goroutines
// read requests and then call handler to reply to them.
// Handler is typically nil, in which case the DefaultServeMux is used.
func Serve(l net.Listener, handler Handler) error {
// TLSNextProto optionally specifies a function to take over
// ownership of the provided TLS connection when an NPN
- // protocol upgrade has occurred. The map key is the protocol
+ // protocol upgrade has occurred. The map key is the protocol
// name negotiated. The Handler argument should be used to
// handle HTTP requests and will initialize the Request's TLS
- // and RemoteAddr if not already set. The connection is
+ // and RemoteAddr if not already set. The connection is
// automatically closed when the function returns.
// If TLSNextProto is nil, HTTP/2 support is enabled automatically.
TLSNextProto map[string]func(*Server, *tls.Conn, Handler)
// For HTTP/2, StateActive fires on the transition from zero
// to one active request, and only transitions away once all
// active requests are complete. That means that ConnState
- // can not be used to do per-request work; ConnState only notes
+ // cannot be used to do per-request work; ConnState only notes
// the overall state of the connection.
StateActive
// Accepted connections are configured to enable TCP keep-alives.
//
// Filenames containing a certificate and matching private key for the
-// server must be provided if the Server's TLSConfig.Certificates is
-// not populated. If the certificate is signed by a certificate
-// authority, the certFile should be the concatenation of the server's
-// certificate, any intermediates, and the CA's certificate.
+// server must be provided if neither the Server's TLSConfig.Certificates
+// nor TLSConfig.GetCertificate are populated. If the certificate is
+// signed by a certificate authority, the certFile should be the
+// concatenation of the server's certificate, any intermediates, and
+// the CA's certificate.
//
// If srv.Addr is blank, ":https" is used.
//
config.NextProtos = append(config.NextProtos, "http/1.1")
}
- if len(config.Certificates) == 0 || certFile != "" || keyFile != "" {
+ configHasCert := len(config.Certificates) > 0 || config.GetCertificate != nil
+ if !configHasCert || certFile != "" || keyFile != "" {
var err error
config.Certificates = make([]tls.Certificate, 1)
config.Certificates[0], err = tls.LoadX509KeyPair(certFile, keyFile)
body string
// timeout returns the channel of a *time.Timer and
- // cancelTimer cancels it. They're stored separately for
+ // cancelTimer cancels it. They're stored separately for
// testing purposes.
timeout func() <-chan time.Time // returns channel producing a timeout
cancelTimer func() bool // optional
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// DetectContentType implements the algorithm described
// at http://mimesniff.spec.whatwg.org/ to determine the
-// Content-Type of the given data. It considers at most the
-// first 512 bytes of data. DetectContentType always returns
+// Content-Type of the given data. It considers at most the
+// first 512 bytes of data. DetectContentType always returns
// a valid MIME type: if it cannot determine a more specific one, it
// returns "application/octet-stream".
func DetectContentType(data []byte) string {
}
// bodyAllowedForStatus reports whether a given response status code
-// permits a body. See RFC2616, section 4.4.
+// permits a body. See RFC2616, section 4.4.
func bodyAllowedForStatus(status int) bool {
switch {
case status >= 100 && status <= 199:
}
}
- // Prepare body reader. ContentLength < 0 means chunked encoding
+ // Prepare body reader. ContentLength < 0 means chunked encoding
// or close connection when finished, since multipart is not supported yet
switch {
case chunked(t.TransferEncoding):
func shouldClose(major, minor int, header Header, removeCloseHeader bool) bool {
if major < 1 {
return true
- } else if major == 1 && minor == 0 {
- vv := header["Connection"]
- if headerValuesContainsToken(vv, "close") || !headerValuesContainsToken(vv, "keep-alive") {
- return true
- }
- return false
- } else {
- if headerValuesContainsToken(header["Connection"], "close") {
- if removeCloseHeader {
- header.Del("Connection")
- }
- return true
- }
}
- return false
+
+ conv := header["Connection"]
+ hasClose := headerValuesContainsToken(conv, "close")
+ if major == 1 && minor == 0 {
+ return hasClose || !headerValuesContainsToken(conv, "keep-alive")
+ }
+
+ if hasClose && removeCloseHeader {
+ header.Del("Connection")
+ }
+
+ return hasClose
}
// Parse the trailer header
}
// Make sure there's a header terminator coming up, to prevent
- // a DoS with an unbounded size Trailer. It's not easy to
+ // a DoS with an unbounded size Trailer. It's not easy to
// slip in a LimitReader here, as textproto.NewReader requires
- // a concrete *bufio.Reader. Also, we can't get all the way
+ // a concrete *bufio.Reader. Also, we can't get all the way
// back up to our conn's LimitedReader that *might* be backing
- // this bufio.Reader. Instead, a hack: we iteratively Peek up
+ // this bufio.Reader. Instead, a hack: we iteratively Peek up
// to the bufio.Reader's max size, looking for a double CRLF.
// This limits the trailer to the underlying buffer size, typically 4kB.
if !seeUpcomingDoubleCRLF(b.r) {
DisableCompression bool
// MaxIdleConnsPerHost, if non-zero, controls the maximum idle
- // (keep-alive) to keep per-host. If zero,
+	// (keep-alive) connections to keep per-host. If zero,
// DefaultMaxIdleConnsPerHost is used.
MaxIdleConnsPerHost int
// TLSNextProto specifies how the Transport switches to an
// alternate protocol (such as HTTP/2) after a TLS NPN/ALPN
- // protocol negotiation. If Transport dials an TLS connection
+	// protocol negotiation. If Transport dials a TLS connection
// with a non-empty protocol name and TLSNextProto contains a
// map entry for that key (such as "h2"), then the func is
// called with the request's authority (such as "example.com"
return
}
if t.TLSNextProto != nil {
+ // This is the documented way to disable http2 on a
+ // Transport.
+ return
+ }
+ if t.TLSClientConfig != nil {
+ // Be conservative for now (for Go 1.6) at least and
+ // don't automatically enable http2 if they've
+ // specified a custom TLS config. Let them opt-in
+ // themselves via http2.ConfigureTransport so we don't
+ // surprise them by modifying their tls.Config.
+ // Issue 14275.
+ return
+ }
+ if t.ExpectContinueTimeout != 0 && t != DefaultTransport {
+ // ExpectContinueTimeout is unsupported in http2, so
+ // if they explicitly asked for it (as opposed to just
+ // using the DefaultTransport, which sets it), then
+ // disable http2 for now.
+ //
+ // Issue 13851. (and changed in Issue 14391)
return
}
t2, err := http2configureTransport(t)
// Get the cached or newly-created connection to either the
// host (for http or https), the http proxy, or the http proxy
- // pre-CONNECTed to https server. In any case, we'll be ready
+ // pre-CONNECTed to https server. In any case, we'll be ready
// to send it requests.
pconn, err := t.getConn(req, cm)
if err != nil {
// CancelRequest cancels an in-flight request by closing its connection.
// CancelRequest should only be called after RoundTrip has returned.
//
-// Deprecated: Use Request.Cancel instead. CancelRequest can not cancel
+// Deprecated: Use Request.Cancel instead. CancelRequest cannot cancel
// HTTP/2 requests.
func (t *Transport) CancelRequest(req *Request) {
t.reqMu.Lock()
// We're done with this pconn and somebody else is
// currently waiting for a conn of this type (they're
// actively dialing, but this conn is ready
- // first). Chrome calls this socket late binding. See
+ // first). Chrome calls this socket late binding. See
// https://insouciant.org/tech/connection-management-in-chromium/
t.idleMu.Unlock()
return nil
}
// getConn dials and creates a new persistConn to the target as
-// specified in the connectMethod. This includes doing a proxy CONNECT
+// specified in the connectMethod. This includes doing a proxy CONNECT
// and/or setting up TLS. If this doesn't return an error, the persistConn
// is ready to write requests to.
func (t *Transport) getConn(req *Request, cm connectMethod) (*persistConn, error) {
// (but may be used for non-keep-alive requests as well)
type persistConn struct {
// alt optionally specifies the TLS NextProto RoundTripper.
- // This is used for HTTP/2 today and future protocol laters.
+ // This is used for HTTP/2 today and future protocols later.
// If it's non-nil, the rest of the fields are unused.
alt RoundTripper
req *transportRequest
ch chan<- error
- // Optional blocking chan for Expect: 100-continue (for recieve).
+ // Optional blocking chan for Expect: 100-continue (for receive).
// If not nil, writeLoop blocks sending request body until
// it receives from this chan.
continueCh <-chan struct{}
// handlePendingDial's putOrCloseIdleConn when
// it turns out the abandoned connection in
// flight ended up negotiating an alternate
- // protocol. We don't use the connection
+ // protocol. We don't use the connection
// freelist for http2. That's done by the
// alternate protocol's RoundTripper.
} else {
// This test has an expected race. Sleeping for 25 ms prevents
// it on most fast machines, causing the next fetch() call to
- // succeed quickly. But if we do get errors, fetch() will retry 5
+ // succeed quickly. But if we do get errors, fetch() will retry 5
// times with some delays between.
time.Sleep(25 * time.Millisecond)
// after each request completes, regardless of whether it failed.
// If these are too high, OS X exhausts its ephemeral ports
// and hangs waiting for them to transition TCP states. That's
- // not what we want to test. TODO(bradfitz): use an io.Pipe
+ // not what we want to test. TODO(bradfitz): use an io.Pipe
// dialer for this test instead?
const (
numClients = 20
{path: "/100", body: []byte("hello"), sent: 5, status: 200}, // Got 100 followed by 200, entire body is sent.
{path: "/200", body: []byte("hello"), sent: 0, status: 200}, // Got 200 without 100. body isn't sent.
{path: "/500", body: []byte("hello"), sent: 0, status: 500}, // Got 500 without 100. body isn't sent.
- {path: "/keepalive", body: []byte("hello"), sent: 0, status: 500}, // Althogh without Connection:close, body isn't sent.
+ {path: "/keepalive", body: []byte("hello"), sent: 0, status: 500}, // Although without Connection:close, body isn't sent.
{path: "/timeout", body: []byte("hello"), sent: 5, status: 200}, // Timeout exceeded and entire body is sent.
}
growth := nfinal - n0
- // We expect 0 or 1 extra goroutine, empirically. Allow up to 5.
+ // We expect 0 or 1 extra goroutine, empirically. Allow up to 5.
// Previously we were leaking one per numReq.
if int(growth) > 5 {
t.Logf("goroutine growth: %d -> %d -> %d (delta: %d)", n0, nhigh, nfinal, growth)
growth := nfinal - n0
- // We expect 0 or 1 extra goroutine, empirically. Allow up to 5.
+ // We expect 0 or 1 extra goroutine, empirically. Allow up to 5.
// Previously we were leaking one per numReq.
t.Logf("goroutine growth: %d -> %d -> %d (delta: %d)", n0, nhigh, nfinal, growth)
if int(growth) > 5 {
}
// Test that the transport doesn't close the TCP connection early,
-// before the response body has been read. This was a regression
-// which sadly lacked a triggering test. The large response body made
+// before the response body has been read. This was a regression
+// which sadly lacked a triggering test. The large response body made
// the old race easier to trigger.
func TestIssue3644(t *testing.T) {
defer afterTest(t)
// Due to the Transport's "socket late binding" (see
// idleConnCh in transport.go), the numReqs HTTP requests
- // below can finish with a dial still outstanding. To keep
+ // below can finish with a dial still outstanding. To keep
// the leak checker happy, keep track of pending dials and
// wait for them to finish (and be closed or returned to the
// idle pool) before we close idle connections.
}
// byteFromChanReader is an io.Reader that reads a single byte at a
-// time from the channel. When the channel is closed, the reader
+// time from the channel. When the channel is closed, the reader
// returns io.EOF.
type byteFromChanReader chan byte
// After the fix to unblock TCP Reads in
// https://golang.org/cl/15941, this sleep is required
// on plan9 to make sure TCP Writes before an
- // immediate TCP close go out on the wire. On Plan 9,
+ // immediate TCP close go out on the wire. On Plan 9,
// it seems that a hangup of a TCP connection with
// queued data doesn't send the queued data first.
// https://golang.org/issue/9554
// from (or finish writing to) the socket.
//
// NOTE: we resend a request only if the request is idempotent, we reused a
-// keep-alive connection, and we haven't yet received any header data. This
+// keep-alive connection, and we haven't yet received any header data. This
// automatically prevents an infinite resend loop because we'll run out of the
// cached keep-alive connections eventually.
func TestRetryIdempotentRequestsOnError(t *testing.T) {
}
func TestTransportAutomaticHTTP2(t *testing.T) {
- tr := &Transport{}
- _, err := tr.RoundTrip(new(Request))
- if err == nil {
- t.Error("expected error from RoundTrip")
- }
- if tr.TLSNextProto["h2"] == nil {
- t.Errorf("HTTP/2 not registered.")
- }
+ testTransportAutoHTTP(t, &Transport{}, true)
+}
+
+// golang.org/issue/14391: also check DefaultTransport
+func TestTransportAutomaticHTTP2_DefaultTransport(t *testing.T) {
+ testTransportAutoHTTP(t, DefaultTransport.(*Transport), true)
+}
+
+func TestTransportAutomaticHTTP2_TLSNextProto(t *testing.T) {
+ testTransportAutoHTTP(t, &Transport{
+ TLSNextProto: make(map[string]func(string, *tls.Conn) RoundTripper),
+ }, false)
+}
+
+func TestTransportAutomaticHTTP2_TLSConfig(t *testing.T) {
+ testTransportAutoHTTP(t, &Transport{
+ TLSClientConfig: new(tls.Config),
+ }, false)
+}
+
+func TestTransportAutomaticHTTP2_ExpectContinueTimeout(t *testing.T) {
+ testTransportAutoHTTP(t, &Transport{
+ ExpectContinueTimeout: 1 * time.Second,
+ }, false)
+}
- // Now with TLSNextProto set:
- tr = &Transport{TLSNextProto: make(map[string]func(string, *tls.Conn) RoundTripper)}
- _, err = tr.RoundTrip(new(Request))
+func testTransportAutoHTTP(t *testing.T, tr *Transport, wantH2 bool) {
+ _, err := tr.RoundTrip(new(Request))
if err == nil {
t.Error("expected error from RoundTrip")
}
- if tr.TLSNextProto["h2"] != nil {
- t.Errorf("HTTP/2 registered, despite non-nil TLSNextProto field")
+ if reg := tr.TLSNextProto["h2"] != nil; reg != wantH2 {
+ t.Errorf("HTTP/2 registered = %v; want %v", reg, wantH2)
}
}
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
)
// Interface represents a mapping between network interface name
-// and index. It also represents network interface facility
+// and index. It also represents network interface facility
// information.
type Interface struct {
Index int // positive integer that starts at one, zero is never used
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
)
// If the ifindex is zero, interfaceTable returns mappings of all
-// network interfaces. Otherwise it returns a mapping of a specific
+// network interfaces. Otherwise it returns a mapping of a specific
// interface.
func interfaceTable(ifindex int) ([]Interface, error) {
tab, err := syscall.RouteRIB(syscall.NET_RT_IFLIST, ifindex)
}
// If the ifi is nil, interfaceAddrTable returns addresses for all
-// network interfaces. Otherwise it returns addresses for a specific
+// network interfaces. Otherwise it returns addresses for a specific
// interface.
func interfaceAddrTable(ifi *Interface) ([]Addr, error) {
index := 0
case *syscall.SockaddrInet6:
ifa.IP = make(IP, IPv6len)
copy(ifa.IP, sa.Addr[:])
- // NOTE: KAME based IPv6 protcol stack usually embeds
+ // NOTE: KAME based IPv6 protocol stack usually embeds
// the interface index in the interface-local or
// link-local address as the kernel-internal form.
if ifa.IP.IsLinkLocalUnicast() {
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
case *syscall.SockaddrInet6:
ifma := IPAddr{IP: make(IP, IPv6len)}
copy(ifma.IP, sa.Addr[:])
- // NOTE: KAME based IPv6 protcol stack usually embeds
+ // NOTE: KAME based IPv6 protocol stack usually embeds
// the interface index in the interface-local or
// link-local address as the kernel-internal form.
if ifma.IP.IsInterfaceLocalMulticast() || ifma.IP.IsLinkLocalMulticast() {
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
case *syscall.SockaddrInet6:
ifma := IPAddr{IP: make(IP, IPv6len)}
copy(ifma.IP, sa.Addr[:])
- // NOTE: KAME based IPv6 protcol stack usually embeds
+ // NOTE: KAME based IPv6 protocol stack usually embeds
// the interface index in the interface-local or
// link-local address as the kernel-internal form.
if ifma.IP.IsInterfaceLocalMulticast() || ifma.IP.IsLinkLocalMulticast() {
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
)
// If the ifindex is zero, interfaceTable returns mappings of all
-// network interfaces. Otherwise it returns a mapping of a specific
+// network interfaces. Otherwise it returns a mapping of a specific
// interface.
func interfaceTable(ifindex int) ([]Interface, error) {
tab, err := syscall.NetlinkRIB(syscall.RTM_GETLINK, syscall.AF_UNSPEC)
}
// If the ifi is nil, interfaceAddrTable returns addresses for all
-// network interfaces. Otherwise it returns addresses for a specific
+// network interfaces. Otherwise it returns addresses for a specific
// interface.
func interfaceAddrTable(ifi *Interface) ([]Addr, error) {
tab, err := syscall.NetlinkRIB(syscall.RTM_GETADDR, syscall.AF_UNSPEC)
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package net
// If the ifindex is zero, interfaceTable returns mappings of all
-// network interfaces. Otherwise it returns a mapping of a specific
+// network interfaces. Otherwise it returns a mapping of a specific
// interface.
func interfaceTable(ifindex int) ([]Interface, error) {
return nil, nil
}
// If the ifi is nil, interfaceAddrTable returns addresses for all
-// network interfaces. Otherwise it returns addresses for a specific
+// network interfaces. Otherwise it returns addresses for a specific
// interface.
func interfaceAddrTable(ifi *Interface) ([]Addr, error) {
return nil, nil
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
)
// loopbackInterface returns an available logical network interface
-// for loopback tests. It returns nil if no suitable interface is
+// for loopback tests. It returns nil if no suitable interface is
// found.
func loopbackInterface() *Interface {
ift, err := Interfaces()
}
// Test the existence of connected unicast routes for
// IPv6. We can assume the existence of ::1/128 when
- // at least one looopback interface is installed.
+ // at least one loopback interface is installed.
if supportsIPv6 && stats.loop > 0 && stats.uni6 == 0 {
t.Errorf("num IPv6 unicast routes = 0; want >0; summary: %+v", stats)
}
}
// Test the existence of connected unicast routes for IPv6.
// We can assume the existence of ::1/128 when at least one
- // looopback interface is installed.
+ // loopback interface is installed.
if supportsIPv6 && stats.loop > 0 && stats.uni6 == 0 {
t.Errorf("num IPv6 unicast routes = 0; want >0; summary: %+v", stats)
}
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
for i := 0; i < 3; i++ {
ti := &testInterface{local: local, remote: remote}
if err := ti.setPointToPoint(5963 + i); err != nil {
- t.Skipf("test requries external command: %v", err)
+ t.Skipf("test requires external command: %v", err)
}
if err := ti.setup(); err != nil {
t.Fatal(err)
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// If the ifindex is zero, interfaceTable returns mappings of all
-// network interfaces. Otherwise it returns a mapping of a specific
+// network interfaces. Otherwise it returns a mapping of a specific
// interface.
func interfaceTable(ifindex int) ([]Interface, error) {
aas, err := adapterAddresses()
}
// If the ifi is nil, interfaceAddrTable returns addresses for all
-// network interfaces. Otherwise it returns addresses for a specific
+// network interfaces. Otherwise it returns addresses for a specific
// interface.
func interfaceAddrTable(ifi *Interface) ([]Addr, error) {
aas, err := adapterAddresses()
FilterSocket FilterType = iota // for Socket
FilterConnect // for Connect or ConnectEx
FilterListen // for Listen
- FilterAccept // for Accept or Accept4
+ FilterAccept // for Accept, Accept4 or AcceptEx
FilterGetsockoptInt // for GetsockoptInt
FilterClose // for Close or Closesocket
)
sw.stats.getLocked(so.Cookie).Listened++
return nil
}
+
+// AcceptEx wraps syscall.AcceptEx.
+func (sw *Switch) AcceptEx(ls syscall.Handle, as syscall.Handle, b *byte, rxdatalen uint32, laddrlen uint32, raddrlen uint32, rcvd *uint32, overlapped *syscall.Overlapped) error {
+ so := sw.sockso(ls)
+ if so == nil {
+ return syscall.AcceptEx(ls, as, b, rxdatalen, laddrlen, raddrlen, rcvd, overlapped)
+ }
+ sw.fmu.RLock()
+ f, _ := sw.fltab[FilterAccept]
+ sw.fmu.RUnlock()
+
+ af, err := f.apply(so)
+ if err != nil {
+ return err
+ }
+ so.Err = syscall.AcceptEx(ls, as, b, rxdatalen, laddrlen, raddrlen, rcvd, overlapped)
+ if err = af.apply(so); err != nil {
+ return err
+ }
+
+ sw.smu.Lock()
+ defer sw.smu.Unlock()
+ if so.Err != nil {
+ sw.stats.getLocked(so.Cookie).AcceptFailed++
+ return so.Err
+ }
+ nso := sw.addLocked(as, so.Cookie.Family(), so.Cookie.Type(), so.Cookie.Protocol())
+ sw.stats.getLocked(nso.Cookie).Accepted++
+ return nil
+}
// RFC 4632 and RFC 4291.
//
// It returns the IP address and the network implied by the IP
-// and mask. For example, ParseCIDR("192.168.100.1/16") returns
+// and mask. For example, ParseCIDR("192.168.100.1/16") returns
// the IP address 192.168.100.1 and the network 192.168.0.0/16.
func ParseCIDR(s string) (IP, *IPNet, error) {
i := byteIndex(s, '/')
{"", "0", ":0"},
{"google.com", "https%foo", "google.com:https%foo"}, // Go 1.0 behavior
- {"127.0.0.1", "", "127.0.0.1:"}, // Go 1.0 behaviour
- {"www.google.com", "", "www.google.com:"}, // Go 1.0 behaviour
+ {"127.0.0.1", "", "127.0.0.1:"}, // Go 1.0 behavior
+ {"www.google.com", "", "www.google.com:"}, // Go 1.0 behavior
}
var splitFailureTests = []struct {
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// ReadMsgIP reads a packet from c, copying the payload into b and the
-// associated out-of-band data into oob. It returns the number of
+// associated out-of-band data into oob. It returns the number of
// bytes copied into b, the number of bytes copied into oob, the flags
// that were set on the packet and the source address of the packet.
func (c *IPConn) ReadMsgIP(b, oob []byte) (n, oobn, flags int, addr *IPAddr, err error) {
//
// WriteToIP can be made to time out and return an error with
// Timeout() == true after a fixed time limit; see SetDeadline and
-// SetWriteDeadline. On packet-oriented connections, write timeouts
+// SetWriteDeadline. On packet-oriented connections, write timeouts
// are rare.
func (c *IPConn) WriteToIP(b []byte, addr *IPAddr) (int, error) {
return 0, &OpError{Op: "write", Net: c.fd.dir, Source: c.fd.laddr, Addr: addr.opAddr(), Err: syscall.EPLAN9}
}
// WriteMsgIP writes a packet to addr via c, copying the payload from
-// b and the associated out-of-band data from oob. It returns the
+// b and the associated out-of-band data from oob. It returns the
// number of payload and out-of-band bytes written.
func (c *IPConn) WriteMsgIP(b, oob []byte, addr *IPAddr) (n, oobn int, err error) {
return 0, 0, &OpError{Op: "write", Net: c.fd.dir, Source: c.fd.laddr, Addr: addr.opAddr(), Err: syscall.EPLAN9}
}
// ListenIP listens for incoming IP packets addressed to the local
-// address laddr. The returned connection's ReadFrom and WriteTo
+// address laddr. The returned connection's ReadFrom and WriteTo
// methods can be used to receive and send IP packets with per-packet
// addressing.
func ListenIP(netProto string, laddr *IPAddr) (*IPConn, error) {
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// ReadMsgIP reads a packet from c, copying the payload into b and the
-// associated out-of-band data into oob. It returns the number of
+// associated out-of-band data into oob. It returns the number of
// bytes copied into b, the number of bytes copied into oob, the flags
// that were set on the packet and the source address of the packet.
func (c *IPConn) ReadMsgIP(b, oob []byte) (n, oobn, flags int, addr *IPAddr, err error) {
//
// WriteToIP can be made to time out and return an error with
// Timeout() == true after a fixed time limit; see SetDeadline and
-// SetWriteDeadline. On packet-oriented connections, write timeouts
+// SetWriteDeadline. On packet-oriented connections, write timeouts
// are rare.
func (c *IPConn) WriteToIP(b []byte, addr *IPAddr) (int, error) {
if !c.ok() {
}
// WriteMsgIP writes a packet to addr via c, copying the payload from
-// b and the associated out-of-band data from oob. It returns the
+// b and the associated out-of-band data from oob. It returns the
// number of payload and out-of-band bytes written.
func (c *IPConn) WriteMsgIP(b, oob []byte, addr *IPAddr) (n, oobn int, err error) {
if !c.ok() {
}
// ListenIP listens for incoming IP packets addressed to the local
-// address laddr. The returned connection's ReadFrom and WriteTo
+// address laddr. The returned connection's ReadFrom and WriteTo
// methods can be used to receive and send IP packets with per-packet
// addressing.
func ListenIP(netProto string, laddr *IPAddr) (*IPConn, error) {
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// supportsIPv4map reports whether the platform supports
// mapping an IPv4 address inside an IPv6 address at transport
- // layer protocols. See RFC 4291, RFC 4038 and RFC 3493.
+ // layer protocols. See RFC 4291, RFC 4038 and RFC 3493.
supportsIPv4map bool
)
// SplitHostPort splits a network address of the form "host:port",
// "[host]:port" or "[ipv6-host%zone]:port" into host or
-// ipv6-host%zone and port. A literal address or host name for IPv6
+// ipv6-host%zone and port. A literal address or host name for IPv6
// must be enclosed in square brackets, as in "[::1]:80",
// "[ipv6-host]:http" or "[ipv6-host%zone]:80".
func SplitHostPort(hostport string) (host, port string, err error) {
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
return probe(netdir+"/iproute", "4i")
}
-// probeIPv6Stack returns two boolean values. If the first boolean
-// value is true, kernel supports basic IPv6 functionality. If the
+// probeIPv6Stack returns two boolean values. If the first boolean
+// value is true, kernel supports basic IPv6 functionality. If the
// second boolean value is true, kernel supports IPv6 IPv4-mapping.
func probeIPv6Stack() (supportsIPv6, supportsIPv4map bool) {
// Plan 9 uses IPv6 natively, see ip(3).
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Should we try to use the IPv4 socket interface if we're
// only dealing with IPv4 sockets? As long as the host system
// understands IPv6, it's okay to pass IPv4 addresses to the IPv6
-// interface. That simplifies our code and is most general.
+// interface. That simplifies our code and is most general.
// Unfortunately, we need to run on kernels built without IPv6
-// support too. So probe the kernel to figure it out.
+// support too. So probe the kernel to figure it out.
//
// probeIPv6Stack probes both basic IPv6 capability and IPv6 IPv4-
// mapping capability which is controlled by IPV6_V6ONLY socket
// option and/or kernel state "net.inet6.ip6.v6only".
-// It returns two boolean values. If the first boolean value is
-// true, kernel supports basic IPv6 functionality. If the second
+// It returns two boolean values. If the first boolean value is
+// true, kernel supports basic IPv6 functionality. If the second
// boolean value is true, kernel supports IPv6 IPv4-mapping.
func probeIPv6Stack() (supportsIPv6, supportsIPv4map bool) {
var probes = []struct {
// Some released versions of DragonFly BSD pretend to
// accept IPV6_V6ONLY=0 successfully, but the state
// still stays IPV6_V6ONLY=1. Eventually DragonFly BSD
- // stops preteding, but the transition period would
+ // stops pretending, but the transition period would
// cause unpredictable behavior and we need to avoid
// it.
//
}
// favoriteAddrFamily returns the appropriate address family to
-// the given net, laddr, raddr and mode. At first it figures
-// address family out from the net. If mode indicates "listen"
+// the given net, laddr, raddr and mode. At first it figures
+// address family out from the net. If mode indicates "listen"
// and laddr is a wildcard, it assumes that the user wants to
// make a passive connection with a wildcard address family, both
// AF_INET and AF_INET6, and a wildcard address like following:
}
// IPv4 callers use 0.0.0.0 to mean "announce on any available address".
// In IPv6 mode, Linux treats that as meaning "announce on 0.0.0.0",
- // which it refuses to do. Rewrite to the IPv6 unspecified address.
+ // which it refuses to do. Rewrite to the IPv6 unspecified address.
if ip.Equal(IPv4zero) {
ip = IPv6zero
}
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
network2, address2 string // second listener
xerr error // expected error value, nil or other
}{
- // Test cases and expected results for the attemping 2nd listen on the same port
+	// Test cases and expected results for attempting a 2nd listen on the same port
// 1st listen 2nd listen darwin freebsd linux openbsd
// ------------------------------------------------------------------------------------
// "tcp" "" "tcp" "" - - - -
// TestDualStackTCPListener tests both single and double listen
// to a test listener with various address families, different
// listening address and same port.
+//
+// On DragonFly BSD, we expect the kernel version of node under test
+// to be greater than or equal to 4.4.
func TestDualStackTCPListener(t *testing.T) {
switch runtime.GOOS {
- case "dragonfly", "nacl", "plan9": // re-enable on dragonfly once the new IP control block management has landed
+ case "nacl", "plan9":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv4 || !supportsIPv6 {
}
// TestDualStackUDPListener tests both single and double listen
-// to a test listener with various address families, differnet
+// to a test listener with various address families, different
// listening address and same port.
+//
+// On DragonFly BSD, we expect the kernel version of node under test
+// to be greater than or equal to 4.4.
func TestDualStackUDPListener(t *testing.T) {
switch runtime.GOOS {
- case "dragonfly", "nacl", "plan9": // re-enable on dragonfly once the new IP control block management has landed
+ case "nacl", "plan9":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv4 || !supportsIPv6 {
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// We could push the deadline down into the name resolution
- // functions. However, the most commonly used implementation
+ // functions. However, the most commonly used implementation
// calls getaddrinfo, which has no timeout.
timeout := deadline.Sub(time.Now())
select {
case <-t.C:
- // The DNS lookup timed out for some reason. Force
+ // The DNS lookup timed out for some reason. Force
// future requests to start the DNS lookup again
// rather than waiting for the current lookup to
- // complete. See issue 8602.
+ // complete. See issue 8602.
lookupGroup.Forget(host)
return nil, errTimeout
}
// LookupSRV tries to resolve an SRV query of the given service,
-// protocol, and domain name. The proto is "tcp" or "udp".
+// protocol, and domain name. The proto is "tcp" or "udp".
// The returned records are sorted by priority and randomized
// by weight within a priority.
//
// LookupSRV constructs the DNS name to look up following RFC 2782.
-// That is, it looks up _service._proto.name. To accommodate services
+// That is, it looks up _service._proto.name. To accommodate services
// publishing SRV records under non-standard names, if both service
// and proto are empty strings, LookupSRV looks up name directly.
func LookupSRV(service, proto, name string) (cname string, addrs []*SRV, err error) {
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
--- /dev/null
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build !nacl,!plan9,!windows
+
+package net
+
+// forceGoDNS forces the resolver configuration to use the pure Go resolver
+// and returns a fixup function to restore the old settings.
+func forceGoDNS() func() {
+ c := systemConf()
+ oldGo := c.netGo
+ oldCgo := c.netCgo
+ fixup := func() {
+ c.netGo = oldGo
+ c.netCgo = oldCgo
+ }
+ c.netGo = true
+ c.netCgo = false
+ return fixup
+}
+
+// forceCgoDNS forces the resolver configuration to use the cgo resolver
+// and returns a fixup function to restore the old settings.
+// (On non-Unix systems forceCgoDNS returns nil.)
+func forceCgoDNS() func() {
+ c := systemConf()
+ oldGo := c.netGo
+ oldCgo := c.netCgo
+ fixup := func() {
+ c.netGo = oldGo
+ c.netCgo = oldCgo
+ }
+ c.netGo = false
+ c.netCgo = true
+ return fixup
+}
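
Both helpers above follow the same save-mutate-restore shape: record the old settings, flip them, and hand back a closure that undoes the change. A generic sketch of that pattern (the `conf` struct and `force` helper here are illustrative stand-ins, not the package's types):

```go
package main

import "fmt"

type conf struct{ netGo, netCgo bool }

// force flips the resolver flags and returns a fixup closure that
// restores the previous values, in the style of forceGoDNS above.
func force(c *conf, useGo bool) func() {
	oldGo, oldCgo := c.netGo, c.netCgo
	c.netGo, c.netCgo = useGo, !useGo
	return func() { c.netGo, c.netCgo = oldGo, oldCgo }
}

func main() {
	c := &conf{}
	fixup := force(c, true)
	fmt.Println(c.netGo, c.netCgo) // true false
	fixup()
	fmt.Println(c.netGo, c.netCgo) // false false
}
```

Callers typically pair this with defer: `defer forceGoDNS()()` mutates and schedules the restore in one statement.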
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
import "runtime"
-// See unix_test.go for what these (don't) do.
+// See main_conf_test.go for what these (don't) do.
func forceGoDNS() func() {
switch runtime.GOOS {
case "plan9", "windows":
}
}
-// See unix_test.go for what these (don't) do.
+// See main_conf_test.go for what these (don't) do.
func forceCgoDNS() func() { return nil }
origConnect = connectFunc
origConnectEx = connectExFunc
origListen = listenFunc
+ origAccept = acceptFunc
)
func installTestHooks() {
connectFunc = sw.Connect
connectExFunc = sw.ConnectEx
listenFunc = sw.Listen
+ acceptFunc = sw.AcceptEx
}
func uninstallTestHooks() {
connectFunc = origConnect
connectExFunc = origConnectEx
listenFunc = origListen
+ acceptFunc = origAccept
}
func forceCloseSockets() {
func newLocalListener(network string) (Listener, error) {
switch network {
- case "tcp", "tcp4", "tcp6":
+ case "tcp":
+ if supportsIPv4 {
+ if ln, err := Listen("tcp4", "127.0.0.1:0"); err == nil {
+ return ln, nil
+ }
+ }
+ if supportsIPv6 {
+ return Listen("tcp6", "[::1]:0")
+ }
+ case "tcp4":
if supportsIPv4 {
return Listen("tcp4", "127.0.0.1:0")
}
+ case "tcp6":
if supportsIPv6 {
return Listen("tcp6", "[::1]:0")
}
// Multiple goroutines may invoke methods on a PacketConn simultaneously.
type PacketConn interface {
// ReadFrom reads a packet from the connection,
- // copying the payload into b. It returns the number of
+ // copying the payload into b. It returns the number of
// bytes copied into b and the return address that
// was on the packet.
// ReadFrom can be made to time out and return
import (
"io"
+ "net/internal/socktest"
"os"
"runtime"
"testing"
}
t.Fatalf("failed to listen/close/listen on same address after %d tries", maxTries)
}
+
+// See golang.org/issue/6163, golang.org/issue/6987.
+func TestAcceptIgnoreAbortedConnRequest(t *testing.T) {
+ switch runtime.GOOS {
+ case "plan9":
+ t.Skipf("%s does not have full support of socktest", runtime.GOOS)
+ }
+
+ syserr := make(chan error)
+ go func() {
+ defer close(syserr)
+ for _, err := range abortedConnRequestErrors {
+ syserr <- err
+ }
+ }()
+ sw.Set(socktest.FilterAccept, func(so *socktest.Status) (socktest.AfterFilter, error) {
+ if err, ok := <-syserr; ok {
+ return nil, err
+ }
+ return nil, nil
+ })
+ defer sw.Set(socktest.FilterAccept, nil)
+
+ operr := make(chan error, 1)
+ handler := func(ls *localServer, ln Listener) {
+ defer close(operr)
+ c, err := ln.Accept()
+ if err != nil {
+ if perr := parseAcceptError(err); perr != nil {
+ operr <- perr
+ }
+ operr <- err
+ return
+ }
+ c.Close()
+ }
+ ls, err := newLocalServer("tcp")
+ if err != nil {
+ t.Fatal(err)
+ }
+ defer ls.teardown()
+ if err := ls.buildup(handler); err != nil {
+ t.Fatal(err)
+ }
+
+ c, err := Dial(ls.Listener.Addr().Network(), ls.Listener.Addr().String())
+ if err != nil {
+ t.Fatal(err)
+ }
+ c.Close()
+
+ for err := range operr {
+ t.Error(err)
+ }
+}
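
The test above injects `abortedConnRequestErrors` into the accept path and expects them to be invisible to callers. The behavior being verified is a retry loop that skips aborted connection requests; a standalone sketch under our own names (`errAborted` stands in for the platform's ECONNABORTED-style errors):

```go
package main

import (
	"errors"
	"fmt"
)

var errAborted = errors.New("connection aborted") // stand-in for an aborted conn request error

// acceptIgnoringAborted retries accept when the pending connection
// was aborted by the peer before we could accept it, mirroring the
// behavior TestAcceptIgnoreAbortedConnRequest checks.
func acceptIgnoringAborted(accept func() (int, error)) (int, error) {
	for {
		fd, err := accept()
		if errors.Is(err, errAborted) {
			continue // peer gave up; keep waiting for the next request
		}
		return fd, err
	}
}

func main() {
	calls := 0
	fd, err := acceptIgnoringAborted(func() (int, error) {
		calls++
		if calls < 3 {
			return -1, errAborted // two aborted requests, then success
		}
		return 7, nil
	})
	fmt.Println(fd, err, calls) // 7 <nil> 3
}
```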
}
}
-func netshInterfaceIPv4ShowAddress(name string) ([]string, error) {
- out, err := runCmd("netsh", "interface", "ipv4", "show", "address", "name=\""+name+"\"")
- if err != nil {
- return nil, err
- }
- // adress information is listed like:
+func netshInterfaceIPv4ShowAddress(name string, netshOutput []byte) []string {
+ // Address information is listed like:
+ //
+ //Configuration for interface "Local Area Connection"
+ // DHCP enabled: Yes
// IP Address: 10.0.0.2
// Subnet Prefix: 10.0.0.0/24 (mask 255.255.255.0)
// IP Address: 10.0.0.3
// Subnet Prefix: 10.0.0.0/24 (mask 255.255.255.0)
+ // Default Gateway: 10.0.0.254
+ // Gateway Metric: 0
+ // InterfaceMetric: 10
+ //
+ //Configuration for interface "Loopback Pseudo-Interface 1"
+ // DHCP enabled: No
+ // IP Address: 127.0.0.1
+ // Subnet Prefix: 127.0.0.0/8 (mask 255.0.0.0)
+ // InterfaceMetric: 50
+ //
addrs := make([]string, 0)
var addr, subnetprefix string
- lines := bytes.Split(out, []byte{'\r', '\n'})
+ var processingOurInterface bool
+ lines := bytes.Split(netshOutput, []byte{'\r', '\n'})
for _, line := range lines {
+ if !processingOurInterface {
+ if !bytes.HasPrefix(line, []byte("Configuration for interface")) {
+ continue
+ }
+ if !bytes.Contains(line, []byte(`"`+name+`"`)) {
+ continue
+ }
+ processingOurInterface = true
+ continue
+ }
+ if len(line) == 0 {
+ break
+ }
if bytes.Contains(line, []byte("Subnet Prefix:")) {
f := bytes.Split(line, []byte{':'})
if len(f) == 2 {
}
}
}
- return addrs, nil
+ return addrs
}
-func netshInterfaceIPv6ShowAddress(name string) ([]string, error) {
+func netshInterfaceIPv6ShowAddress(name string, netshOutput []byte) []string {
+ // Address information is listed like:
+ //
+ //Address ::1 Parameters
+ //---------------------------------------------------------
+ //Interface Luid : Loopback Pseudo-Interface 1
+ //Scope Id : 0.0
+ //Valid Lifetime : infinite
+ //Preferred Lifetime : infinite
+ //DAD State : Preferred
+ //Address Type : Other
+ //Skip as Source : false
+ //
+ //Address XXXX::XXXX:XXXX:XXXX:XXXX%11 Parameters
+ //---------------------------------------------------------
+ //Interface Luid : Local Area Connection
+ //Scope Id : 0.11
+ //Valid Lifetime : infinite
+ //Preferred Lifetime : infinite
+ //DAD State : Preferred
+ //Address Type : Other
+ //Skip as Source : false
+ //
+
	// TODO: need to test ipv6 netmask too, but netsh does not output it
- out, err := runCmd("netsh", "interface", "ipv6", "show", "address", "interface=\""+name+"\"")
- if err != nil {
- return nil, err
- }
+ var addr string
addrs := make([]string, 0)
- lines := bytes.Split(out, []byte{'\r', '\n'})
+ lines := bytes.Split(netshOutput, []byte{'\r', '\n'})
for _, line := range lines {
+ if addr != "" {
+ if len(line) == 0 {
+ addr = ""
+ continue
+ }
+ if string(line) != "Interface Luid : "+name {
+ continue
+ }
+ addrs = append(addrs, addr)
+ addr = ""
+ continue
+ }
if !bytes.HasPrefix(line, []byte("Address")) {
continue
}
f[0] = []byte(ParseIP(string(f[0])).String())
}
- addrs = append(addrs, string(bytes.ToLower(bytes.TrimSpace(f[0]))))
+ addr = string(bytes.ToLower(bytes.TrimSpace(f[0])))
}
- return addrs, nil
+ return addrs
}
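
Both parsers above walk CRLF-separated netsh output line by line and split on ':' to pull out values. A minimal, self-contained sketch of the "Subnet Prefix:" extraction (the sample output string is ours):

```go
package main

import (
	"bytes"
	"fmt"
)

// subnetPrefixes extracts the value after "Subnet Prefix:" from
// netsh-style output, using the same Split-on-':' pattern as above.
func subnetPrefixes(out []byte) []string {
	var prefixes []string
	for _, line := range bytes.Split(out, []byte{'\r', '\n'}) {
		if !bytes.Contains(line, []byte("Subnet Prefix:")) {
			continue
		}
		f := bytes.Split(line, []byte{':'})
		if len(f) == 2 {
			prefixes = append(prefixes, string(bytes.TrimSpace(f[1])))
		}
	}
	return prefixes
}

func main() {
	sample := "    IP Address:    10.0.0.2\r\n    Subnet Prefix:    10.0.0.0/24 (mask 255.255.255.0)\r\n"
	fmt.Println(subnetPrefixes([]byte(sample))) // [10.0.0.0/24 (mask 255.255.255.0)]
}
```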
func TestInterfaceAddrsWithNetsh(t *testing.T) {
if !isEnglishOS(t) {
t.Skip("English version of OS required for this test")
}
+
+ outIPV4, err := runCmd("netsh", "interface", "ipv4", "show", "address")
+ if err != nil {
+ t.Fatal(err)
+ }
+ outIPV6, err := runCmd("netsh", "interface", "ipv6", "show", "address", "level=verbose")
+ if err != nil {
+ t.Fatal(err)
+ }
+
ift, err := Interfaces()
if err != nil {
t.Fatal(err)
}
sort.Strings(have)
- want, err := netshInterfaceIPv4ShowAddress(ifi.Name)
- if err != nil {
- t.Fatal(err)
- }
- wantIPv6, err := netshInterfaceIPv6ShowAddress(ifi.Name)
- if err != nil {
- t.Fatal(err)
- }
+ want := netshInterfaceIPv4ShowAddress(ifi.Name, outIPV4)
+ wantIPv6 := netshInterfaceIPv6ShowAddress(ifi.Name, outIPV6)
want = append(want, wantIPv6...)
sort.Strings(want)
//
//Connection Name: Bluetooth Network Connection
//Network Adapter: Bluetooth Device (Personal Area Network)
- //Physical Address: XX-XX-XX-XX-XX-XX
- //Transport Name: Media disconnected
+ //Physical Address: N/A
+ //Transport Name: Hardware not present
+ //
+ //Connection Name: VMware Network Adapter VMnet8
+ //Network Adapter: VMware Virtual Ethernet Adapter for VMnet8
+ //Physical Address: Disabled
+ //Transport Name: Disconnected
//
want := make(map[string]string)
var name string
if addr == "" {
			t.Fatalf("empty address on \"Physical Address\" line: %q", line)
}
+ if addr == "disabled" || addr == "n/a" {
+ continue
+ }
addr = strings.Replace(addr, "-", ":", -1)
want[name] = addr
name = ""
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// reading of RPC responses for the client side of an RPC session.
// The client calls WriteRequest to write a request to the connection
// and calls ReadResponseHeader and ReadResponseBody in pairs
-// to read responses. The client calls Close when finished with the
+// to read responses. The client calls Close when finished with the
// connection. ReadResponseBody may be called with a nil
// argument to force the body of the response to be read and then
// discarded.
case call.Done <- call:
// ok
default:
- // We don't want to block here. It is the caller's responsibility to make
+ // We don't want to block here. It is the caller's responsibility to make
// sure the channel has enough buffer space. See comment in Go().
if debugLog {
log.Println("rpc: discarding Call reply due to insufficient Done chan capacity")
return client.codec.Close()
}
-// Go invokes the function asynchronously. It returns the Call structure representing
-// the invocation. The done channel will signal when the call is complete by returning
-// the same Call object. If done is nil, Go will allocate a new channel.
+// Go invokes the function asynchronously. It returns the Call structure representing
+// the invocation. The done channel will signal when the call is complete by returning
+// the same Call object. If done is nil, Go will allocate a new channel.
// If non-nil, done must be buffered or Go will deliberately crash.
func (client *Client) Go(serviceMethod string, args interface{}, reply interface{}, done chan *Call) *Call {
call := new(Call)
} else {
// If caller passes done != nil, it must arrange that
// done has enough buffer for the number of simultaneous
- // RPCs that will be using that channel. If the channel
+ // RPCs that will be using that channel. If the channel
// is totally unbuffered, it's best not to run at all.
if cap(done) == 0 {
log.Panic("rpc: done channel is unbuffered")
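
This cap check and the non-blocking send in the earlier hunk work together: a reply is delivered with a select-with-default, so an unbuffered done channel with no ready receiver would silently drop it. That is why Go panics rather than accept cap 0. A sketch of the delivery mechanics (the `trySend` helper is ours):

```go
package main

import "fmt"

// trySend mimics rpc's non-blocking delivery of a finished Call:
// it reports whether the value was accepted or would have blocked.
func trySend(done chan int, v int) bool {
	select {
	case done <- v:
		return true
	default:
		return false // would block: the reply is discarded
	}
}

func main() {
	unbuffered := make(chan int)
	buffered := make(chan int, 1)
	fmt.Println(trySend(unbuffered, 1)) // false: no receiver ready, reply lost
	fmt.Println(trySend(buffered, 1))   // true: buffer absorbs the reply
}
```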
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
c.mutex.Unlock()
if b == nil {
- // Invalid request so no id. Use JSON null.
+ // Invalid request so no id. Use JSON null.
b = &null
}
resp := serverResponse{Id: b}
DefaultDebugPath = "/debug/rpc"
)
-// Precompute the reflect type for error. Can't use error directly
-// because Typeof takes an empty interface value. This is annoying.
+// Precompute the reflect type for error. Can't use error directly
+// because Typeof takes an empty interface value. This is annoying.
var typeOfError = reflect.TypeOf((*error)(nil)).Elem()
type methodType struct {
method map[string]*methodType // registered methods
}
-// Request is a header written before every RPC call. It is used internally
+// Request is a header written before every RPC call. It is used internally
// but documented here as an aid to debugging, such as when analyzing
// network traffic.
type Request struct {
next *Request // for free list in Server
}
-// Response is a header written before every RPC return. It is used internally
+// Response is a header written before every RPC return. It is used internally
// but documented here as an aid to debugging, such as when analyzing
// network traffic.
type Response struct {
// ServeConn blocks, serving the connection until the client hangs up.
// The caller typically invokes ServeConn in a go statement.
// ServeConn uses the gob wire format (see package gob) on the
-// connection. To use an alternate codec, use ServeCodec.
+// connection. To use an alternate codec, use ServeCodec.
func (server *Server) ServeConn(conn io.ReadWriteCloser) {
buf := bufio.NewWriter(conn)
srv := &gobServerCodec{
return
}
- // We read the header successfully. If we see an error now,
+ // We read the header successfully. If we see an error now,
// we can still recover and move on to the next request.
keepReading = true
// RPC responses for the server side of an RPC session.
// The server calls ReadRequestHeader and ReadRequestBody in pairs
// to read requests from the connection, and it calls WriteResponse to
-// write a response back. The server calls Close when finished with the
+// write a response back. The server calls Close when finished with the
// connection. ReadRequestBody may be called with a nil
// argument to force the body of the request to be read and discarded.
type ServerCodec interface {
// ServeConn blocks, serving the connection until the client hangs up.
// The caller typically invokes ServeConn in a go statement.
// ServeConn uses the gob wire format (see package gob) on the
-// connection. To use an alternate codec, use ServeCodec.
+// connection. To use an alternate codec, use ServeCodec.
func ServeConn(conn io.ReadWriteCloser) {
DefaultServer.ServeConn(conn)
}
err = client.Call("Arith.Unknown", args, reply)
if err == nil {
t.Error("expected error calling unknown service")
- } else if strings.Index(err.Error(), "method") < 0 {
+ } else if !strings.Contains(err.Error(), "method") {
t.Error("expected error about method; got", err)
}
err = client.Call("Arith.Add", reply, reply) // args, reply would be the correct thing to use
if err == nil {
t.Error("expected error calling Arith.Add with wrong arg type")
- } else if strings.Index(err.Error(), "type") < 0 {
+ } else if !strings.Contains(err.Error(), "type") {
t.Error("expected error about type; got", err)
}
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// if handled == false, sendFile performed no work.
//
-// Note that sendfile for windows does not suppport >2GB file.
+// Note that sendfile for windows does not support >2GB file.
func sendFile(fd *netFD, r io.Reader) (written int64, err error, handled bool) {
var n int64 = 0 // by default, copy until EOF
// Hello sends a HELO or EHLO to the server as the given host name.
// Calling this method is only necessary if the client needs control
-// over the host name used. The client will introduce itself as "localhost"
-// automatically otherwise. If Hello is called, it must be called before
+// over the host name used. The client will introduce itself as "localhost"
+// automatically otherwise. If Hello is called, it must be called before
// any of the other methods.
func (c *Client) Hello(localName string) error {
if c.didHello {
// Data issues a DATA command to the server and returns a writer that
// can be used to write the mail headers and body. The caller should
-// close the writer before calling any more methods on c. A call to
+// close the writer before calling any more methods on c. A call to
// Data must be preceded by one or more calls to Rcpt.
func (c *Client) Data() (io.WriteCloser, error) {
_, _, err := c.cmd(354, "DATA")
//
// The msg parameter should be an RFC 822-style email with headers
// first, a blank line, and then the message body. The lines of msg
-// should be CRLF terminated. The msg headers should usually include
+// should be CRLF terminated. The msg headers should usually include
// fields such as "From", "To", "Subject", and "Cc". Sending "Bcc"
// messages is accomplished by including an email address in the to
// parameter but not including it in the msg headers.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
if supportsIPv4map && family == syscall.AF_INET6 && sotype != syscall.SOCK_RAW {
// Allow both IP versions even if the OS default
- // is otherwise. Note that some operating systems
+ // is otherwise. Note that some operating systems
// never admit this option.
syscall.SetsockoptInt(s, syscall.IPPROTO_IPV6, syscall.IPV6_V6ONLY, boolint(ipv6only))
}
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func setDefaultSockopts(s, family, sotype int, ipv6only bool) error {
if family == syscall.AF_INET6 && sotype != syscall.SOCK_RAW {
// Allow both IP versions even if the OS default
- // is otherwise. Note that some operating systems
+ // is otherwise. Note that some operating systems
// never admit this option.
syscall.SetsockoptInt(s, syscall.IPPROTO_IPV6, syscall.IPV6_V6ONLY, boolint(ipv6only))
}
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func setDefaultSockopts(s, family, sotype int, ipv6only bool) error {
if family == syscall.AF_INET6 && sotype != syscall.SOCK_RAW {
// Allow both IP versions even if the OS default
- // is otherwise. Note that some operating systems
+ // is otherwise. Note that some operating systems
// never admit this option.
syscall.SetsockoptInt(s, syscall.IPPROTO_IPV6, syscall.IPV6_V6ONLY, boolint(ipv6only))
}
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func setDefaultSockopts(s syscall.Handle, family, sotype int, ipv6only bool) error {
if family == syscall.AF_INET6 && sotype != syscall.SOCK_RAW {
// Allow both IP versions even if the OS default
- // is otherwise. Note that some operating systems
+ // is otherwise. Note that some operating systems
// never admit this option.
syscall.SetsockoptInt(s, syscall.IPPROTO_IPV6, syscall.IPV6_V6ONLY, boolint(ipv6only))
}
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
return newTCPConn(fd), nil
}
-// TCPListener is a TCP network listener. Clients should typically
+// TCPListener is a TCP network listener. Clients should typically
// use variables of type Listener instead of assuming TCP.
type TCPListener struct {
fd *netFD
}
// File returns a copy of the underlying os.File, set to blocking
-// mode. It is the caller's responsibility to close f when finished.
+// mode. It is the caller's responsibility to close f when finished.
// Closing l does not affect f, and closing f does not affect l.
//
// The returned os.File's file descriptor is different from the
-// connection's. Attempting to change properties of the original
+// connection's. Attempting to change properties of the original
// using this duplicate may or may not have the desired effect.
func (l *TCPListener) File() (f *os.File, err error) {
f, err = l.dup()
}
// ListenTCP announces on the TCP address laddr and returns a TCP
-// listener. Net must be "tcp", "tcp4", or "tcp6". If laddr has a
-// port of 0, ListenTCP will choose an available port. The caller can
+// listener. Net must be "tcp", "tcp4", or "tcp6". If laddr has a
+// port of 0, ListenTCP will choose an available port. The caller can
// use the Addr method of TCPListener to retrieve the chosen address.
func ListenTCP(net string, laddr *TCPAddr) (*TCPListener, error) {
switch net {
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// TCP has a rarely used mechanism called a 'simultaneous connection' in
// which Dial("tcp", addr1, addr2) run on the machine at addr1 can
// connect to a simultaneous Dial("tcp", addr2, addr1) run on the machine
- // at addr2, without either machine executing Listen. If laddr == nil,
+ // at addr2, without either machine executing Listen. If laddr == nil,
// it means we want the kernel to pick an appropriate originating local
- // address. Some Linux kernels cycle blindly through a fixed range of
- // local ports, regardless of destination port. If a kernel happens to
+ // address. Some Linux kernels cycle blindly through a fixed range of
+ // local ports, regardless of destination port. If a kernel happens to
// pick local port 50001 as the source for a Dial("tcp", "", "localhost:50001"),
// then the Dial will succeed, having simultaneously connected to itself.
// This can only happen when we are letting the kernel pick a port (laddr == nil)
// and when there is no listener for the destination address.
- // It's hard to argue this is anything other than a kernel bug. If we
+ // It's hard to argue this is anything other than a kernel bug. If we
// see this happen, rather than expose the buggy effect to users, we
- // close the fd and try again. If it happens twice more, we relent and
- // use the result. See also:
+ // close the fd and try again. If it happens twice more, we relent and
+ // use the result. See also:
// https://golang.org/issue/2690
// http://stackoverflow.com/questions/4949858/
//
return err == syscall.EADDRNOTAVAIL
}
-// TCPListener is a TCP network listener. Clients should typically
+// TCPListener is a TCP network listener. Clients should typically
// use variables of type Listener instead of assuming TCP.
type TCPListener struct {
fd *netFD
}
// File returns a copy of the underlying os.File, set to blocking
-// mode. It is the caller's responsibility to close f when finished.
+// mode. It is the caller's responsibility to close f when finished.
// Closing l does not affect f, and closing f does not affect l.
//
// The returned os.File's file descriptor is different from the
-// connection's. Attempting to change properties of the original
+// connection's. Attempting to change properties of the original
// using this duplicate may or may not have the desired effect.
func (l *TCPListener) File() (f *os.File, err error) {
f, err = l.fd.dup()
}
// ListenTCP announces on the TCP address laddr and returns a TCP
-// listener. Net must be "tcp", "tcp4", or "tcp6". If laddr has a
-// port of 0, ListenTCP will choose an available port. The caller can
+// listener. Net must be "tcp", "tcp4", or "tcp6". If laddr has a
+// port of 0, ListenTCP will choose an available port. The caller can
// use the Addr method of TCPListener to retrieve the chosen address.
func ListenTCP(net string, laddr *TCPAddr) (*TCPListener, error) {
switch net {
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// Set sets the header entries associated with key to
-// the single element value.  It replaces any existing
+// the single element value. It replaces any existing
// values associated with key.
func (h MIMEHeader) Set(key, value string) {
h[CanonicalMIMEHeaderKey(key)] = []string{value}
// Get gets the first value associated with the given key.
// If there are no values associated with the key, Get returns "".
-// Get is a convenience method.  For more complex queries,
+// Get is a convenience method. For more complex queries,
// access the map directly.
func (h MIMEHeader) Get(key string) string {
if h == nil {
-// Copyright 2010 The Go Authors.  All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// A sequencer schedules a sequence of numbered events that must
-// happen in order, one after the other.  The event numbering must start
-// at 0 and increment without skipping.  The event number wraps around
+// happen in order, one after the other. The event numbering must start
+// at 0 and increment without skipping. The event number wraps around
// safely as long as there are not 2^32 simultaneous events pending.
type sequencer struct {
mu sync.Mutex
-// Copyright 2010 The Go Authors.  All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// ReadContinuedLine reads a possibly continued line from r,
// eliding the final trailing ASCII white space.
// Lines after the first are considered continuations if they
-// begin with a space or tab character.  In the returned data,
+// begin with a space or tab character. In the returned data,
// continuation lines are separated from the previous line
// only by a single space: the newline and leading white space
// are removed.
// ReadCodeLine reads a response code line of the form
// code message
// where code is a three-digit status code and the message
-// extends to the rest of the line.  An example of such a line is:
+// extends to the rest of the line. An example of such a line is:
// 220 plan9.bell-labs.com ESMTP
//
// If the prefix of the status does not match the digits in expectCode,
d.state = stateBeginLine
break
}
- // Not part of \r\n.  Emit saved \r
+ // Not part of \r\n. Emit saved \r
br.UnreadByte()
c = '\r'
d.state = stateData
}
// CanonicalMIMEHeaderKey returns the canonical format of the
-// MIME header key s.  The canonicalization converts the first
+// MIME header key s. The canonicalization converts the first
// letter and any letter following a hyphen to upper case;
-// the rest are converted to lowercase.  For example, the
+// the rest are converted to lowercase. For example, the
// canonical key for "accept-encoding" is "Accept-Encoding".
// MIME header keys are assumed to be ASCII only.
// If s contains a space or invalid header field bytes, it is
const toLower = 'a' - 'A'
// validHeaderFieldByte reports whether b is a valid byte in a header
-// field key. This is actually stricter than RFC 7230, which says:
+// field name. RFC 7230 says:
+// header-field = field-name ":" OWS field-value OWS
+// field-name = token
// tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
// "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
// token = 1*tchar
-// TODO: revisit in Go 1.6+ and possibly expand this. But note that many
-// servers have historically dropped '_' to prevent ambiguities when mapping
-// to CGI environment variables.
func validHeaderFieldByte(b byte) bool {
- return ('A' <= b && b <= 'Z') ||
- ('a' <= b && b <= 'z') ||
- ('0' <= b && b <= '9') ||
- b == '-'
+ return int(b) < len(isTokenTable) && isTokenTable[b]
}
// canonicalMIMEHeaderKey is like CanonicalMIMEHeaderKey but is
commonHeader[v] = v
}
}
+
+// isTokenTable is a copy of net/http/lex.go's isTokenTable.
+// See https://httpwg.github.io/specs/rfc7230.html#rule.token.separators
+var isTokenTable = [127]bool{
+ '!': true,
+ '#': true,
+ '$': true,
+ '%': true,
+ '&': true,
+ '\'': true,
+ '*': true,
+ '+': true,
+ '-': true,
+ '.': true,
+ '0': true,
+ '1': true,
+ '2': true,
+ '3': true,
+ '4': true,
+ '5': true,
+ '6': true,
+ '7': true,
+ '8': true,
+ '9': true,
+ 'A': true,
+ 'B': true,
+ 'C': true,
+ 'D': true,
+ 'E': true,
+ 'F': true,
+ 'G': true,
+ 'H': true,
+ 'I': true,
+ 'J': true,
+ 'K': true,
+ 'L': true,
+ 'M': true,
+ 'N': true,
+ 'O': true,
+ 'P': true,
+ 'Q': true,
+ 'R': true,
+ 'S': true,
+ 'T': true,
+ 'U': true,
+ 'V': true,
+ 'W': true,
+ 'X': true,
+ 'Y': true,
+ 'Z': true,
+ '^': true,
+ '_': true,
+ '`': true,
+ 'a': true,
+ 'b': true,
+ 'c': true,
+ 'd': true,
+ 'e': true,
+ 'f': true,
+ 'g': true,
+ 'h': true,
+ 'i': true,
+ 'j': true,
+ 'k': true,
+ 'l': true,
+ 'm': true,
+ 'n': true,
+ 'o': true,
+ 'p': true,
+ 'q': true,
+ 'r': true,
+ 's': true,
+ 't': true,
+ 'u': true,
+ 'v': true,
+ 'w': true,
+ 'x': true,
+ 'y': true,
+ 'z': true,
+ '|': true,
+ '~': true,
+}
-// Copyright 2010 The Go Authors.  All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
{"user-agent", "User-Agent"},
{"USER-AGENT", "User-Agent"},
+ // Other valid tchar bytes in tokens:
+ {"foo-bar_baz", "Foo-Bar_baz"},
+ {"foo-bar$baz", "Foo-Bar$baz"},
+ {"foo-bar~baz", "Foo-Bar~baz"},
+ {"foo-bar*baz", "Foo-Bar*baz"},
+
// Non-ASCII or anything with spaces or non-token chars is unchanged:
{"üser-agenT", "üser-agenT"},
{"a B", "a B"},
-// Copyright 2010 The Go Authors.  All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// Cmd is a convenience method that sends a command after
-// waiting its turn in the pipeline.  The command text is the
+// waiting its turn in the pipeline. The command text is the
// result of formatting format with args and appending \r\n.
// Cmd returns the id of the command, for use with StartResponse and EndResponse.
//
-// Copyright 2010 The Go Authors.  All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// DotWriter returns a writer that can be used to write a dot-encoding to w.
// It takes care of inserting leading dots when necessary,
// translating line-ending \n into \r\n, and adding the final .\r\n line
-// when the DotWriter is closed.  The caller should close the
+// when the DotWriter is closed. The caller should close the
// DotWriter before the next call to a method on w.
//
// See the documentation for Reader's DotReader method for details about dot-encoding.
-// Copyright 2010 The Go Authors.  All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
{-5 * time.Second, 0, -5 * time.Second, 100 * time.Millisecond},
{0, -5 * time.Second, -5 * time.Second, 100 * time.Millisecond},
{-5 * time.Second, 5 * time.Second, -5 * time.Second, 100 * time.Millisecond}, // timeout over deadline
+ {-1 << 63, 0, time.Second, 100 * time.Millisecond},
+ {0, -1 << 63, time.Second, 100 * time.Millisecond},
{50 * time.Millisecond, 0, 100 * time.Millisecond, time.Second},
{0, 50 * time.Millisecond, 100 * time.Millisecond, time.Second},
}
}
+var dialTimeoutMaxDurationTests = []struct {
+ timeout time.Duration
+ delta time.Duration // for deadline
+}{
+ // Large timeouts that will overflow an int64 unix nanos.
+ {1<<63 - 1, 0},
+ {0, 1<<63 - 1},
+}
+
+func TestDialTimeoutMaxDuration(t *testing.T) {
+ t.Parallel()
+
+ ln, err := newLocalListener("tcp")
+ if err != nil {
+ t.Fatal(err)
+ }
+ defer ln.Close()
+
+ for i, tt := range dialTimeoutMaxDurationTests {
+ ch := make(chan error)
+ max := time.NewTimer(100 * time.Millisecond)
+ defer max.Stop()
+ go func() {
+ d := Dialer{Timeout: tt.timeout}
+ if tt.delta != 0 {
+ d.Deadline = time.Now().Add(tt.delta)
+ }
+ c, err := d.Dial(ln.Addr().Network(), ln.Addr().String())
+ if err == nil {
+ c.Close()
+ }
+ ch <- err
+ }()
+
+ select {
+ case <-max.C:
+ t.Fatalf("#%d: Dial didn't return in an expected time", i)
+ case err := <-ch:
+ if perr := parseDialError(err); perr != nil {
+ t.Error(perr)
+ }
+ if err != nil {
+ t.Errorf("#%d: %v", i, err)
+ }
+ }
+ }
+}
+
var acceptTimeoutTests = []struct {
timeout time.Duration
xerrs [2]error // expected errors in transition
-// Copyright 2009 The Go Authors.  All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors.  All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// ReadMsgUDP reads a packet from c, copying the payload into b and
-// the associated out-of-band data into oob.  It returns the number
+// the associated out-of-band data into oob. It returns the number
// of bytes copied into b, the number of bytes copied into oob, the
// flags that were set on the packet and the source address of the
// packet.
//
// WriteToUDP can be made to time out and return an error with
// Timeout() == true after a fixed time limit; see SetDeadline and
-// SetWriteDeadline.  On packet-oriented connections, write timeouts
+// SetWriteDeadline. On packet-oriented connections, write timeouts
// are rare.
func (c *UDPConn) WriteToUDP(b []byte, addr *UDPAddr) (int, error) {
if !c.ok() || c.fd.data == nil {
// WriteMsgUDP writes a packet to addr via c if c isn't connected, or
// to c's remote destination address if c is connected (in which case
// addr must be nil). The payload is copied from b and the associated
-// out-of-band data is copied from oob.  It returns the number of
+// out-of-band data is copied from oob. It returns the number of
// payload and out-of-band bytes written.
func (c *UDPConn) WriteMsgUDP(b, oob []byte, addr *UDPAddr) (n, oobn int, err error) {
return 0, 0, &OpError{Op: "write", Net: c.fd.dir, Source: c.fd.laddr, Addr: addr.opAddr(), Err: syscall.EPLAN9}
}
// ListenUDP listens for incoming UDP packets addressed to the local
-// address laddr.  Net must be "udp", "udp4", or "udp6".  If laddr has
+// address laddr. Net must be "udp", "udp4", or "udp6". If laddr has
// a port of 0, ListenUDP will choose an available port.
// The LocalAddr method of the returned UDPConn can be used to
-// discover the port.  The returned connection's ReadFrom and WriteTo
+// discover the port. The returned connection's ReadFrom and WriteTo
// methods can be used to receive and send UDP packets with per-packet
// addressing.
func ListenUDP(net string, laddr *UDPAddr) (*UDPConn, error) {
-// Copyright 2009 The Go Authors.  All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// ReadMsgUDP reads a packet from c, copying the payload into b and
-// the associated out-of-band data into oob.  It returns the number
+// the associated out-of-band data into oob. It returns the number
// of bytes copied into b, the number of bytes copied into oob, the
// flags that were set on the packet and the source address of the
// packet.
//
// WriteToUDP can be made to time out and return an error with
// Timeout() == true after a fixed time limit; see SetDeadline and
-// SetWriteDeadline.  On packet-oriented connections, write timeouts
+// SetWriteDeadline. On packet-oriented connections, write timeouts
// are rare.
func (c *UDPConn) WriteToUDP(b []byte, addr *UDPAddr) (int, error) {
if !c.ok() {
// WriteMsgUDP writes a packet to addr via c if c isn't connected, or
// to c's remote destination address if c is connected (in which case
// addr must be nil). The payload is copied from b and the associated
-// out-of-band data is copied from oob.  It returns the number of
+// out-of-band data is copied from oob. It returns the number of
// payload and out-of-band bytes written.
func (c *UDPConn) WriteMsgUDP(b, oob []byte, addr *UDPAddr) (n, oobn int, err error) {
if !c.ok() {
}
// ListenUDP listens for incoming UDP packets addressed to the local
-// address laddr.  Net must be "udp", "udp4", or "udp6".  If laddr has
+// address laddr. Net must be "udp", "udp4", or "udp6". If laddr has
// a port of 0, ListenUDP will choose an available port.
// The LocalAddr method of the returned UDPConn can be used to
-// discover the port.  The returned connection's ReadFrom and WriteTo
+// discover the port. The returned connection's ReadFrom and WriteTo
// methods can be used to receive and send UDP packets with per-packet
// addressing.
func ListenUDP(net string, laddr *UDPAddr) (*UDPConn, error) {
-// Copyright 2012 The Go Authors.  All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
switch err {
case nil: // ReadFrom succeeds
default: // Read may timeout, it depends on the platform
- if nerr, ok := err.(Error); (!ok || !nerr.Timeout()) && runtime.GOOS != "windows" { // Windows retruns WSAEMSGSIZ
+ if nerr, ok := err.(Error); (!ok || !nerr.Timeout()) && runtime.GOOS != "windows" { // Windows returns WSAEMSGSIZE
t.Fatal(err)
}
}
-// Copyright 2009 The Go Authors.  All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors.  All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
conn
}
-// ReadFromUnix reads a packet from c, copying the payload into b.  It
+// ReadFromUnix reads a packet from c, copying the payload into b. It
// returns the number of bytes copied into b and the source address of
// the packet.
//
}
// ReadMsgUnix reads a packet from c, copying the payload into b and
-// the associated out-of-band data into oob.  It returns the number of
+// the associated out-of-band data into oob. It returns the number of
// bytes copied into b, the number of bytes copied into oob, the flags
// that were set on the packet, and the source address of the packet.
func (c *UnixConn) ReadMsgUnix(b, oob []byte) (n, oobn, flags int, addr *UnixAddr, err error) {
//
// WriteToUnix can be made to time out and return an error with
// Timeout() == true after a fixed time limit; see SetDeadline and
-// SetWriteDeadline.  On packet-oriented connections, write timeouts
+// SetWriteDeadline. On packet-oriented connections, write timeouts
// are rare.
func (c *UnixConn) WriteToUnix(b []byte, addr *UnixAddr) (int, error) {
return 0, &OpError{Op: "write", Net: c.fd.dir, Source: c.fd.laddr, Addr: addr.opAddr(), Err: syscall.EPLAN9}
}
// WriteMsgUnix writes a packet to addr via c, copying the payload
-// from b and the associated out-of-band data from oob.  It returns
+// from b and the associated out-of-band data from oob. It returns
// the number of payload and out-of-band bytes written.
func (c *UnixConn) WriteMsgUnix(b, oob []byte, addr *UnixAddr) (n, oobn int, err error) {
return 0, 0, &OpError{Op: "write", Net: c.fd.dir, Source: c.fd.laddr, Addr: addr.opAddr(), Err: syscall.EPLAN9}
return nil, &OpError{Op: "dial", Net: net, Source: laddr.opAddr(), Addr: raddr.opAddr(), Err: syscall.EPLAN9}
}
-// UnixListener is a Unix domain socket listener.  Clients should
+// UnixListener is a Unix domain socket listener. Clients should
// typically use variables of type Listener instead of assuming Unix
// domain sockets.
type UnixListener struct {
}
// ListenUnix announces on the Unix domain socket laddr and returns a
-// Unix listener.  The network net must be "unix" or "unixpacket".
+// Unix listener. The network net must be "unix" or "unixpacket".
func ListenUnix(net string, laddr *UnixAddr) (*UnixListener, error) {
return nil, &OpError{Op: "listen", Net: net, Source: nil, Addr: laddr.opAddr(), Err: syscall.EPLAN9}
}
return nil, &OpError{Op: "accept", Net: l.fd.dir, Source: nil, Addr: l.fd.laddr, Err: syscall.EPLAN9}
}
-// Close stops listening on the Unix address.  Already accepted
+// Close stops listening on the Unix address. Already accepted
// connections are not closed.
func (l *UnixListener) Close() error {
return &OpError{Op: "close", Net: l.fd.dir, Source: nil, Addr: l.fd.laddr, Err: syscall.EPLAN9}
}
// File returns a copy of the underlying os.File, set to blocking
-// mode.  It is the caller's responsibility to close f when finished.
+// mode. It is the caller's responsibility to close f when finished.
// Closing l does not affect f, and closing f does not affect l.
//
// The returned os.File's file descriptor is different from the
-// connection's.  Attempting to change properties of the original
+// connection's. Attempting to change properties of the original
// using this duplicate may or may not have the desired effect.
func (l *UnixListener) File() (*os.File, error) {
return nil, &OpError{Op: "file", Net: l.fd.dir, Source: nil, Addr: l.fd.laddr, Err: syscall.EPLAN9}
}
// ListenUnixgram listens for incoming Unix datagram packets addressed
-// to the local address laddr.  The network net must be "unixgram".
+// to the local address laddr. The network net must be "unixgram".
// The returned connection's ReadFrom and WriteTo methods can be used
// to receive and send packets with per-packet addressing.
func ListenUnixgram(net string, laddr *UnixAddr) (*UnixConn, error) {
-// Copyright 2009 The Go Authors.  All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func newUnixConn(fd *netFD) *UnixConn { return &UnixConn{conn{fd}} }
-// ReadFromUnix reads a packet from c, copying the payload into b.  It
+// ReadFromUnix reads a packet from c, copying the payload into b. It
// returns the number of bytes copied into b and the source address of
// the packet.
//
}
// ReadMsgUnix reads a packet from c, copying the payload into b and
-// the associated out-of-band data into oob.  It returns the number of
+// the associated out-of-band data into oob. It returns the number of
// bytes copied into b, the number of bytes copied into oob, the flags
// that were set on the packet, and the source address of the packet.
func (c *UnixConn) ReadMsgUnix(b, oob []byte) (n, oobn, flags int, addr *UnixAddr, err error) {
//
// WriteToUnix can be made to time out and return an error with
// Timeout() == true after a fixed time limit; see SetDeadline and
-// SetWriteDeadline.  On packet-oriented connections, write timeouts
+// SetWriteDeadline. On packet-oriented connections, write timeouts
// are rare.
func (c *UnixConn) WriteToUnix(b []byte, addr *UnixAddr) (int, error) {
if !c.ok() {
}
// WriteMsgUnix writes a packet to addr via c, copying the payload
-// from b and the associated out-of-band data from oob.  It returns
+// from b and the associated out-of-band data from oob. It returns
// the number of payload and out-of-band bytes written.
func (c *UnixConn) WriteMsgUnix(b, oob []byte, addr *UnixAddr) (n, oobn int, err error) {
if !c.ok() {
return newUnixConn(fd), nil
}
-// UnixListener is a Unix domain socket listener.  Clients should
+// UnixListener is a Unix domain socket listener. Clients should
// typically use variables of type Listener instead of assuming Unix
// domain sockets.
type UnixListener struct {
}
// ListenUnix announces on the Unix domain socket laddr and returns a
-// Unix listener.  The network net must be "unix" or "unixpacket".
+// Unix listener. The network net must be "unix" or "unixpacket".
func ListenUnix(net string, laddr *UnixAddr) (*UnixListener, error) {
switch net {
case "unix", "unixpacket":
return newUnixConn(fd), nil
}
-// Accept implements the Accept method in the Listener interface; it
-// waits for the next call and returns a generic Conn.
-func (l *UnixListener) Accept() (c Conn, err error) {
- c1, err := l.AcceptUnix()
+// Accept implements the Accept method in the Listener interface.
+// Returned connections will be of type *UnixConn.
+func (l *UnixListener) Accept() (Conn, error) {
+ c, err := l.AcceptUnix()
if err != nil {
return nil, err
}
- return c1, nil
+ return c, nil
}
-// Close stops listening on the Unix address.  Already accepted
+// Close stops listening on the Unix address. Already accepted
// connections are not closed.
func (l *UnixListener) Close() error {
if l == nil || l.fd == nil {
// and replaced our socket name already--
// but this sequence (remove then close)
// is at least compatible with the auto-remove
- // sequence in ListenUnix.  It's only non-Go
+ // sequence in ListenUnix. It's only non-Go
// programs that can mess us up.
if l.path[0] != '@' && l.unlink {
syscall.Unlink(l.path)
}
// File returns a copy of the underlying os.File, set to blocking
-// mode.  It is the caller's responsibility to close f when finished.
+// mode. It is the caller's responsibility to close f when finished.
// Closing l does not affect f, and closing f does not affect l.
//
// The returned os.File's file descriptor is different from the
-// connection's.  Attempting to change properties of the original
+// connection's. Attempting to change properties of the original
// using this duplicate may or may not have the desired effect.
func (l *UnixListener) File() (f *os.File, err error) {
f, err = l.fd.dup()
}
// ListenUnixgram listens for incoming Unix datagram packets addressed
-// to the local address laddr.  The network net must be "unixgram".
+// to the local address laddr. The network net must be "unixgram".
// The returned connection's ReadFrom and WriteTo methods can be used
// to receive and send packets with per-packet addressing.
func ListenUnixgram(net string, laddr *UnixAddr) (*UnixConn, error) {
-// Copyright 2013 The Go Authors.  All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
t.Fatal("closing unix listener did not remove unix socket")
}
}
-
-// forceGoDNS forces the resolver configuration to use the pure Go resolver
-// and returns a fixup function to restore the old settings.
-func forceGoDNS() func() {
- c := systemConf()
- oldGo := c.netGo
- oldCgo := c.netCgo
- fixup := func() {
- c.netGo = oldGo
- c.netCgo = oldCgo
- }
- c.netGo = true
- c.netCgo = false
- return fixup
-}
-
-// forceCgoDNS forces the resolver configuration to use the cgo resolver
-// and returns a fixup function to restore the old settings.
-// (On non-Unix systems forceCgoDNS returns nil.)
-func forceCgoDNS() func() {
- c := systemConf()
- oldGo := c.netGo
- oldCgo := c.netCgo
- fixup := func() {
- c.netGo = oldGo
- c.netCgo = oldCgo
- }
- c.netGo = false
- c.netCgo = true
- return fixup
-}
// construct a URL struct directly and set the Opaque field instead of Path.
// These still work as well.
type URL struct {
- Scheme string
- Opaque string // encoded opaque data
- User *Userinfo // username and password information
- Host string // host or host:port
- Path string
- RawPath string // encoded path hint (Go 1.5 and later only; see EscapedPath method)
- RawQuery string // encoded query values, without '?'
- Fragment string // fragment for references, without '#'
+ Scheme string
+ Opaque string // encoded opaque data
+ User *Userinfo // username and password information
+ Host string // host or host:port
+ Path string
+ RawPath string // encoded path hint (Go 1.5 and later only; see EscapedPath method)
+ ForceQuery bool // append a query ('?') even if RawQuery is empty
+ RawQuery string // encoded query values, without '?'
+ Fragment string // fragment for references, without '#'
}
// User returns a Userinfo containing the provided username
// Parse parses rawurl into a URL structure.
// The rawurl may be relative or absolute.
-func Parse(rawurl string) (url *URL, err error) {
+func Parse(rawurl string) (*URL, error) {
// Cut off #frag
u, frag := split(rawurl, "#", true)
- if url, err = parse(u, false); err != nil {
+ url, err := parse(u, false)
+ if err != nil {
return nil, err
}
if frag == "" {
return url, nil
}
-// ParseRequestURI parses rawurl into a URL structure.  It assumes that
+// ParseRequestURI parses rawurl into a URL structure. It assumes that
// rawurl was received in an HTTP request, so the rawurl is interpreted
// only as an absolute URI or an absolute path.
// The string rawurl is assumed not to have a #fragment suffix.
// (Web browsers strip #fragment before sending the URL to a web server.)
-func ParseRequestURI(rawurl string) (url *URL, err error) {
+func ParseRequestURI(rawurl string) (*URL, error) {
return parse(rawurl, true)
}
-// parse parses a URL from a string in one of two contexts.  If
+// parse parses a URL from a string in one of two contexts. If
// viaRequest is true, the URL is assumed to have arrived via an HTTP request,
// in which case only absolute URLs or path-absolute relative URLs are allowed.
// If viaRequest is false, all forms of relative URLs are allowed.
}
url.Scheme = strings.ToLower(url.Scheme)
- rest, url.RawQuery = split(rest, "?", true)
+ if strings.HasSuffix(rest, "?") && strings.Count(rest, "?") == 1 {
+ url.ForceQuery = true
+ rest = rest[:len(rest)-1]
+ } else {
+ rest, url.RawQuery = split(rest, "?", true)
+ }
if !strings.HasPrefix(rest, "/") {
if url.Scheme != "" {
return nil, host, nil
}
userinfo := authority[:i]
- if strings.Index(userinfo, ":") < 0 {
+ if !strings.Contains(userinfo, ":") {
if userinfo, err = unescape(userinfo, encodeUserPassword); err != nil {
return nil, "", err
}
}
buf.WriteString(path)
}
- if u.RawQuery != "" {
+ if u.ForceQuery || u.RawQuery != "" {
buf.WriteByte('?')
buf.WriteString(u.RawQuery)
}
if v == nil {
return ""
}
- vs, ok := v[key]
- if !ok || len(vs) == 0 {
+ vs := v[key]
+ if len(vs) == 0 {
return ""
}
return vs[0]
// ParseQuery always returns a non-nil map containing all the
// valid query parameters found; err describes the first decoding error
// encountered, if any.
-func ParseQuery(query string) (m Values, err error) {
- m = make(Values)
- err = parseQuery(m, query)
- return
+func ParseQuery(query string) (Values, error) {
+ m := make(Values)
+ err := parseQuery(m, query)
+ return m, err
}
func parseQuery(m Values, query string) (err error) {
return u.Scheme != ""
}
-// Parse parses a URL in the context of the receiver.  The provided URL
-// may be relative or absolute.  Parse returns nil, err on parse
+// Parse parses a URL in the context of the receiver. The provided URL
+// may be relative or absolute. Parse returns nil, err on parse
// failure, otherwise its return value is the same as ResolveReference.
func (u *URL) Parse(ref string) (*URL, error) {
refurl, err := Parse(ref)
// ResolveReference resolves a URI reference to an absolute URI from
// an absolute base URI, per RFC 3986 Section 5.2. The URI reference
-// may be relative or absolute.  ResolveReference always returns a new
+// may be relative or absolute. ResolveReference always returns a new
// URL instance, even if the returned URL is identical to either the
// base or reference. If ref is an absolute URL, then ResolveReference
// ignores base and returns a copy of ref.
result = u.Scheme + ":" + result
}
}
- if u.RawQuery != "" {
+ if u.ForceQuery || u.RawQuery != "" {
result += "?" + u.RawQuery
}
return result
},
"ftp://john%20doe@www.google.com/",
},
+ // empty query
+ {
+ "http://www.google.com/?",
+ &URL{
+ Scheme: "http",
+ Host: "www.google.com",
+ Path: "/",
+ ForceQuery: true,
+ },
+ "",
+ },
+ // query ending in question mark (Issue 14573)
+ {
+ "http://www.google.com/?foo=bar?",
+ &URL{
+ Scheme: "http",
+ Host: "www.google.com",
+ Path: "/",
+ RawQuery: "foo=bar?",
+ },
+ "",
+ },
// query
{
"http://www.google.com/?q=go+language",
pass = p
}
}
- return fmt.Sprintf("opaque=%q, scheme=%q, user=%#v, pass=%#v, host=%q, path=%q, rawpath=%q, rawq=%q, frag=%q",
- u.Opaque, u.Scheme, user, pass, u.Host, u.Path, u.RawPath, u.RawQuery, u.Fragment)
+ return fmt.Sprintf("opaque=%q, scheme=%q, user=%#v, pass=%#v, host=%q, path=%q, rawpath=%q, rawq=%q, frag=%q, forcequery=%v",
+ u.Opaque, u.Scheme, user, pass, u.Host, u.Path, u.RawPath, u.RawQuery, u.Fragment, u.ForceQuery)
}
func DoTest(t *testing.T, parse func(string) (*URL, error), name string, tests []URLTest) {
// Absolute URL references
{"http://foo.com?a=b", "https://bar.com/", "https://bar.com/"},
{"http://foo.com/", "https://bar.com/?a=b", "https://bar.com/?a=b"},
+ {"http://foo.com/", "https://bar.com/?", "https://bar.com/?"},
{"http://foo.com/bar", "mailto:foo@example.com", "mailto:foo@example.com"},
// Path-absolute references
{"http://foo.com/bar", "/baz", "http://foo.com/baz"},
{"http://foo.com/bar?a=b#f", "/baz", "http://foo.com/baz"},
+ {"http://foo.com/bar?a=b", "/baz?", "http://foo.com/baz?"},
{"http://foo.com/bar?a=b", "/baz?c=d", "http://foo.com/baz?c=d"},
// Scheme-relative
},
"//foo",
},
+ {
+ &URL{
+ Scheme: "http",
+ Host: "example.com",
+ Path: "/foo",
+ ForceQuery: true,
+ },
+ "/foo?",
+ },
}
func TestRequestURI(t *testing.T) {
}
// Sys returns system-dependent exit information about
-// the process.  Convert it to the appropriate underlying
+// the process. Convert it to the appropriate underlying
// type, such as syscall.WaitStatus on Unix, to access its contents.
func (p *ProcessState) Sys() interface{} {
return p.sys()
}
// SysUsage returns system-dependent resource usage information about
-// the exited process.  Convert it to the appropriate underlying
+// the exited process. Convert it to the appropriate underlying
// type, such as *syscall.Rusage on Unix, to access its contents.
// (On Unix, *syscall.Rusage matches struct rusage as defined in the
// getrusage(2) manual page.)
// nil error. If it encounters an error before the end of the
// directory, Readdir returns the FileInfo read until that point
// and a non-nil error.
-func (f *File) Readdir(n int) (fi []FileInfo, err error) {
+func (f *File) Readdir(n int) ([]FileInfo, error) {
if f == nil {
return nil, ErrInvalid
}
-// Copyright 2010 The Go Authors.  All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// ExpandEnv replaces ${var} or $var in the string according to the values
-// of the current environment variables.  References to undefined
+// of the current environment variables. References to undefined
// variables are replaced by the empty string.
func ExpandEnv(s string) string {
return Expand(s, Getenv)
}
// getShellName returns the name that begins the string and the number of bytes
-// consumed to extract it.  If the name is enclosed in {}, it's part of a ${}
+// consumed to extract it. If the name is enclosed in {}, it's part of a ${}
// expansion and two more bytes are needed than the length of the name.
func getShellName(s string) (string, int) {
switch {
// new process in the form returned by Environ.
// If it is nil, the result of Environ will be used.
Env []string
- // Files specifies the open files inherited by the new process.  The
+ // Files specifies the open files inherited by the new process. The
// first three entries correspond to standard input, standard output, and
- // standard error.  An implementation may support additional entries,
- // depending on the underlying operating system.  A nil entry corresponds
+ // standard error. An implementation may support additional entries,
+ // depending on the underlying operating system. A nil entry corresponds
// to that file being closed when the process starts.
Files []*File
//
// We want to test that FDs in the child do not get overwritten
// by one another as this shuffle occurs. The original implementation
- // was buggy in that in some data dependent cases it would ovewrite
+ // was buggy in that in some data dependent cases it would overwrite
// stderr in the child with one of the ExtraFile members.
// Testing for this case is difficult because it relies on using
// the same FD values as that case. In particular, an FD of 3
}
// Referring to fd3 here ensures that it is not
// garbage collected, and therefore closed, while
- // executing the wantfd loop above. It doesn't matter
+ // executing the wantfd loop above. It doesn't matter
// what we do with fd3 as long as we refer to it;
// closing it is the easy choice.
fd3.Close()
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
return f, nil
}
}
- return ``, os.ErrNotExist
+ return "", os.ErrNotExist
}
// LookPath searches for an executable binary named file
// LookPath also uses the PATHEXT environment variable to match
// a suitable candidate.
// The result may be an absolute path or a path relative to the current directory.
-func LookPath(file string) (f string, err error) {
+func LookPath(file string) (string, error) {
x := os.Getenv(`PATHEXT`)
- if x == `` {
+ if x == "" {
x = `.COM;.EXE;.BAT;.CMD`
}
exts := []string{}
}
exts = append(exts, e)
}
- if strings.IndexAny(file, `:\/`) != -1 {
- if f, err = findExecutable(file, exts); err == nil {
- return
+ if strings.ContainsAny(file, `:\/`) {
+ if f, err := findExecutable(file, exts); err == nil {
+ return f, nil
+ } else {
+ return "", &Error{file, err}
}
- return ``, &Error{file, err}
}
- if f, err = findExecutable(`.\`+file, exts); err == nil {
- return
+ if f, err := findExecutable(`.\`+file, exts); err == nil {
+ return f, nil
}
- if pathenv := os.Getenv(`PATH`); pathenv != `` {
+ if pathenv := os.Getenv(`PATH`); pathenv != "" {
for _, dir := range splitList(pathenv) {
- if f, err = findExecutable(dir+`\`+file, exts); err == nil {
- return
+ if f, err := findExecutable(dir+`\`+file, exts); err == nil {
+ return f, nil
}
}
}
- return ``, &Error{file, ErrNotFound}
+ return "", &Error{file, ErrNotFound}
}
func splitList(path string) []string {
// Remove quotes.
for i, s := range list {
if strings.Contains(s, `"`) {
- list[i] = strings.Replace(s, `"`, ``, -1)
+ list[i] = strings.Replace(s, `"`, "", -1)
}
}
}
// createFiles copies the srcPath file into multiple files.
-// It uses dir as preifx for all destination files.
+// It uses dir as prefix for all destination files.
func createFiles(t *testing.T, dir string, files []string, srcPath string) {
for _, f := range files {
installProg(t, filepath.Join(dir, f), srcPath)
},
{
// LookPath(`a.exe`) will find `.\a.exe`, but prefixing that with
- // dir `p\a.exe` will refer to not existant file
+ // dir `p\a.exe` will refer to a non-existent file
files: []string{`a.exe`, `p\not_important_file`},
dir: `p`,
arg0: `a.exe`,
},
{
// like above, but making test succeed by installing file
- // in refered destination (so LookPath(`a.exe`) will still
+ // in referred destination (so LookPath(`a.exe`) will still
// find `.\a.exe`, but we successfully execute `p\a.exe`)
files: []string{`a.exe`, `p\a.exe`},
dir: `p`,
func startProcess(name string, argv []string, attr *ProcAttr) (p *Process, err error) {
	// If there is no SysProcAttr (i.e. no Chroot or changed
// UID/GID), double-check existence of the directory we want
- // to chdir into. We can make the error clearer this way.
+ // to chdir into. We can make the error clearer this way.
if attr != nil && attr.Sys == nil && attr.Dir != "" {
if _, err := Stat(attr.Dir); err != nil {
pe := err.(*PathError)
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
return nil
}
-// Open opens the named file for reading. If successful, methods on
+// Open opens the named file for reading. If successful, methods on
// the returned file can be used for reading; the associated file
// descriptor has mode O_RDONLY.
// If there is an error, it will be of type *PathError.
}
// OpenFile is the generalized open call; most users will use Open
-// or Create instead. It opens the named file with specified flag
-// (O_RDONLY etc.) and perm, (0666 etc.) if applicable. If successful,
+// or Create instead. It opens the named file with specified flag
+// (O_RDONLY etc.) and perm, (0666 etc.) if applicable. If successful,
// methods on the returned File can be used for I/O.
// If there is an error, it will be of type *PathError.
func OpenFile(name string, flag int, perm FileMode) (*File, error) {
const DevNull = "/dev/null"
// OpenFile is the generalized open call; most users will use Open
-// or Create instead. It opens the named file with specified flag
-// (O_RDONLY etc.) and perm, (0666 etc.) if applicable. If successful,
+// or Create instead. It opens the named file with specified flag
+// (O_RDONLY etc.) and perm, (0666 etc.) if applicable. If successful,
// methods on the returned File can be used for I/O.
// If there is an error, it will be of type *PathError.
func OpenFile(name string, flag int, perm FileMode) (*File, error) {
}
// There's a race here with fork/exec, which we are
- // content to live with. See ../syscall/exec_unix.go.
+ // content to live with. See ../syscall/exec_unix.go.
if !supportsCloseOnExec {
syscall.CloseOnExec(r)
}
// Lstat returns a FileInfo describing the named file.
// If the file is a symbolic link, the returned FileInfo
-// describes the symbolic link. Lstat makes no attempt to follow the link.
+// describes the symbolic link. Lstat makes no attempt to follow the link.
// If there is an error, it will be of type *PathError.
func Lstat(name string) (FileInfo, error) {
var fs fileStat
// Both failed: figure out which error to return.
// OS X and Linux differ on whether unlink(dir)
- // returns EISDIR, so can't use that. However,
+ // returns EISDIR, so can't use that. However,
// both agree that rmdir(file) returns ENOTDIR,
// so we can use that to decide which error is real.
// Rmdir might also return ENOTDIR if given a bad
}
// OpenFile is the generalized open call; most users will use Open
-// or Create instead. It opens the named file with specified flag
-// (O_RDONLY etc.) and perm, (0666 etc.) if applicable. If successful,
+// or Create instead. It opens the named file with specified flag
+// (O_RDONLY etc.) and perm, (0666 etc.) if applicable. If successful,
// methods on the returned File can be used for I/O.
// If there is an error, it will be of type *PathError.
func OpenFile(name string, flag int, perm FileMode) (*File, error) {
var useSyscallwd = func(error) bool { return true }
// Getwd returns a rooted path name corresponding to the
-// current directory. If the current directory can be
+// current directory. If the current directory can be
// reached via multiple paths (due to symbolic links),
// Getwd may return any one of them.
func Getwd() (dir string, err error) {
}
// General algorithm: find name in parent
- // and then find name of parent. Each iteration
+ // and then find name of parent. Each iteration
// adds /name to the beginning of dir.
dir = ""
for parent := ".."; ; parent = "../" + parent {
switch runtime.GOOS {
case "android":
return &sysDir{
- "/system/framework",
+ "/system/lib",
[]string{
- "ext.jar",
- "framework.jar",
+ "libmedia.so",
+ "libpowermanager.so",
},
}
case "darwin":
return s
}
- if got, want := names(mustReadDir("inital readdir")),
+ if got, want := names(mustReadDir("initial readdir")),
[]string{"good1", "good2", "x"}; !reflect.DeepEqual(got, want) {
t.Errorf("initial readdir got %q; want %q", got, want)
}
}
// Use TempDir() to make sure we're on a local file system,
// so that the group ids returned by Getgroups will be allowed
- // on the file. On NFS, the Getgroups groups are
+ // on the file. On NFS, the Getgroups groups are
// basically useless.
f := newFile("TestChown", t)
defer Remove(f.Name())
}
// Can't change uid unless root, but can try
- // changing the group id. First try our current group.
+ // changing the group id. First try our current group.
gid := Getgid()
t.Log("gid:", gid)
if err = Chown(f.Name(), -1, gid); err != nil {
}
// Use TempDir() to make sure we're on a local file system,
// so that the group ids returned by Getgroups will be allowed
- // on the file. On NFS, the Getgroups groups are
+ // on the file. On NFS, the Getgroups groups are
// basically useless.
f := newFile("TestFileChown", t)
defer Remove(f.Name())
}
// Can't change uid unless root, but can try
- // changing the group id. First try our current group.
+ // changing the group id. First try our current group.
gid := Getgid()
t.Log("gid:", gid)
if err = f.Chown(-1, gid); err != nil {
}
// Use TempDir() to make sure we're on a local file system,
// so that the group ids returned by Getgroups will be allowed
- // on the file. On NFS, the Getgroups groups are
+ // on the file. On NFS, the Getgroups groups are
// basically useless.
f := newFile("TestLchown", t)
defer Remove(f.Name())
defer Remove(linkname)
// Can't change uid unless root, but can try
- // changing the group id. First try our current group.
+ // changing the group id. First try our current group.
gid := Getgid()
t.Log("gid:", gid)
if err = Lchown(linkname, -1, gid); err != nil {
}
if !os.SameFile(fi, fi2) {
- t.Fatal("race condition occured")
+ t.Fatal("race condition occurred")
}
}
// RemoveAll removes path and any children it contains.
// It removes everything it can but returns the first error
-// it encounters. If the path does not exist, RemoveAll
+// it encounters. If the path does not exist, RemoveAll
// returns nil (no error).
func RemoveAll(path string) error {
// Simple case: if Remove works, we're done.
// No error checking here: either RemoveAll
// will or won't be able to remove dpath;
// either way we want to see if it removes fpath
- // and path/zzz. Reasons why RemoveAll might
+ // and path/zzz. Reasons why RemoveAll might
// succeed in removing dpath as well include:
// * running as root
// * running on a file system without permissions (FAT)
}
}
-// This is a helper for TestStdPipe. It's not a test in itself.
+// This is a helper for TestStdPipe. It's not a test in itself.
func TestStdPipeHelper(t *testing.T) {
if os.Getenv("GO_TEST_STD_PIPE_HELPER_SIGNAL") != "" {
signal.Notify(make(chan os.Signal, 1), syscall.SIGPIPE)
Types of signals
The signals SIGKILL and SIGSTOP may not be caught by a program, and
-therefore can not be affected by this package.
+therefore cannot be affected by this package.
Synchronous signals are signals triggered by errors in program
execution: SIGBUS, SIGFPE, and SIGSEGV. These are only considered
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// Package signal will not block sending to c: the caller must ensure
// that c has sufficient buffer space to keep up with the expected
-// signal rate. For a channel used for notification of just one signal value,
+// signal rate. For a channel used for notification of just one signal value,
// a buffer of size 1 is sufficient.
//
// It is allowed to call Notify multiple times with the same channel:
// Lstat returns a FileInfo describing the named file.
// If the file is a symbolic link, the returned FileInfo
-// describes the symbolic link. Lstat makes no attempt to follow the link.
+// describes the symbolic link. Lstat makes no attempt to follow the link.
// If there is an error, it will be of type *PathError.
func Lstat(name string) (FileInfo, error) {
return Stat(name)
// Lstat returns the FileInfo structure describing the named file.
// If the file is a symbolic link, the returned FileInfo
-// describes the symbolic link. Lstat makes no attempt to follow the link.
+// describes the symbolic link. Lstat makes no attempt to follow the link.
// If there is an error, it will be of type *PathError.
func Lstat(name string) (FileInfo, error) {
if len(name) == 0 {
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Simple converions to avoid depending on strconv.
+// Simple conversions to avoid depending on strconv.
package os
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// A FileMode represents a file's mode and permission bits.
// The bits have the same definition on all systems, so that
// information about files can be moved from one system
-// to another portably. Not all bits apply to all systems.
+// to another portably. Not all bits apply to all systems.
// The only required bit is ModeDir for directories.
type FileMode uint32
Name: C.GoString(pwd.pw_gecos),
HomeDir: C.GoString(pwd.pw_dir),
}
- // The pw_gecos field isn't quite standardized. Some docs
+ // The pw_gecos field isn't quite standardized. Some docs
// say: "It is expected to be a comma separated list of
// personal data where the first item is the full name of the
// user."
if err == nil {
return name, nil
}
- // domain worked neigher as a domain nor as a server
+ // domain worked neither as a domain nor as a server
// the domain server could be unavailable;
// pretend the username is the full name
return username, nil
star, chunk, pattern = scanChunk(pattern)
if star && chunk == "" {
// Trailing * matches rest of string unless it has a /.
- return strings.Index(name, string(Separator)) < 0, nil
+ return !strings.Contains(name, string(Separator)), nil
}
// Look for match at current position.
t, ok, err := matchChunk(chunk, name)
// recognized by Match.
func hasMeta(path string) bool {
// TODO(niemeyer): Should other magic characters be added here?
- return strings.IndexAny(path, "*?[") >= 0
+ return strings.ContainsAny(path, "*?[")
}
pattern := tt.pattern
s := tt.s
if runtime.GOOS == "windows" {
- if strings.Index(pattern, "\\") >= 0 {
+ if strings.Contains(pattern, "\\") {
// no escape allowed on windows.
continue
}
)
// Clean returns the shortest path name equivalent to path
-// by purely lexical processing. It applies the following rules
+// by purely lexical processing. It applies the following rules
// iteratively until no further processing can be done:
//
// 1. Replace multiple Separator elements with a single one.
// Abs returns an absolute representation of path.
// If the path is not absolute it will be joined with the current
-// working directory to turn it into an absolute path. The absolute
+// working directory to turn it into an absolute path. The absolute
// path name for a given file is not guaranteed to be unique.
func Abs(path string) (string, error) {
return abs(path)
checkMarks(t, true)
errors = errors[0:0]
- // Test permission errors. Only possible if we're not root
+ // Test permission errors. Only possible if we're not root
// and only on some file systems (AFS, FAT). To avoid errors during
// all.bash on those file systems, skip during go test -short.
if os.Getuid() > 0 && !testing.Short() {
// joinNonEmpty is like join, but it assumes that the first element is non-empty.
func joinNonEmpty(elem []string) string {
if len(elem[0]) == 2 && elem[0][1] == ':' {
- // First element is drive leter without terminating slash.
+ // First element is drive letter without terminating slash.
// Keep path relative to current directory on that drive.
return Clean(elem[0] + strings.Join(elem[1:], string(Separator)))
}
return "", err
}
if runtime.GOOS == "windows" {
- // walkLinks(".", ...) always retuns "." on unix.
+ // walkLinks(".", ...) always returns "." on unix.
		// But on windows it returns the symlink target, if the
		// current directory is a symlink. Stop the walk if the
		// symlink target is not an absolute path, and return "."
star, chunk, pattern = scanChunk(pattern)
if star && chunk == "" {
// Trailing * matches rest of string unless it has a /.
- return strings.Index(name, "/") < 0, nil
+ return !strings.Contains(name, "/"), nil
}
// Look for match at current position.
t, ok, err := matchChunk(chunk, name)
}
// Clean returns the shortest path name equivalent to path
-// by purely lexical processing. It applies the following rules
+// by purely lexical processing. It applies the following rules
// iteratively until no further processing can be done:
//
// 1. Replace multiple slashes with a single slash.
// off the stack into the frame will store an *Inner there, and then if a garbage collection
// happens to scan that argument frame before it is discarded, it will scan the *Inner
// memory as if it were an *Outer. If the two have different memory layouts, the
-// collection will intepret the memory incorrectly.
+// collection will interpret the memory incorrectly.
//
// One such possible incorrect interpretation is to treat two arbitrary memory words
// (Inner.P1 and Inner.P2 below) as an interface (Outer.R below). Because interpreting
2 * PtrSize,
[]byte{1},
[]byte{1},
- // Note: this one is tricky, as the receiver is not a pointer. But we
+ // Note: this one is tricky, as the receiver is not a pointer. But we
// pass the receiver by reference to the autogenerated pointer-receiver
// version of the function.
})
t.Errorf("allocs per chan send/recv: want 1 got %f", allocs)
}
// Note: there is one allocation in reflect.recv which seems to be
- // a limitation of escape analysis. If that is ever fixed the
+ // a limitation of escape analysis. If that is ever fixed the
// allocs < 0.5 condition will trigger and this test should be fixed.
}
+
+type nameTest struct {
+ v interface{}
+ want string
+}
+
+var nameTests = []nameTest{
+ {int32(0), "int32"},
+ {D1{}, "D1"},
+ {[]D1{}, ""},
+ {(chan D1)(nil), ""},
+ {(func() D1)(nil), ""},
+}
+
+func TestNames(t *testing.T) {
+ for _, test := range nameTests {
+ if got := TypeOf(test.v).Name(); got != test.want {
+ t.Errorf("%T Name()=%q, want %q", test.v, got, test.want)
+ }
+ }
+}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
import "unsafe"
// During deepValueEqual, must keep track of checks that are
-// in progress. The comparison algorithm assumes that all
+// in progress. The comparison algorithm assumes that all
// checks in progress are true when it reencounters them.
// Visited comparisons are stored in a map indexed by visit.
type visit struct {
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
var r []string
for _, m := range typelinks() {
for _, t := range m {
- r = append(r, *t.string)
+ r = append(r, t.string)
}
}
return r
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// license that can be found in the LICENSE file.
// Package reflect implements run-time reflection, allowing a program to
-// manipulate objects with arbitrary types. The typical use is to take a value
+// manipulate objects with arbitrary types. The typical use is to take a value
// with static type interface{} and extract its dynamic type information by
// calling TypeOf, which returns a Type.
//
// Type is the representation of a Go type.
//
-// Not all methods apply to all kinds of types. Restrictions,
+// Not all methods apply to all kinds of types. Restrictions,
// if any, are noted in the documentation for each method.
// Use the Kind method to find out the kind of type before
-// calling kind-specific methods. Calling a method
+// calling kind-specific methods. Calling a method
// inappropriate to the kind of type causes a run-time panic.
type Type interface {
// Methods applicable to all types.
// String returns a string representation of the type.
// The string representation may use shortened package names
// (e.g., base64 instead of "encoding/base64") and is not
- // guaranteed to be unique among types. To test for equality,
+ // guaranteed to be unique among types. To test for equality,
// compare the Types directly.
String() string
ChanDir() ChanDir
// IsVariadic reports whether a function type's final input parameter
- // is a "..." parameter. If so, t.In(t.NumIn() - 1) returns the parameter's
+ // is a "..." parameter. If so, t.In(t.NumIn() - 1) returns the parameter's
// implicit actual type []T.
//
// For concreteness, if t represents func(x int, y ... float64), then
Field(i int) StructField
// FieldByIndex returns the nested field corresponding
- // to the index sequence. It is equivalent to calling Field
+ // to the index sequence. It is equivalent to calling Field
// successively for each index i.
// It panics if the type's Kind is not Struct.
FieldByIndex(index []int) StructField
kind uint8 // enumeration for C
alg *typeAlg // algorithm table
gcdata *byte // garbage collection data
- string *string // string form; unnecessary but undeniably useful
+ string string // string form; unnecessary but undeniably useful
*uncommonType // (relatively) uncommon fields
- ptrToThis *rtype // type for pointer to this type, if used in binary or has methods
}
// a copy of runtime.typeAlg
// Using a pointer to this struct reduces the overall size required
// to describe an unnamed type with no methods.
type uncommonType struct {
- name *string // name of type
pkgPath *string // import path; nil for built-in types like int, string
methods []method // methods associated with type
}
type Method struct {
// Name is the method name.
// PkgPath is the package path that qualifies a lower case (unexported)
- // method name. It is empty for upper case (exported) method names.
+ // method name. It is empty for upper case (exported) method names.
// The combination of PkgPath and Name uniquely identifies a method
// in a method set.
// See https://golang.org/ref/spec#Uniqueness_of_identifiers
return *t.pkgPath
}
-func (t *uncommonType) Name() string {
- if t == nil || t.name == nil {
- return ""
- }
- return *t.name
-}
-
-func (t *rtype) String() string { return *t.string }
+func (t *rtype) String() string { return t.string }
func (t *rtype) Size() uintptr { return t.size }
return t.uncommonType.PkgPath()
}
+func hasPrefix(s, prefix string) bool {
+ return len(s) >= len(prefix) && s[:len(prefix)] == prefix
+}
+
func (t *rtype) Name() string {
- return t.uncommonType.Name()
+ if hasPrefix(t.string, "map[") {
+ return ""
+ }
+ if hasPrefix(t.string, "struct {") {
+ return ""
+ }
+ if hasPrefix(t.string, "chan ") {
+ return ""
+ }
+ if hasPrefix(t.string, "func(") {
+ return ""
+ }
+ if t.string[0] == '[' || t.string[0] == '*' {
+ return ""
+ }
+ i := len(t.string) - 1
+ for i >= 0 {
+ if t.string[i] == '.' {
+ break
+ }
+ i--
+ }
+ return t.string[i+1:]
}
func (t *rtype) ChanDir() ChanDir {
// Name is the field name.
Name string
// PkgPath is the package path that qualifies a lower case (unexported)
- // field name. It is empty for upper case (exported) field names.
+ // field name. It is empty for upper case (exported) field names.
// See https://golang.org/ref/spec#Uniqueness_of_identifiers
PkgPath string
f.Offset = p.offset
// NOTE(rsc): This is the only allocation in the interface
- // presented by a reflect.Type. It would be nice to avoid,
+ // presented by a reflect.Type. It would be nice to avoid,
// at least in the common cases, but we need to make sure
// that misbehaving clients of reflect cannot affect other
- // uses of reflect. One possibility is CL 5371098, but we
+ // uses of reflect. One possibility is CL 5371098, but we
// postponed that ugliness until there is a demonstrated
- // need for the performance. This is issue 2320.
+ // need for the performance. This is issue 2320.
f.Index = []int{i}
return
}
}
func (t *rtype) ptrTo() *rtype {
- if p := t.ptrToThis; p != nil {
- return p
- }
-
- // Otherwise, synthesize one.
- // This only happens for pointers with no methods.
- // We keep the mapping in a map on the side, because
- // this operation is rare and a separate map lets us keep
- // the type structures in read-only memory.
+ // Check the cache.
ptrMap.RLock()
if m := ptrMap.m; m != nil {
if p := m[t]; p != nil {
}
}
ptrMap.RUnlock()
+
ptrMap.Lock()
if ptrMap.m == nil {
ptrMap.m = make(map[*rtype]*ptrType)
}
// Look in known types.
- s := "*" + *t.string
+ s := "*" + t.string
for _, tt := range typesByString(s) {
p = (*ptrType)(unsafe.Pointer(tt))
if p.elem == t {
prototype := *(**ptrType)(unsafe.Pointer(&iptr))
*p = *prototype
- p.string = &s
+ p.string = s
// For the type structures linked into the binary, the
// compiler provides a good hash of the string.
p.hash = fnv1(t.hash, '*')
p.uncommonType = nil
- p.ptrToThis = nil
p.elem = t
ptrMap.m[t] = p
// Note that strings are not unique identifiers for types:
// there can be more than one with a given string.
// Only types we might want to look up are included:
-// channels, maps, slices, and arrays.
+// pointers, channels, maps, slices, and arrays.
func typelinks() [][]*rtype
// typesByString returns the subslice of typelinks() whose elements have
for i < j {
h := i + (j-i)/2 // avoid overflow when computing h
// i ≤ h < j
- if !(*typ[h].string >= s) {
+ if !(typ[h].string >= s) {
i = h + 1 // preserves f(i-1) == false
} else {
j = h // preserves f(j) == true
// We could do a second binary search, but the caller is going
// to do a linear scan anyway.
j = i
- for j < len(typ) && *typ[j].string == s {
+ for j < len(typ) && typ[j].string == s {
j++
}
lookupCache.Unlock()
panic("reflect.ChanOf: invalid dir")
case SendDir:
- s = "chan<- " + *typ.string
+ s = "chan<- " + typ.string
case RecvDir:
- s = "<-chan " + *typ.string
+ s = "<-chan " + typ.string
case BothDir:
- s = "chan " + *typ.string
+ s = "chan " + typ.string
}
for _, tt := range typesByString(s) {
ch := (*chanType)(unsafe.Pointer(tt))
ch := new(chanType)
*ch = *prototype
ch.dir = uintptr(dir)
- ch.string = &s
+ ch.string = s
ch.hash = fnv1(typ.hash, 'c', byte(dir))
ch.elem = typ
ch.uncommonType = nil
- ch.ptrToThis = nil
return cachePut(ckey, &ch.rtype)
}
}
// Look in known types.
- s := "map[" + *ktyp.string + "]" + *etyp.string
+ s := "map[" + ktyp.string + "]" + etyp.string
for _, tt := range typesByString(s) {
mt := (*mapType)(unsafe.Pointer(tt))
if mt.key == ktyp && mt.elem == etyp {
var imap interface{} = (map[unsafe.Pointer]unsafe.Pointer)(nil)
mt := new(mapType)
*mt = **(**mapType)(unsafe.Pointer(&imap))
- mt.string = &s
+ mt.string = s
mt.hash = fnv1(etyp.hash, 'm', byte(ktyp.hash>>24), byte(ktyp.hash>>16), byte(ktyp.hash>>8), byte(ktyp.hash))
mt.key = ktyp
mt.elem = etyp
mt.reflexivekey = isReflexive(ktyp)
mt.needkeyupdate = needKeyUpdate(ktyp)
mt.uncommonType = nil
- mt.ptrToThis = nil
return cachePut(ckey, &mt.rtype)
}
}
// Populate the remaining fields of ft and store in cache.
- ft.string = &str
+ ft.string = str
ft.uncommonType = nil
- ft.ptrToThis = nil
funcLookupCache.m[hash] = append(funcLookupCache.m[hash], &ft.rtype)
return &ft.rtype
}
if ft.dotdotdot && i == len(ft.in)-1 {
repr = append(repr, "..."...)
- repr = append(repr, *(*sliceType)(unsafe.Pointer(t)).elem.string...)
+ repr = append(repr, (*sliceType)(unsafe.Pointer(t)).elem.string...)
} else {
- repr = append(repr, *t.string...)
+ repr = append(repr, t.string...)
}
}
repr = append(repr, ')')
if i > 0 {
repr = append(repr, ", "...)
}
- repr = append(repr, *t.string...)
+ repr = append(repr, t.string...)
}
if len(ft.out) > 1 {
repr = append(repr, ')')
// Make sure these routines stay in sync with ../../runtime/hashmap.go!
// These types exist only for GC, so we only fill out GC relevant info.
-// Currently, that's just size and the GC program. We also fill in string
+// Currently, that's just size and the GC program. We also fill in string
// for possible debugging use.
const (
bucketSize uintptr = 8
b.ptrdata = ptrdata
b.kind = kind
b.gcdata = gcdata
- s := "bucket(" + *ktyp.string + "," + *etyp.string + ")"
- b.string = &s
+ s := "bucket(" + ktyp.string + "," + etyp.string + ")"
+ b.string = s
return b
}
}
// Look in known types.
- s := "[]" + *typ.string
+ s := "[]" + typ.string
for _, tt := range typesByString(s) {
slice := (*sliceType)(unsafe.Pointer(tt))
if slice.elem == typ {
prototype := *(**sliceType)(unsafe.Pointer(&islice))
slice := new(sliceType)
*slice = *prototype
- slice.string = &s
+ slice.string = s
slice.hash = fnv1(typ.hash, '[')
slice.elem = typ
slice.uncommonType = nil
- slice.ptrToThis = nil
return cachePut(ckey, &slice.rtype)
}
}
// Look in known types.
- s := "[" + strconv.Itoa(count) + "]" + *typ.string
+ s := "[" + strconv.Itoa(count) + "]" + typ.string
for _, tt := range typesByString(s) {
array := (*arrayType)(unsafe.Pointer(tt))
if array.elem == typ {
prototype := *(**arrayType)(unsafe.Pointer(&iarray))
array := new(arrayType)
*array = *prototype
- array.string = &s
+ array.string = s
array.hash = fnv1(typ.hash, '[')
for n := uint32(count); n > 0; n >>= 8 {
array.hash = fnv1(array.hash, byte(n))
array.align = typ.align
array.fieldAlign = typ.fieldAlign
array.uncommonType = nil
- array.ptrToThis = nil
array.len = uintptr(count)
array.slice = slice.(*rtype)
// function arguments and return values for the function type t.
// If rcvr != nil, rcvr specifies the type of the receiver.
// The returned type exists only for GC, so we only fill out GC relevant info.
-// Currently, that's just size and the GC program.  We also fill in
+// Currently, that's just size and the GC program. We also fill in
// the name for possible debugging use.
func funcLayout(t *rtype, rcvr *rtype) (frametype *rtype, argSize, retOffset uintptr, stk *bitVector, framePool *sync.Pool) {
if t.Kind() != Func {
var s string
if rcvr != nil {
- s = "methodargs(" + *rcvr.string + ")(" + *t.string + ")"
+ s = "methodargs(" + rcvr.string + ")(" + t.string + ")"
} else {
- s = "funcargs(" + *t.string + ")"
+ s = "funcargs(" + t.string + ")"
}
- x.string = &s
+ x.string = s
// cache result for future callers
if layoutCache.m == nil {
// Value is the reflection interface to a Go value.
//
-// Not all methods apply to all kinds of values.  Restrictions,
+// Not all methods apply to all kinds of values. Restrictions,
// if any, are noted in the documentation for each method.
// Use the Kind method to find out the kind of value before
-// calling kind-specific methods.  Calling a method
+// calling kind-specific methods. Calling a method
// inappropriate to the kind of type causes a run time panic.
//
// The zero Value represents no value.
flag
// A method value represents a curried method invocation
- // like r.Read for some receiver r.  The typ+val+flag bits describe
+ // like r.Read for some receiver r. The typ+val+flag bits describe
// the receiver r, but the flag's Kind bits say Func (methods are
// functions), and the top bits of the flag give the method number
// in r's type's method table.
}
e.word = ptr
case v.flag&flagIndir != 0:
- // Value is indirect, but interface is direct.  We need
+ // Value is indirect, but interface is direct. We need
// to load the data at v.ptr into the interface data word.
e.word = *(*unsafe.Pointer)(v.ptr)
default:
// Value is direct, and so is the interface.
e.word = v.ptr
}
- // Now, fill in the type portion.  We're very careful here not
+ // Now, fill in the type portion. We're very careful here not
// to have any operation between the e.word and e.typ assignments
// that would let the garbage collector observe the partially-built
// interface value.
}
// A ValueError occurs when a Value method is invoked on
-// a Value that does not support it.  Such cases are documented
+// a Value that does not support it. Such cases are documented
// in the description of each method.
type ValueError struct {
Method string
}
// CanAddr reports whether the value's address can be obtained with Addr.
-// Such values are called addressable.  A value is addressable if it is
+// Such values are called addressable. A value is addressable if it is
// an element of a slice, an element of an addressable array,
// a field of an addressable struct, or the result of dereferencing a pointer.
// If CanAddr returns false, calling Addr will panic.
return
}
-// v is a method receiver.  Store at p the word which is used to
+// v is a method receiver. Store at p the word which is used to
// encode that receiver at the start of the argument list.
// Reflect uses the "interface" calling convention for
// methods, which always uses one word to record the receiver.
// Do not require key to be exported, so that DeepEqual
// and other programs can use all the keys returned by
- // MapKeys as arguments to MapIndex.  If either the map
+ // MapKeys as arguments to MapIndex. If either the map
// or the key is unexported, though, the result will be
- // considered unexported.  This is consistent with the
+ // considered unexported. This is consistent with the
// behavior for structs, which allow read but not write
// of unexported fields.
key = key.assignTo("reflect.Value.MapIndex", tt.key, nil)
key := mapiterkey(it)
if key == nil {
// Someone deleted an entry from the map since we
- // called maplen above.  It's a data race, but nothing
+ // called maplen above. It's a data race, but nothing
// we can do about it.
break
}
// result is zero if and only if v is a nil func Value.
//
// If v's Kind is Slice, the returned pointer is to the first
-// element of the slice.  If the slice is nil the returned value
+// element of the slice. If the slice is nil the returned value
// is 0. If the slice is empty but non-nil the return value is non-zero.
func (v Value) Pointer() uintptr {
// TODO: deprecate
val unsafe.Pointer // ptr to data (SendDir) or ptr to receive buffer (RecvDir)
}
-// rselect runs a select.  It returns the index of the chosen case.
+// rselect runs a select. It returns the index of the chosen case.
// If the case was a receive, val is filled in with the received value.
// The conventional OK bool indicates whether the receive corresponds
// to a sent value.
}
// ValueOf returns a new Value initialized to the concrete value
-// stored in the interface i.  ValueOf(nil) returns the zero Value.
+// stored in the interface i. ValueOf(nil) returns the zero Value.
func ValueOf(i interface{}) Value {
if i == nil {
return Value{}
}
// TODO: Maybe allow contents of a Value to live on the stack.
- // For now we make the contents always escape to the heap.  It
+ // For now we make the contents always escape to the heap. It
// makes life easier in a few places (see chanrecv/mapassign
// comment below).
escapes(i)
}
// New returns a Value representing a pointer to a new zero value
-// for the specified type.  That is, the returned Value's Type is PtrTo(typ).
+// for the specified type. That is, the returned Value's Type is PtrTo(typ).
func New(typ Type) Value {
if typ == nil {
panic("reflect: New(nil)")
func chanlen(ch unsafe.Pointer) int
// Note: some of the noescape annotations below are technically a lie,
-// but safe in the context of this package.  Functions like chansend
+// but safe in the context of this package. Functions like chansend
// and mapassign don't escape the referent, but may escape anything
// the referent points to (they do shallow copies of the referent).
// It is safe in this package because the referent may only point
// Optimization: rather than push and pop,
// code that is going to Push and continue
// the loop simply updates ip, p, and arg
- // and jumps to CheckAndLoop.  We have to
+ // and jumps to CheckAndLoop. We have to
// do the ShouldVisit check that Push
// would have, but we avoid the stack
// manipulation.
}
panic("bad arg in InstCapture")
- continue
case syntax.InstEmptyWidth:
if syntax.EmptyOp(inst.Arg)&^i.context(pos) != 0 {
// Otherwise, continue on in hope of a longer match.
continue
}
- panic("unreachable")
}
return m.matched
// An entry is an entry on a queue.
// It holds both the instruction pc and the actual thread.
// Some queue entries are just placeholders so that the machine
-// knows it has considered that pc.  Such entries have t == nil.
+// knows it has considered that pc. Such entries have t == nil.
type entry struct {
pc uint32
t *thread
// considered during RE2's exhaustive tests, which run all possible
// regexps over a given set of atoms and operators, up to a given
// complexity, over all possible strings over a given alphabet,
-// up to a given size.  Rather than try to link with RE2, we read a
+// up to a given size. Rather than try to link with RE2, we read a
// log file containing the test cases and the expected matches.
// The log file, re2-exhaustive.txt, is generated by running 'make log'
// in the open source RE2 distribution https://github.com/google/re2/.
// -;0-3 0-1 1-2 2-3
//
// The stanza begins by defining a set of strings, quoted
-// using Go double-quote syntax, one per line.  Then the
+// using Go double-quote syntax, one per line. Then the
// regexps section gives a sequence of regexps to run on
-// the strings.  In the block that follows a regexp, each line
+// the strings. In the block that follows a regexp, each line
// gives the semicolon-separated match results of running
// the regexp on the corresponding string.
// Each match result is either a single -, meaning no match, or a
// space-separated sequence of pairs giving the match and
-// submatch indices.  An unmatched subexpression formats
+// submatch indices. An unmatched subexpression formats
// its pair as a single - (not illustrated above). For now
// each regexp run produces two match results, one for a
// ``full match'' that restricts the regexp to matching the entire
// string or nothing, and one for a ``partial match'' that gives
// the leftmost first match found in the string.
//
-// Lines beginning with # are comments.  Lines beginning with
+// Lines beginning with # are comments. Lines beginning with
// a capital letter are test names printed during RE2's test suite
// and are echoed into t but otherwise ignored.
//
if !isSingleBytes(text) && strings.Contains(re.String(), `\B`) {
// RE2's \B considers every byte position,
// so it sees 'not word boundary' in the
- // middle of UTF-8 sequences.  This package
+ // middle of UTF-8 sequences. This package
// only considers the positions between runes,
- // so it disagrees.  Skip those cases.
+ // so it disagrees. Skip those cases.
continue
}
res := strings.Split(line, ";")
// h REG_MULTIREF multiple digit backref
// i REG_ICASE ignore case
// j REG_SPAN . matches \n
- // k REG_ESCAPE \ to ecape [...] delimiter
+ // k REG_ESCAPE \ to escape [...] delimiter
// l REG_LEFT implicit ^...
// m REG_MINIMAL minimal match
// n REG_NEWLINE explicit \n match
-// Copyright 2014 The Go Authors.  All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// OnePassPrefix returns a literal string that all matches for the
-// regexp must start with.  Complete is true if the prefix
+// regexp must start with. Complete is true if the prefix
// is the entire match. Pc is the index of the last rune instruction
// in the string. The OnePassPrefix skips over the mandatory
// EmptyBeginText
-// Copyright 2014 The Go Authors.  All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// All characters are UTF-8-encoded code points.
//
// There are 16 methods of Regexp that match a regular expression and identify
-// the matched text.  Their names are matched by this regular expression:
+// the matched text. Their names are matched by this regular expression:
//
// Find(All)?(String)?(Submatch)?(Index)?
//
// If 'All' is present, the routine matches successive non-overlapping
-// matches of the entire expression.  Empty matches abutting a preceding
-// match are ignored.  The return value is a slice containing the successive
-// return values of the corresponding non-'All' routine.  These routines take
+// matches of the entire expression. Empty matches abutting a preceding
+// match are ignored. The return value is a slice containing the successive
+// return values of the corresponding non-'All' routine. These routines take
// an extra integer argument, n; if n >= 0, the function returns at most n
// matches/submatches.
//
//
// If 'Index' is present, matches and submatches are identified by byte index
// pairs within the input string: result[2*n:2*n+2] identifies the indexes of
-// the nth submatch.  The pair for n==0 identifies the match of the entire
-// expression.  If 'Index' is not present, the match is identified by the
-// text of the match/submatch.  If an index is negative, it means that
+// the nth submatch. The pair for n==0 identifies the match of the entire
+// expression. If 'Index' is not present, the match is identified by the
+// text of the match/submatch. If an index is negative, it means that
// subexpression did not match any string in the input.
//
// There is also a subset of the methods that can be applied to text read
//
// MatchReader, FindReaderIndex, FindReaderSubmatchIndex
//
-// This set may grow.  Note that regular expression matches may need to
+// This set may grow. Note that regular expression matches may need to
// examine text beyond the text returned by a match, so the methods that
// match text from a RuneReader may read arbitrarily far into the input
// before returning.
}
// SubexpNames returns the names of the parenthesized subexpressions
-// in this Regexp.  The name for the first sub-expression is names[1],
+// in this Regexp. The name for the first sub-expression is names[1],
// so that if m is a match slice, the name for m[i] is SubexpNames()[i].
// Since the Regexp as a whole cannot be named, names[0] is always
-// the empty string.  The slice should not be modified.
+// the empty string. The slice should not be modified.
func (re *Regexp) SubexpNames() []string {
return re.subexpNames
}
}
// LiteralPrefix returns a literal string that must begin any match
-// of the regular expression re.  It returns the boolean true if the
+// of the regular expression re. It returns the boolean true if the
// literal string comprises the entire regular expression.
func (re *Regexp) LiteralPrefix() (prefix string, complete bool) {
return re.prefix, re.prefixComplete
}
// MatchReader checks whether a textual regular expression matches the text
-// read by the RuneReader.  More complicated queries need to use Compile and
+// read by the RuneReader. More complicated queries need to use Compile and
// the full Regexp interface.
func MatchReader(pattern string, r io.RuneReader) (matched bool, err error) {
re, err := Compile(pattern)
}
// MatchString checks whether a textual regular expression
-// matches a string.  More complicated queries need
+// matches a string. More complicated queries need
// to use Compile and the full Regexp interface.
func MatchString(pattern string, s string) (matched bool, err error) {
re, err := Compile(pattern)
}
// Match checks whether a textual regular expression
-// matches a byte slice.  More complicated queries need
+// matches a byte slice. More complicated queries need
// to use Compile and the full Regexp interface.
func Match(pattern string, b []byte) (matched bool, err error) {
re, err := Compile(pattern)
}
// ReplaceAllString returns a copy of src, replacing matches of the Regexp
-// with the replacement string repl.  Inside repl, $ signs are interpreted as
+// with the replacement string repl. Inside repl, $ signs are interpreted as
// in Expand, so for instance $1 represents the text of the first submatch.
func (re *Regexp) ReplaceAllString(src, repl string) string {
n := 2
- if strings.Index(repl, "$") >= 0 {
+ if strings.Contains(repl, "$") {
n = 2 * (re.numSubexp + 1)
}
b := re.replaceAll(nil, src, n, func(dst []byte, match []int) []byte {
}
// ReplaceAllLiteralString returns a copy of src, replacing matches of the Regexp
-// with the replacement string repl.  The replacement repl is substituted directly,
+// with the replacement string repl. The replacement repl is substituted directly,
// without using Expand.
func (re *Regexp) ReplaceAllLiteralString(src, repl string) string {
return string(re.replaceAll(nil, src, 2, func(dst []byte, match []int) []byte {
// ReplaceAllStringFunc returns a copy of src in which all matches of the
// Regexp have been replaced by the return value of function repl applied
-// to the matched substring.  The replacement returned by repl is substituted
+// to the matched substring. The replacement returned by repl is substituted
// directly, without using Expand.
func (re *Regexp) ReplaceAllStringFunc(src string, repl func(string) string) string {
b := re.replaceAll(nil, src, 2, func(dst []byte, match []int) []byte {
searchPos += width
} else if searchPos+1 > a[1] {
// This clause is only needed at the end of the input
- // string.  In that case, DecodeRuneInString returns width=0.
+ // string. In that case, DecodeRuneInString returns width=0.
searchPos++
} else {
searchPos = a[1]
}
// ReplaceAll returns a copy of src, replacing matches of the Regexp
-// with the replacement text repl.  Inside repl, $ signs are interpreted as
+// with the replacement text repl. Inside repl, $ signs are interpreted as
// in Expand, so for instance $1 represents the text of the first submatch.
func (re *Regexp) ReplaceAll(src, repl []byte) []byte {
n := 2
}
// ReplaceAllLiteral returns a copy of src, replacing matches of the Regexp
-// with the replacement bytes repl.  The replacement repl is substituted directly,
+// with the replacement bytes repl. The replacement repl is substituted directly,
// without using Expand.
func (re *Regexp) ReplaceAllLiteral(src, repl []byte) []byte {
return re.replaceAll(src, "", 2, func(dst []byte, match []int) []byte {
// ReplaceAllFunc returns a copy of src in which all matches of the
// Regexp have been replaced by the return value of function repl applied
-// to the matched byte slice.  The replacement returned by repl is substituted
+// to the matched byte slice. The replacement returned by repl is substituted
// directly, without using Expand.
func (re *Regexp) ReplaceAllFunc(src []byte, repl func([]byte) []byte) []byte {
return re.replaceAll(src, "", 2, func(dst []byte, match []int) []byte {
// QuoteMeta returns a string that quotes all regular expression metacharacters
// inside the argument text; the returned string is a regular expression matching
-// the literal text.  For example, QuoteMeta(`[foo]`) returns `\[foo\]`.
+// the literal text. For example, QuoteMeta(`[foo]`) returns `\[foo\]`.
func QuoteMeta(s string) string {
b := make([]byte, 2*len(s))
}
// FindIndex returns a two-element slice of integers defining the location of
-// the leftmost match in b of the regular expression.  The match itself is at
+// the leftmost match in b of the regular expression. The match itself is at
// b[loc[0]:loc[1]].
// A return value of nil indicates no match.
func (re *Regexp) FindIndex(b []byte) (loc []int) {
}
// FindString returns a string holding the text of the leftmost match in s of the regular
-// expression.  If there is no match, the return value is an empty string,
+// expression. If there is no match, the return value is an empty string,
// but it will also be empty if the regular expression successfully matches
-// an empty string.  Use FindStringIndex or FindStringSubmatch if it is
+// an empty string. Use FindStringIndex or FindStringSubmatch if it is
// necessary to distinguish these cases.
func (re *Regexp) FindString(s string) string {
a := re.doExecute(nil, nil, s, 0, 2)
}
// FindStringIndex returns a two-element slice of integers defining the
-// location of the leftmost match in s of the regular expression.  The match
+// location of the leftmost match in s of the regular expression. The match
// itself is at s[loc[0]:loc[1]].
// A return value of nil indicates no match.
func (re *Regexp) FindStringIndex(s string) (loc []int) {
// FindReaderIndex returns a two-element slice of integers defining the
// location of the leftmost match of the regular expression in text read from
-// the RuneReader.  The match text was found in the input stream at
+// the RuneReader. The match text was found in the input stream at
// byte offset loc[0] through loc[1]-1.
// A return value of nil indicates no match.
func (re *Regexp) FindReaderIndex(r io.RuneReader) (loc []int) {
// Expand appends template to dst and returns the result; during the
// append, Expand replaces variables in the template with corresponding
-// matches drawn from src.  The match slice should have been returned by
+// matches drawn from src. The match slice should have been returned by
// FindSubmatchIndex.
//
// In the template, a variable is denoted by a substring of the form
// $name or ${name}, where name is a non-empty sequence of letters,
-// digits, and underscores.  A purely numeric name like $1 refers to
+// digits, and underscores. A purely numeric name like $1 refers to
// the submatch with the corresponding index; other names refer to
-// capturing parentheses named with the (?P<name>...) syntax.  A
+// capturing parentheses named with the (?P<name>...) syntax. A
// reference to an out of range or unmatched index or a name that is not
// present in the regular expression is replaced with an empty slice.
//
// FindReaderSubmatchIndex returns a slice holding the index pairs
// identifying the leftmost match of the regular expression of text read by
// the RuneReader, and the matches, if any, of its subexpressions, as defined
-// by the 'Submatch' and 'Index' descriptions in the package comment.  A
+// by the 'Submatch' and 'Index' descriptions in the package comment. A
// return value of nil indicates no match.
func (re *Regexp) FindReaderSubmatchIndex(r io.RuneReader) []int {
return re.pad(re.doExecute(r, nil, "", 0, re.prog.NumCap))
// A patchList is a list of instruction pointers that need to be filled in (patched).
// Because the pointers haven't been filled in yet, we can reuse their storage
-// to hold the list.  It's kind of sleazy, but works well in practice.
+// to hold the list. It's kind of sleazy, but works well in practice.
// See http://swtch.com/~rsc/regexp/regexp1.html for inspiration.
//
// These aren't really pointers: they're integers, so we can reinterpret them
-// this way without using package unsafe.  A value l denotes
+// this way without using package unsafe. A value l denotes
// p.inst[l>>1].Out (l&1==0) or .Arg (l&1==1).
// l == 0 denotes the empty list, okay because we start every program
// with a fail instruction, so we'll never want to point at its output link.
-// Copyright 2012 The Go Authors.  All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors.  All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// maybeConcat implements incremental concatenation
-// of literal runes into string nodes.  The parser calls this
+// of literal runes into string nodes. The parser calls this
// before each push, so only the top fragment of the stack
-// might need processing.  Since this is called before a push,
+// might need processing. Since this is called before a push,
// the topmost literal is no longer subject to operators like *
// (Otherwise ab* would turn into (ab)*.)
// If r >= 0 and there's a node left over, maybeConcat uses it
}
// removeLeadingString removes the first n leading runes
-// from the beginning of re.  It returns the replacement for re.
+// from the beginning of re. It returns the replacement for re.
func (p *parser) removeLeadingString(re *Regexp, n int) *Regexp {
if re.Op == OpConcat && len(re.Sub) > 0 {
// Removing a leading string in a concatenation
// Perl 5.10 gave in and implemented the Python version too,
// but they claim that the last two are the preferred forms.
// PCRE and languages based on it (specifically, PHP and Ruby)
- // support all three as well.  EcmaScript 4 uses only the Python form.
+ // support all three as well. EcmaScript 4 uses only the Python form.
//
// In both the open source world (via Code Search) and the
// Google source tree, (?P<expr>name) is the dominant form,
- // so that's the one we implement.  One is enough.
+ // so that's the one we implement. One is enough.
if len(t) > 4 && t[2] == 'P' && t[3] == '<' {
// Pull out name.
end := strings.IndexRune(t, '>')
return t[end+1:], nil
}
- // Non-capturing group.  Might also twiddle Perl flags.
+ // Non-capturing group. Might also twiddle Perl flags.
var c rune
t = t[2:] // skip (?
flags := p.flags
if c < utf8.RuneSelf && !isalnum(c) {
// Escaped non-word characters are always themselves.
// PCRE is not quite so rigorous: it accepts things like
- // \q, but we don't.  We once rejected \_, but too many
+ // \q, but we don't. We once rejected \_, but too many
// programs and people insist on using it, so allow \_.
return c, t, nil
}
if c == '{' {
// Any number of digits in braces.
// Perl accepts any text at all; it ignores all text
- // after the first non-hex digit.  We require only hex digits,
+ // after the first non-hex digit. We require only hex digits,
// and at least one.
nhex := 0
r = 0
}
return x*16 + y, t, nil
- // C escapes.  There is no case 'b', to avoid misparsing
+ // C escapes. There is no case 'b', to avoid misparsing
// the Perl word-boundary \b as the C backspace \b
- // when in POSIX mode.  In Perl, /\b/ means word-boundary
- // but /[\b]/ means backspace.  We don't support that.
+ // when in POSIX mode. In Perl, /\b/ means word-boundary
+ // but /[\b]/ means backspace. We don't support that.
// If you want a backspace, embed a literal backspace
// character or use \x08.
case 'a':
}
// parsePerlClassEscape parses a leading Perl character class escape like \d
-// from the beginning of s.  If one is present, it appends the characters to r
+// from the beginning of s. If one is present, it appends the characters to r
// and returns the new slice r and the remainder of the string.
func (p *parser) parsePerlClassEscape(s string, r []rune) (out []rune, rest string) {
if p.flags&PerlX == 0 || len(s) < 2 || s[0] != '\\' {
}
// parseNamedClass parses a leading POSIX named character class like [:alnum:]
-// from the beginning of s.  If one is present, it appends the characters to r
+// from the beginning of s. If one is present, it appends the characters to r
// and returns the new slice r and the remainder of the string.
func (p *parser) parseNamedClass(s string, r []rune) (out []rune, rest string, err error) {
if len(s) < 2 || s[0] != '[' || s[1] != ':' {
}
// parseUnicodeClass parses a leading Unicode character class like \p{Han}
-// from the beginning of s.  If one is present, it appends the characters to r
+// from the beginning of s. If one is present, it appends the characters to r
// and returns the new slice r and the remainder of the string.
func (p *parser) parseUnicodeClass(s string, r []rune) (out []rune, rest string, err error) {
if p.flags&UnicodeGroups == 0 || len(s) < 2 || s[0] != '\\' || s[1] != 'p' && s[1] != 'P' {
hi = maxFold
}
- // Brute force.  Depend on appendRange to coalesce ranges on the fly.
+ // Brute force. Depend on appendRange to coalesce ranges on the fly.
for c := lo; c <= hi; c++ {
r = appendRange(r, c, c)
f := unicode.SimpleFold(c)
-// Copyright 2011 The Go Authors.  All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// Prefix returns a literal string that all matches for the
-// regexp must start with.  Complete is true if the prefix
+// regexp must start with. Complete is true if the prefix
// is the entire match.
func (p *Prog) Prefix() (prefix string, complete bool) {
i, _ := p.skipNop(uint32(p.Start))
}
// StartCond returns the leading empty-width conditions that must
-// be true in any match.  It returns ^EmptyOp(0) if no matches are possible.
+// be true in any match. It returns ^EmptyOp(0) if no matches are possible.
func (p *Prog) StartCond() EmptyOp {
var flag EmptyOp
pc := uint32(p.Start)
-// Copyright 2011 The Go Authors.  All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
if len(re.Rune) == 0 {
b.WriteString(`^\x00-\x{10FFFF}`)
} else if re.Rune[0] == 0 && re.Rune[len(re.Rune)-1] == unicode.MaxRune {
- // Contains 0 and MaxRune.  Probably a negated class.
+ // Contains 0 and MaxRune. Probably a negated class.
// Print the gaps.
b.WriteRune('^')
for i := 1; i < len(re.Rune)-1; i += 2 {
func escape(b *bytes.Buffer, r rune, force bool) {
if unicode.IsPrint(r) {
- if strings.IndexRune(meta, r) >= 0 || force {
+ if strings.ContainsRune(meta, r) || force {
b.WriteRune('\\')
}
b.WriteRune(r)
-// Copyright 2011 The Go Authors.  All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// and with various other simplifications, such as rewriting /(?:a+)+/ to /a+/.
// The resulting regexp will execute correctly but its string representation
// will not produce the same parse tree, because capturing parentheses
-// may have been duplicated or removed.  For example, the simplified form
+// may have been duplicated or removed. For example, the simplified form
// for /(x){1,2}/ is /(x)(x)?/ but both parentheses capture as $1.
// The returned regexp may share structure with or be the original.
func (re *Regexp) Simplify() *Regexp {
}
// simplify1 implements Simplify for the unary OpStar,
-// OpPlus, and OpQuest operators.  It returns the simple regexp
+// OpPlus, and OpQuest operators. It returns the simple regexp
// equivalent to
//
// Regexp{Op: op, Flags: flags, Sub: {sub}}
//
// under the assumption that sub is already simple, and
-// without first allocating that structure.  If the regexp
+// without first allocating that structure. If the regexp
// to be returned turns out to be equivalent to re, simplify1
// returns re instead.
//
-// Copyright 2011 The Go Authors.  All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
{`a{0,1}`, `a?`},
// The next three are illegible because Simplify inserts (?:)
// parens instead of () parens to avoid creating extra
- // captured subexpressions.  The comments show a version with fewer parens.
+ // captured subexpressions. The comments show a version with fewer parens.
{`(a){0,2}`, `(?:(a)(a)?)?`}, // (aa?)?
{`(a){0,4}`, `(?:(a)(?:(a)(?:(a)(a)?)?)?)?`}, // (a(a(aa?)?)?)?
{`(a){2,6}`, `(a)(a)(?:(a)(?:(a)(?:(a)(a)?)?)?)?`}, // aa(a(a(aa?)?)?)?
// Empty string as a regular expression.
// The empty string must be preserved inside parens in order
// to make submatches work right, so these tests are less
- // interesting than they might otherwise be.  String inserts
+ // interesting than they might otherwise be. String inserts
// explicit (?:) in place of non-parenthesized empty strings,
// to make them easier to spot for other parsers.
{`(a|b|)`, `([a-b]|(?:))`},
// type algorithms - known to compiler
const (
- alg_MEM = iota
+ alg_NOEQ = iota
alg_MEM0
alg_MEM8
alg_MEM16
alg_MEM32
alg_MEM64
alg_MEM128
- alg_NOEQ
- alg_NOEQ0
- alg_NOEQ8
- alg_NOEQ16
- alg_NOEQ32
- alg_NOEQ64
- alg_NOEQ128
alg_STRING
alg_INTER
alg_NILINTER
- alg_SLICE
alg_FLOAT32
alg_FLOAT64
alg_CPLX64
}
// memhash_varlen is defined in assembly because it needs access
-// to the closure.  It appears here to provide an argument
+// to the closure. It appears here to provide an argument
// signature for the assembly routine.
func memhash_varlen(p unsafe.Pointer, h uintptr) uintptr
var algarray = [alg_max]typeAlg{
- alg_MEM: {nil, nil}, // not used
+ alg_NOEQ: {nil, nil},
alg_MEM0: {memhash0, memequal0},
alg_MEM8: {memhash8, memequal8},
alg_MEM16: {memhash16, memequal16},
alg_MEM32: {memhash32, memequal32},
alg_MEM64: {memhash64, memequal64},
alg_MEM128: {memhash128, memequal128},
- alg_NOEQ: {nil, nil},
- alg_NOEQ0: {nil, nil},
- alg_NOEQ8: {nil, nil},
- alg_NOEQ16: {nil, nil},
- alg_NOEQ32: {nil, nil},
- alg_NOEQ64: {nil, nil},
- alg_NOEQ128: {nil, nil},
alg_STRING: {strhash, strequal},
alg_INTER: {interhash, interequal},
alg_NILINTER: {nilinterhash, nilinterequal},
- alg_SLICE: {nil, nil},
alg_FLOAT32: {f32hash, f32equal},
alg_FLOAT64: {f64hash, f64equal},
alg_CPLX64: {c64hash, c64equal},
t := tab._type
fn := t.alg.hash
if fn == nil {
- panic(errorString("hash of unhashable type " + *t._string))
+ panic(errorString("hash of unhashable type " + t._string))
}
if isDirectIface(t) {
return c1 * fn(unsafe.Pointer(&a.data), h^c0)
}
fn := t.alg.hash
if fn == nil {
- panic(errorString("hash of unhashable type " + *t._string))
+ panic(errorString("hash of unhashable type " + t._string))
}
if isDirectIface(t) {
return c1 * fn(unsafe.Pointer(&a.data), h^c0)
}
}
-func memequal(p, q unsafe.Pointer, size uintptr) bool {
- if p == q {
- return true
- }
- return memeq(p, q, size)
-}
-
func memequal0(p, q unsafe.Pointer) bool {
return true
}
}
eq := t.alg.equal
if eq == nil {
- panic(errorString("comparing uncomparable type " + *t._string))
+ panic(errorString("comparing uncomparable type " + t._string))
}
if isDirectIface(t) {
return eq(noescape(unsafe.Pointer(&x.data)), noescape(unsafe.Pointer(&y.data)))
t := xtab._type
eq := t.alg.equal
if eq == nil {
- panic(errorString("comparing uncomparable type " + *t._string))
+ panic(errorString("comparing uncomparable type " + t._string))
}
if isDirectIface(t) {
return eq(noescape(unsafe.Pointer(&x.data)), noescape(unsafe.Pointer(&y.data)))
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// func mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
-// Fn must never return. It should gogo(&g->sched)
+// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall(SB), NOSPLIT, $0-4
MOVL fn+0(FP), DI
RET
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
-// of the G stack. We need to distinguish the routine that
+// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
CALL AX
switch:
- // save our state in g->sched. Pretend to
+ // save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
MOVL $runtime·systemstack_switch(SB), (g_sched+gobuf_pc)(AX)
MOVL SP, (g_sched+gobuf_sp)(AX)
RET
endofpage:
- // address ends in 1111xxxx. Might be up against
+ // address ends in 1111xxxx. Might be up against
// a page boundary, so load ending at last byte.
// Then shift bytes down using pshufb.
MOVOU -32(AX)(BX*1), X1
GLOBL masks<>(SB),RODATA,$256
-// these are arguments to pshufb. They move data down from
+// these are arguments to pshufb. They move data down from
// the high bytes of the register to the low bytes of the register.
// index is how many bytes to move.
DATA shifts<>+0x00(SB)/4, $0x00000000
SETEQ ret+0(FP)
RET
-TEXT runtime·memeq(SB),NOSPLIT,$0-13
+// memequal(p, q unsafe.Pointer, size uintptr) bool
+TEXT runtime·memequal(SB),NOSPLIT,$0-13
MOVL a+0(FP), SI
MOVL b+4(FP), DI
+ CMPL SI, DI
+ JEQ eq
MOVL size+8(FP), BX
LEAL ret+12(FP), AX
JMP runtime·memeqbody(SB)
+eq:
+ MOVB $1, ret+12(FP)
+ RET
// memequal_varlen(a, b unsafe.Pointer) bool
TEXT runtime·memequal_varlen(SB),NOSPLIT,$0-9
MOVL (SI), SI
JMP si_finish
si_high:
- // address ends in 111111xx. Load up to bytes we want, move to correct position.
+ // address ends in 111111xx. Load up to bytes we want, move to correct position.
MOVL -4(SI)(BX*1), SI
SHRL CX, SI
si_finish:
TEXT runtime·prefetchnta(SB),NOSPLIT,$0-4
RET
-// Add a module's moduledata to the linked list of moduledata objects. This
+// Add a module's moduledata to the linked list of moduledata objects. This
// is called from .init_array by a function generated in the linker and so
// follows the platform ABI wrt register preservation -- it only touches AX,
// CX (implicitly) and DX, but it does not follow the ABI wrt arguments:
// func mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
-// Fn must never return. It should gogo(&g->sched)
+// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall(SB), NOSPLIT, $0-8
MOVQ fn+0(FP), DI
RET
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
-// of the G stack. We need to distinguish the routine that
+// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
CALL AX
switch:
- // save our state in g->sched. Pretend to
+ // save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
MOVQ $runtime·systemstack_switch(SB), SI
MOVQ SI, (g_sched+gobuf_pc)(AX)
CALL runtime·cgocallbackg(SB)
MOVQ 0(SP), R8
- // Compute the size of the frame again. FP and SP have
+ // Compute the size of the frame again. FP and SP have
// completely different values here than they did above,
// but only their difference matters.
LEAQ fv+0(FP), AX
RET
endofpage:
- // address ends in 1111xxxx. Might be up against
+ // address ends in 1111xxxx. Might be up against
// a page boundary, so load ending at last byte.
// Then shift bytes down using pshufb.
MOVOU -32(AX)(CX*1), X1
SETEQ ret+0(FP)
RET
-// these are arguments to pshufb. They move data down from
+// these are arguments to pshufb. They move data down from
// the high bytes of the register to the low bytes of the register.
// index is how many bytes to move.
DATA shifts<>+0x00(SB)/8, $0x0000000000000000
DATA shifts<>+0xf8(SB)/8, $0xff0f0e0d0c0b0a09
GLOBL shifts<>(SB),RODATA,$256
-TEXT runtime·memeq(SB),NOSPLIT,$0-25
+// memequal(p, q unsafe.Pointer, size uintptr) bool
+TEXT runtime·memequal(SB),NOSPLIT,$0-25
MOVQ a+0(FP), SI
MOVQ b+8(FP), DI
+ CMPQ SI, DI
+ JEQ eq
MOVQ size+16(FP), BX
LEAQ ret+24(FP), AX
JMP runtime·memeqbody(SB)
+eq:
+ MOVB $1, ret+24(FP)
+ RET
// memequal_varlen(a, b unsafe.Pointer) bool
TEXT runtime·memequal_varlen(SB),NOSPLIT,$0-17
MOVQ (SI), SI
JMP si_finish
si_high:
- // address ends in 11111xxx. Load up to bytes we want, move to correct position.
+ // address ends in 11111xxx. Load up to bytes we want, move to correct position.
MOVQ -8(SI)(BX*1), SI
SHRQ CX, SI
si_finish:
// AL: byte sought
// R8: address to put result
TEXT runtime·indexbytebody(SB),NOSPLIT,$0
- MOVQ SI, DI
-
- CMPQ BX, $16
- JLT small
-
- CMPQ BX, $32
- JA avx2
-no_avx2:
- // round up to first 16-byte boundary
- TESTQ $15, SI
- JZ aligned
- MOVQ SI, CX
- ANDQ $~15, CX
- ADDQ $16, CX
-
- // search the beginning
- SUBQ SI, CX
- REPN; SCASB
- JZ success
-
-// DI is 16-byte aligned; get ready to search using SSE instructions
-aligned:
- // round down to last 16-byte boundary
- MOVQ BX, R11
- ADDQ SI, R11
- ANDQ $~15, R11
-
- // shuffle X0 around so that each byte contains c
+ // Shuffle X0 around so that each byte contains
+ // the character we're looking for.
MOVD AX, X0
PUNPCKLBW X0, X0
PUNPCKLBW X0, X0
PSHUFL $0, X0, X0
- JMP condition
+
+ CMPQ BX, $16
+ JLT small
+ MOVQ SI, DI
+
+ CMPQ BX, $32
+ JA avx2
sse:
- // move the next 16-byte chunk of the buffer into X1
- MOVO (DI), X1
- // compare bytes in X0 to X1
- PCMPEQB X0, X1
- // take the top bit of each byte in X1 and put the result in DX
+ LEAQ -16(SI)(BX*1), AX // AX = address of last 16 bytes
+ JMP sseloopentry
+
+sseloop:
+ // Move the next 16-byte chunk of the data into X1.
+ MOVOU (DI), X1
+ // Compare bytes in X0 to X1.
+ PCMPEQB X0, X1
+ // Take the top bit of each byte in X1 and put the result in DX.
PMOVMSKB X1, DX
- TESTL DX, DX
- JNZ ssesuccess
- ADDQ $16, DI
+ // Find first set bit, if any.
+ BSFL DX, DX
+ JNZ ssesuccess
+ // Advance to next block.
+ ADDQ $16, DI
+sseloopentry:
+ CMPQ DI, AX
+ JB sseloop
-condition:
- CMPQ DI, R11
- JLT sse
-
- // search the end
- MOVQ SI, CX
- ADDQ BX, CX
- SUBQ R11, CX
- // if CX == 0, the zero flag will be set and we'll end up
- // returning a false success
- JZ failure
- REPN; SCASB
- JZ success
+ // Search the last 16-byte chunk. This chunk may overlap with the
+ // chunks we've already searched, but that's ok.
+ MOVQ AX, DI
+ MOVOU (AX), X1
+ PCMPEQB X0, X1
+ PMOVMSKB X1, DX
+ BSFL DX, DX
+ JNZ ssesuccess
failure:
MOVQ $-1, (R8)
RET
+// We've found a chunk containing the byte.
+// The chunk was loaded from DI.
+// The index of the matching byte in the chunk is DX.
+// The start of the data is SI.
+ssesuccess:
+ SUBQ SI, DI // Compute offset of chunk within data.
+ ADDQ DX, DI // Add offset of byte within chunk.
+ MOVQ DI, (R8)
+ RET
+
// handle for lengths < 16
small:
- MOVQ BX, CX
- REPN; SCASB
- JZ success
- MOVQ $-1, (R8)
+ TESTQ BX, BX
+ JEQ failure
+
+ // Check if we'll load across a page boundary.
+ LEAQ 16(SI), AX
+ TESTW $0xff0, AX
+ JEQ endofpage
+
+ MOVOU (SI), X1 // Load data
+ PCMPEQB X0, X1 // Compare target byte with each byte in data.
+ PMOVMSKB X1, DX // Move result bits to integer register.
+ BSFL DX, DX // Find first set bit.
+ JZ failure // No set bit, failure.
+ CMPL DX, BX
+ JAE failure // Match is past end of data.
+ MOVQ DX, (R8)
+ RET
+
+endofpage:
+ MOVOU -16(SI)(BX*1), X1 // Load data into the high end of X1.
+ PCMPEQB X0, X1 // Compare target byte with each byte in data.
+ PMOVMSKB X1, DX // Move result bits to integer register.
+ MOVL BX, CX
+ SHLL CX, DX
+ SHRL $16, DX // Shift desired bits down to bottom of register.
+ BSFL DX, DX // Find first set bit.
+ JZ failure // No set bit, failure.
+ MOVQ DX, (R8)
RET
avx2:
CMPB runtime·support_avx2(SB), $1
- JNE no_avx2
+ JNE sse
MOVD AX, X0
LEAQ -32(SI)(BX*1), R11
VPBROADCASTB X0, Y1
VZEROUPPER
RET
-// we've found the chunk containing the byte
-// now just figure out which specific byte it is
-ssesuccess:
- // get the index of the least significant set bit
- BSFW DX, DX
- SUBQ SI, DI
- ADDQ DI, DX
- MOVQ DX, (R8)
- RET
-
-success:
- SUBQ SI, DI
- SUBL $1, DI
- MOVQ DI, (R8)
- RET
-
TEXT bytes·Equal(SB),NOSPLIT,$0-49
MOVQ a_len+8(FP), BX
MOVQ b_len+32(FP), CX
// func mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
-// Fn must never return. It should gogo(&g->sched)
+// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall(SB), NOSPLIT, $0-4
MOVL fn+0(FP), DI
RET
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
-// of the G stack. We need to distinguish the routine that
+// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
CALL AX
switch:
- // save our state in g->sched. Pretend to
+ // save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
MOVL $runtime·systemstack_switch(SB), SI
MOVL SI, (g_sched+gobuf_pc)(AX)
REP
STOSB
// Note: we zero only 4 bytes at a time so that the tail is at most
- // 3 bytes. That guarantees that we aren't zeroing pointers with STOSB.
+ // 3 bytes. That guarantees that we aren't zeroing pointers with STOSB.
// See issue 13160.
RET
MOVL AX, ret+16(FP)
RET
-TEXT runtime·memeq(SB),NOSPLIT,$0-17
+// memequal(p, q unsafe.Pointer, size uintptr) bool
+TEXT runtime·memequal(SB),NOSPLIT,$0-17
MOVL a+0(FP), SI
MOVL b+4(FP), DI
+ CMPL SI, DI
+ JEQ eq
MOVL size+8(FP), BX
CALL runtime·memeqbody(SB)
MOVB AX, ret+16(FP)
RET
+eq:
+ MOVB $1, ret+16(FP)
+ RET
// memequal_varlen(a, b unsafe.Pointer) bool
TEXT runtime·memequal_varlen(SB),NOSPLIT,$0-9
MOVQ (SI), SI
JMP si_finish
si_high:
- // address ends in 11111xxx. Load up to bytes we want, move to correct position.
+ // address ends in 11111xxx. Load up to bytes we want, move to correct position.
MOVQ BX, DX
ADDQ SI, DX
MOVQ -8(DX), SI
// func mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
-// Fn must never return. It should gogo(&g->sched)
+// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall(SB),NOSPLIT,$-4-4
// Save caller state in g->sched.
RET
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
-// of the G stack. We need to distinguish the routine that
+// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
BL (R0)
switch:
- // save our state in g->sched. Pretend to
+ // save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
MOVW $runtime·systemstack_switch(SB), R3
#ifdef GOOS_nacl
// armPublicationBarrier is a native store/store barrier for ARMv7+.
// On earlier ARM revisions, armPublicationBarrier is a no-op.
// This will not work on SMP ARMv6 machines, if any are in use.
-// To implement publiationBarrier in sys_$GOOS_arm.s using the native
+// To implement publicationBarrier in sys_$GOOS_arm.s using the native
// instructions, use:
//
// TEXT ·publicationBarrier(SB),NOSPLIT,$-4-0
MOVW R0, ret+8(FP)
RET
-TEXT runtime·memeq(SB),NOSPLIT,$-4-13
+// memequal(p, q unsafe.Pointer, size uintptr) bool
+TEXT runtime·memequal(SB),NOSPLIT,$-4-13
MOVW a+0(FP), R1
MOVW b+4(FP), R2
MOVW size+8(FP), R3
ADD R1, R3, R6
MOVW $1, R0
MOVB R0, ret+12(FP)
+ CMP R1, R2
+ RET.EQ
loop:
CMP R1, R6
RET.EQ
MOVW R0, 4(R13)
MOVW R1, 8(R13)
MOVW R2, 12(R13)
- BL runtime·memeq(SB)
+ BL runtime·memequal(SB)
MOVB 16(R13), R0
MOVB R0, ret+8(FP)
RET
MOVB R8, v+16(FP)
RET
-// TODO: share code with memeq?
+// TODO: share code with memequal?
TEXT bytes·Equal(SB),NOSPLIT,$0-25
MOVW a_len+4(FP), R1
MOVW b_len+16(FP), R3
// Called from cgo wrappers, this function returns g->m->curg.stack.hi.
// Must obey the gcc calling convention.
TEXT _cgo_topofstack(SB),NOSPLIT,$8
- // R11 and g register are clobbered by load_g. They are
+ // R11 and g register are clobbered by load_g. They are
// callee-save in the gcc calling convention, so save them here.
MOVW R11, saveR11-4(SP)
MOVW g, saveG-8(SP)
// void mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
-// Fn must never return. It should gogo(&g->sched)
+// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall(SB), NOSPLIT, $-8-8
// Save caller state in g->sched
B runtime·badmcall2(SB)
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
-// of the G stack. We need to distinguish the routine that
+// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
BL (R3)
switch:
- // save our state in g->sched. Pretend to
+ // save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
MOVD $runtime·systemstack_switch(SB), R6
ADD $8, R6 // get past prologue
BL (R1)
MOVD R0, R9
- // Restore g, stack pointer. R0 is errno, so don't touch it
+ // Restore g, stack pointer. R0 is errno, so don't touch it
MOVD 0(RSP), g
BL runtime·save_g(SB)
MOVD (g_stack+stack_hi)(g), R5
MOVD R3, ret+16(FP)
RET
-TEXT runtime·memeq(SB),NOSPLIT,$-8-25
+// memequal(p, q unsafe.Pointer, size uintptr) bool
+TEXT runtime·memequal(SB),NOSPLIT,$-8-25
MOVD a+0(FP), R1
MOVD b+8(FP), R2
MOVD size+16(FP), R3
ADD R1, R3, R6
MOVD $1, R0
MOVB R0, ret+24(FP)
+ CMP R1, R2
+ BEQ done
loop:
CMP R1, R6
BEQ done
MOVD R3, 8(RSP)
MOVD R4, 16(RSP)
MOVD R5, 24(RSP)
- BL runtime·memeq(SB)
+ BL runtime·memequal(SB)
MOVBU 32(RSP), R3
MOVB R3, ret+16(FP)
RET
MOVD R0, ret+24(FP)
RET
-// TODO: share code with memeq?
+// TODO: share code with memequal?
TEXT bytes·Equal(SB),NOSPLIT,$0-49
MOVD a_len+8(FP), R1
MOVD b_len+32(FP), R3
// void mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
-// Fn must never return. It should gogo(&g->sched)
+// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall(SB), NOSPLIT, $-8-8
// Save caller state in g->sched
JMP runtime·badmcall2(SB)
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
-// of the G stack. We need to distinguish the routine that
+// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
JAL (R4)
switch:
- // save our state in g->sched. Pretend to
+ // save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
MOVV $runtime·systemstack_switch(SB), R4
ADDV $8, R4 // get past prologue
TEXT runtime·aeshashstr(SB),NOSPLIT,$-8-0
MOVW (R0), R1
-TEXT runtime·memeq(SB),NOSPLIT,$-8-25
+// memequal(p, q unsafe.Pointer, size uintptr) bool
+TEXT runtime·memequal(SB),NOSPLIT,$-8-25
MOVV a+0(FP), R1
MOVV b+8(FP), R2
+ BEQ R1, R2, eq
MOVV size+16(FP), R3
ADDV R1, R3, R4
loop:
MOVB R0, ret+24(FP)
RET
+eq:
+ MOVV $1, R1
+ MOVB R1, ret+24(FP)
+ RET
// memequal_varlen(a, b unsafe.Pointer) bool
TEXT runtime·memequal_varlen(SB),NOSPLIT,$40-17
MOVV R1, 8(R29)
MOVV R2, 16(R29)
MOVV R3, 24(R29)
- JAL runtime·memeq(SB)
+ JAL runtime·memequal(SB)
MOVBU 32(R29), R1
MOVB R1, ret+16(FP)
RET
MOVB R0, ret+32(FP)
RET
-// TODO: share code with memeq?
+// TODO: share code with memequal?
TEXT bytes·Equal(SB),NOSPLIT,$0-49
MOVV a_len+8(FP), R3
MOVV b_len+32(FP), R4
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +---------------------+ <- R1
//
// So a function that sets up a stack frame at all uses as least FIXED_FRAME
-// bytes of stack. This mostly affects assembly that calls other functions
+// bytes of stack. This mostly affects assembly that calls other functions
// with arguments (the arguments should be stored at FIXED_FRAME+0(R1),
// FIXED_FRAME+8(R1) etc) and some other low-level places.
//
// void mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
-// Fn must never return. It should gogo(&g->sched)
+// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall(SB), NOSPLIT|NOFRAME, $0-8
// Save caller state in g->sched
BR runtime·badmcall2(SB)
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
-// of the G stack. We need to distinguish the routine that
+// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
BL (CTR)
switch:
- // save our state in g->sched. Pretend to
+ // save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
MOVD $runtime·systemstack_switch(SB), R6
ADD $16, R6 // get past prologue (including r2-setting instructions when they're there)
TEXT runtime·aeshashstr(SB),NOSPLIT|NOFRAME,$0-0
MOVW (R0), R1
-TEXT runtime·memeq(SB),NOSPLIT|NOFRAME,$0-25
+// memequal(p, q unsafe.Pointer, size uintptr) bool
+TEXT runtime·memequal(SB),NOSPLIT|NOFRAME,$0-25
MOVD a+0(FP), R3
MOVD b+8(FP), R4
+ CMP R3, R4
+ BEQ eq
MOVD size+16(FP), R5
SUB $1, R3
SUB $1, R4
MOVB R0, ret+24(FP)
RET
+eq:
+ MOVD $1, R3
+ MOVB R3, ret+24(FP)
+ RET
// memequal_varlen(a, b unsafe.Pointer) bool
TEXT runtime·memequal_varlen(SB),NOSPLIT,$40-17
MOVD R3, FIXED_FRAME+0(R1)
MOVD R4, FIXED_FRAME+8(R1)
MOVD R5, FIXED_FRAME+16(R1)
- BL runtime·memeq(SB)
+ BL runtime·memequal(SB)
MOVBZ FIXED_FRAME+24(R1), R3
MOVB R3, ret+16(FP)
RET
MOVB R0, ret+32(FP)
RET
-// TODO: share code with memeq?
+// TODO: share code with memequal?
TEXT bytes·Equal(SB),NOSPLIT,$0-49
MOVD a_len+8(FP), R3
MOVD b_len+32(FP), R4
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package runtime_test
+
+import (
+ "runtime"
+ "strings"
+ "testing"
+)
+
+func f1(pan bool) []uintptr {
+ return f2(pan) // line 14
+}
+
+func f2(pan bool) []uintptr {
+ return f3(pan) // line 18
+}
+
+func f3(pan bool) []uintptr {
+ if pan {
+ panic("f3") // line 23
+ }
+ ret := make([]uintptr, 20)
+ return ret[:runtime.Callers(0, ret)] // line 26
+}
+
+func testCallers(t *testing.T, pcs []uintptr, pan bool) {
+ m := make(map[string]int, len(pcs))
+ frames := runtime.CallersFrames(pcs)
+ for {
+ frame, more := frames.Next()
+ if frame.Function != "" {
+ m[frame.Function] = frame.Line
+ }
+ if !more {
+ break
+ }
+ }
+
+ var seen []string
+ for k := range m {
+ seen = append(seen, k)
+ }
+ t.Logf("functions seen: %s", strings.Join(seen, " "))
+
+ var f3Line int
+ if pan {
+ f3Line = 23
+ } else {
+ f3Line = 26
+ }
+ want := []struct {
+ name string
+ line int
+ }{
+ {"f1", 14},
+ {"f2", 18},
+ {"f3", f3Line},
+ }
+ for _, w := range want {
+ if got := m["runtime_test."+w.name]; got != w.line {
+ t.Errorf("%s is line %d, want %d", w.name, got, w.line)
+ }
+ }
+}
+
+func TestCallers(t *testing.T) {
+ testCallers(t, f1(false), false)
+}
+
+func TestCallersPanic(t *testing.T) {
+ defer func() {
+ if r := recover(); r == nil {
+ t.Fatal("did not panic")
+ }
+ pcs := make([]uintptr, 20)
+ pcs = pcs[:runtime.Callers(0, pcs)]
+ testCallers(t, pcs, true)
+ }()
+ f1(true)
+}
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:cgo_export_static crosscall2
//go:cgo_export_dynamic crosscall2
-// Panic. The argument is converted into a Go string.
+// Panic. The argument is converted into a Go string.
// Call like this in code compiled with gcc:
// struct { const char *p; } a;
// Creates a new system thread without updating any Go state.
//
// This method is invoked during shared library loading to create a new OS
-// thread to perform the runtime initialization. This method is similar to
+// thread to perform the runtime initialization. This method is similar to
// _cgo_sys_thread_start except that it doesn't update any Go state.
//go:cgo_import_static x_cgo_sys_thread_create
var x_cgo_sys_thread_create byte
var _cgo_sys_thread_create = &x_cgo_sys_thread_create
-// Notifies that the runtime has been intialized.
+// Notifies that the runtime has been initialized.
//
// We currently block at every CGO entry point (via _cgo_wait_runtime_init_done)
// to ensure that the runtime has been initialized before the CGO call is
-// executed. This is necessary for shared libraries where we kickoff runtime
+// executed. This is necessary for shared libraries where we kick off runtime
// initialization in a separate thread and return without waiting for this
// thread to complete the init.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
setg_gcc((void*)ts.g);
// On DragonFly, a new thread inherits the signal stack of the
- // creating thread. That confuses minit, so we remove that
- // signal stack here before calling the regular mstart. It's
+ // creating thread. That confuses minit, so we remove that
+ // signal stack here before calling the regular mstart. It's
// a bit baroque to remove a signal stack here only to add one
// in minit, but it's a simple change that keeps DragonFly
- // working like other OS's. At this point all signals are
+ // working like other OS's. At this point all signals are
// blocked, so there is no race.
memset(&ss, 0, sizeof ss);
ss.ss_flags = SS_DISABLE;
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Not sure why the memset is necessary here,
// but without it, we get a bogus stack size
- // out of pthread_attr_getstacksize. C'est la Linux.
+ // out of pthread_attr_getstacksize. C'est la Linux.
memset(&attr, 0, sizeof attr);
pthread_attr_init(&attr);
size = 0;
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Not sure why the memset is necessary here,
// but without it, we get a bogus stack size
- // out of pthread_attr_getstacksize. C'est la Linux.
+ // out of pthread_attr_getstacksize. C'est la Linux.
memset(&attr, 0, sizeof attr);
pthread_attr_init(&attr);
size = 0;
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Not sure why the memset is necessary here,
// but without it, we get a bogus stack size
- // out of pthread_attr_getstacksize. C'est la Linux.
+ // out of pthread_attr_getstacksize. C'est la Linux.
memset(&attr, 0, sizeof attr);
pthread_attr_init(&attr);
size = 0;
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Not sure why the memset is necessary here,
// but without it, we get a bogus stack size
- // out of pthread_attr_getstacksize. C'est la Linux.
+ // out of pthread_attr_getstacksize. C'est la Linux.
memset(&attr, 0, sizeof attr);
pthread_attr_init(&attr);
size = 0;
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
setg_gcc((void*)ts.g);
// On NetBSD, a new thread inherits the signal stack of the
- // creating thread. That confuses minit, so we remove that
- // signal stack here before calling the regular mstart. It's
+ // creating thread. That confuses minit, so we remove that
+ // signal stack here before calling the regular mstart. It's
// a bit baroque to remove a signal stack here only to add one
// in minit, but it's a simple change that keeps NetBSD
- // working like other OS's. At this point all signals are
+ // working like other OS's. At this point all signals are
// blocked, so there is no race.
memset(&ss, 0, sizeof ss);
ss.ss_flags = SS_DISABLE;
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
setg_gcc((void*)ts.g);
// On NetBSD, a new thread inherits the signal stack of the
- // creating thread. That confuses minit, so we remove that
- // signal stack here before calling the regular mstart. It's
+ // creating thread. That confuses minit, so we remove that
+ // signal stack here before calling the regular mstart. It's
// a bit baroque to remove a signal stack here only to add one
// in minit, but it's a simple change that keeps NetBSD
- // working like other OS's. At this point all signals are
+ // working like other OS's. At this point all signals are
// blocked, so there is no race.
memset(&ss, 0, sizeof ss);
ss.ss_flags = SS_DISABLE;
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
free(v);
// On NetBSD, a new thread inherits the signal stack of the
- // creating thread. That confuses minit, so we remove that
- // signal stack here before calling the regular mstart. It's
+ // creating thread. That confuses minit, so we remove that
+ // signal stack here before calling the regular mstart. It's
// a bit baroque to remove a signal stack here only to add one
// in minit, but it's a simple change that keeps NetBSD
- // working like other OS's. At this point all signals are
+ // working like other OS's. At this point all signals are
// blocked, so there is no race.
memset(&ss, 0, sizeof ss);
ss.ss_flags = SS_DISABLE;
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The runtime package contains an uninitialized definition
-// for runtime·iscgo. Override it to tell the runtime we're here.
+// for runtime·iscgo. Override it to tell the runtime we're here.
// There are various function pointers that should be set too,
// but those depend on dynamic linker magic to get initialized
-// correctly, and sometimes they break. This variable is a
+// correctly, and sometimes they break. This variable is a
// backup: it depends only on old C style static linking rules.
package cgo
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
import _ "unsafe"
// When using cgo, call the C library for mmap, so that we call into
-// any sanitizer interceptors. This supports using the memory
-// sanitizer with Go programs. The memory sanitizer only applies to
+// any sanitizer interceptors. This supports using the memory
+// sanitizer with Go programs. The memory sanitizer only applies to
// C/C++ code; this permits that code to see the Go code as normal
// program addresses that have been initialized.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Support for memory sanitizer. See runtime/cgo/mmap.go.
+// Support for memory sanitizer. See runtime/cgo/mmap.go.
// +build linux,amd64
return sysMmap(addr, n, prot, flags, fd, off)
}
-// sysMmap calls the mmap system call. It is implemented in assembly.
+// sysMmap calls the mmap system call. It is implemented in assembly.
func sysMmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) unsafe.Pointer
-// cgoMmap calls the mmap function in the runtime/cgo package on the
+// callCgoMmap calls the mmap function in the runtime/cgo package
-// using the GCC calling convention. It is implemented in assembly.
+// using the GCC calling convention. It is implemented in assembly.
func callCgoMmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) uintptr
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// and then unlocks g from m.
//
// The above description skipped over the possibility of the gcc-compiled
-// function f calling back into Go. If that happens, we continue down
+// function f calling back into Go. If that happens, we continue down
// the rabbit hole during the execution of f.
//
// To make it possible for gcc-compiled C code to call a Go function p.GoF,
// GoF calls crosscall2(_cgoexp_GoF, frame, framesize). Crosscall2
// (in cgo/gcc_$GOARCH.S, a gcc-compiled assembly file) is a two-argument
// adapter from the gcc function call ABI to the 6c function call ABI.
-// It is called from gcc to call 6c functions. In this case it calls
+// It is called from gcc to call 6c functions. In this case it calls
// _cgoexp_GoF(frame, framesize), still running on m->g0's stack
-// and outside the $GOMAXPROCS limit. Thus, this code cannot yet
+// and outside the $GOMAXPROCS limit. Thus, this code cannot yet
// call arbitrary Go code directly and must be careful not to allocate
// memory or use up m->g0's stack.
//
// We want to detect all cases where a program that does not use
// unsafe makes a cgo call passing a Go pointer to memory that
-// contains a Go pointer. Here a Go pointer is defined as a pointer
-// to memory allocated by the Go runtime. Programs that use unsafe
+// contains a Go pointer. Here a Go pointer is defined as a pointer
+// to memory allocated by the Go runtime. Programs that use unsafe
// can evade this restriction easily, so we don't try to catch them.
// The cgo program will rewrite all possibly bad pointer arguments to
// call cgoCheckPointer, where we can catch cases of a Go pointer
// Complicating matters, taking the address of a slice or array
// element permits the C program to access all elements of the slice
-// or array. In that case we will see a pointer to a single element,
+// or array. In that case we will see a pointer to a single element,
// but we need to check the entire data structure.
// The cgoCheckPointer call takes additional arguments indicating that
-// it was called on an address expression. An additional argument of
-// true means that it only needs to check a single element. An
+// it was called on an address expression. An additional argument of
+// true means that it only needs to check a single element. An
// additional argument of a slice or array means that it needs to
-// check the entire slice/array, but nothing else. Otherwise, the
+// check the entire slice/array, but nothing else. Otherwise, the
// pointer could be anything, and we check the entire heap object,
// which is conservative but safe.
// When and if we implement a moving garbage collector,
// cgoCheckPointer will pin the pointer for the duration of the cgo
// call. (This is necessary but not sufficient; the cgo program will
-// also have to change to pin Go pointers that can not point to Go
+// also have to change to pin Go pointers that cannot point to Go
// pointers.)
// cgoCheckPointer checks if the argument contains a Go pointer that
-// points to a Go pointer, and panics if it does. It returns the pointer.
+// points to a Go pointer, and panics if it does. It returns the pointer.
func cgoCheckPointer(ptr interface{}, args ...interface{}) interface{} {
if debug.cgocheck == 0 {
return ptr
const cgoCheckPointerFail = "cgo argument has Go pointer to Go pointer"
const cgoResultFail = "cgo result has Go pointer"
-// cgoCheckArg is the real work of cgoCheckPointer. The argument p
+// cgoCheckArg is the real work of cgoCheckPointer. The argument p
// is either a pointer to the value (of type t), or the value itself,
-// depending on indir. The top parameter is whether we are at the top
+// depending on indir. The top parameter is whether we are at the top
// level, where Go pointers are allowed.
func cgoCheckArg(t *_type, p unsafe.Pointer, indir, top bool, msg string) {
if t.kind&kindNoPointers != 0 {
}
case kindChan, kindMap:
// These types contain internal pointers that will
- // always be allocated in the Go heap. It's never OK
+ // always be allocated in the Go heap. It's never OK
// to pass them to C.
panic(errorString(msg))
case kindFunc:
return
}
// A type known at compile time is OK since it's
- // constant. A type not known at compile time will be
+ // constant. A type not known at compile time will be
// in the heap and will not be OK.
if inheap(uintptr(unsafe.Pointer(it))) {
panic(errorString(msg))
if !top {
panic(errorString(msg))
}
+ if st.elem.kind&kindNoPointers != 0 {
+ return
+ }
for i := 0; i < s.cap; i++ {
cgoCheckArg(st.elem, p, true, false, msg)
p = add(p, st.elem.size)
}
// cgoCheckUnknownPointer is called for an arbitrary pointer into Go
-// memory. It checks whether that Go memory contains any other
-// pointer into Go memory. If it does, we panic.
+// memory. It checks whether that Go memory contains any other
+// pointer into Go memory. If it does, we panic.
// The return values are unused but useful to see in panic tracebacks.
func cgoCheckUnknownPointer(p unsafe.Pointer, msg string) (base, i uintptr) {
if cgoInRange(p, mheap_.arena_start, mheap_.arena_used) {
}
// cgoIsGoPointer returns whether the pointer is a Go pointer--a
-// pointer to Go memory. We only care about Go memory that might
+// pointer to Go memory. We only care about Go memory that might
// contain pointers.
//go:nosplit
//go:nowritebarrierrec
}
// cgoCheckResult is called to check the result parameter of an
-// exported Go function. It panics if the result is or contains a Go
+// exported Go function. It panics if the result is or contains a Go
// pointer.
func cgoCheckResult(val interface{}) {
if debug.cgocheck == 0 {
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// cgoCheckTypedBlock checks the block of memory at src, for up to size bytes,
-// and throws if it finds a Go pointer. The type of the memory is typ,
+// and throws if it finds a Go pointer. The type of the memory is typ,
// and src is off bytes into that type.
//go:nosplit
//go:nowritebarrier
return
}
- // The type has a GC program. Try to find GC bits somewhere else.
+ // The type has a GC program. Try to find GC bits somewhere else.
for datap := &firstmoduledata; datap != nil; datap = datap.next {
if cgoInRange(src, datap.data, datap.edata) {
doff := uintptr(src) - datap.data
hbits := heapBitsForAddr(uintptr(src))
for i := uintptr(0); i < off+size; i += sys.PtrSize {
bits := hbits.bits()
- if bits != 0 {
- println(i, bits)
- }
if i >= off && bits&bitPointer != 0 {
v := *(*unsafe.Pointer)(add(src, i))
if cgoIsGoPointer(v) {
}
// cgoCheckBits checks the block of memory at src, for up to size
-// bytes, and throws if it finds a Go pointer. The gcbits mark each
-// pointer value. The src pointer is off bytes into the gcbits.
+// bytes, and throws if it finds a Go pointer. The gcbits mark each
+// pointer value. The src pointer is off bytes into the gcbits.
//go:nosplit
//go:nowritebarrier
func cgoCheckBits(src unsafe.Pointer, gcbits *byte, off, size uintptr) {
// fall back to look for pointers in src using the type information.
// We only use this when looking at a value on the stack when the type
// uses a GC program, because otherwise it's more efficient to use the
-// GC bits. This is called on the system stack.
+// GC bits. This is called on the system stack.
//go:nowritebarrier
//go:systemstack
func cgoCheckUsingType(typ *_type, src unsafe.Pointer, off, size uintptr) {
}
if c.qcount < c.dataqsiz {
- // Space is available in the channel buffer. Enqueue the element to send.
+ // Space is available in the channel buffer. Enqueue the element to send.
qp := chanbuf(c, c.sendx)
if raceenabled {
raceacquire(qp)
return false
}
- // Block on the channel. Some receiver will complete our operation for us.
+ // Block on the channel. Some receiver will complete our operation for us.
gp := getg()
mysg := acquireSudog()
mysg.releasetime = 0
if t0 != 0 {
mysg.releasetime = -1
}
+ // No stack splits between assigning elem and enqueuing mysg
+ // on gp.waiting where copystack can find it.
mysg.elem = ep
mysg.waitlink = nil
mysg.g = gp
racesync(c, sg)
} else {
// Pretend we go through the buffer, even though
- // we copy directly. Note that we need to increment
+ // we copy directly. Note that we need to increment
// the head/tail locations only when raceenabled.
qp := chanbuf(c, c.recvx)
raceacquire(qp)
}
if sg := c.sendq.dequeue(); sg != nil {
- // Found a waiting sender. If buffer is size 0, receive value
- // directly from sender. Otherwise, recieve from head of queue
+ // Found a waiting sender. If buffer is size 0, receive value
+ // directly from sender. Otherwise, receive from head of queue
// and add sender's value to the tail of the queue (both map to
// the same buffer slot because the queue is full).
recv(c, sg, ep, func() { unlock(&c.lock) })
if t0 != 0 {
mysg.releasetime = -1
}
+ // No stack splits between assigning elem and enqueuing mysg
+ // on gp.waiting where copystack can find it.
mysg.elem = ep
mysg.waitlink = nil
gp.waiting = mysg
typedmemmove(c.elemtype, ep, sg.elem)
}
} else {
- // Queue is full. Take the item at the
- // head of the queue. Make the sender enqueue
- // its item at the tail of the queue. Since the
+ // Queue is full. Take the item at the
+ // head of the queue. Make the sender enqueue
+ // its item at the tail of the queue. Since the
// queue is full, those are both the same slot.
qp := chanbuf(c, c.recvx)
if raceenabled {
}
e <- 9
}()
- time.Sleep(time.Millisecond) // make sure goroutine A gets qeueued first on c
+ time.Sleep(time.Millisecond) // make sure goroutine A gets queued first on c
// goroutine B
go func() {
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package runtime
// Compiler is the name of the compiler toolchain that built the
-// running binary. Known toolchains are:
+// running binary. Known toolchains are:
//
// gc Also known as cmd/compile.
// gccgo The gccgo front end, part of the GCC compiler suite.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// writes to an operating system file.
//
// The signal handler for the profiling clock tick adds a new stack trace
-// to a hash table tracking counts for recent traces. Most clock ticks
-// hit in the cache. In the event of a cache miss, an entry must be
+// to a hash table tracking counts for recent traces. Most clock ticks
+// hit in the cache. In the event of a cache miss, an entry must be
// evicted from the hash table, copied to a log that will eventually be
-// written as profile data. The google-perftools code flushed the
-// log itself during the signal handler. This code cannot do that, because
+// written as profile data. The google-perftools code flushed the
+// log itself during the signal handler. This code cannot do that, because
// the io.Writer might block or need system calls or locks that are not
-// safe to use from within the signal handler. Instead, we split the log
+// safe to use from within the signal handler. Instead, we split the log
// into two halves and let the signal handler fill one half while a goroutine
-// is writing out the other half. When the signal handler fills its half, it
-// offers to swap with the goroutine. If the writer is not done with its half,
+// is writing out the other half. When the signal handler fills its half, it
+// offers to swap with the goroutine. If the writer is not done with its half,
// we lose the stack trace for this clock tick (and record that loss).
// The goroutine interacts with the signal handler by calling getprofile() to
// get the next log piece to write, implicitly handing back the last log
// piece it obtained.
//
// The state of this dance between the signal handler and the goroutine
-// is encoded in the Profile.handoff field. If handoff == 0, then the goroutine
+// is encoded in the Profile.handoff field. If handoff == 0, then the goroutine
// is not using either log half and is waiting (or will soon be waiting) for
// a new piece by calling notesleep(&p.wait). If the signal handler
// changes handoff from 0 to non-zero, it must call notewakeup(&p.wait)
-// to wake the goroutine. The value indicates the number of entries in the
-// log half being handed off. The goroutine leaves the non-zero value in
+// to wake the goroutine. The value indicates the number of entries in the
+// log half being handed off. The goroutine leaves the non-zero value in
// place until it has finished processing the log half and then flips the number
-// back to zero. Setting the high bit in handoff means that the profiling is over,
+// back to zero. Setting the high bit in handoff means that the profiling is over,
// and the goroutine is now in charge of flushing the data left in the hash table
// to the log and returning that data.
//
// then the signal handler owns it and can change it to non-zero.
// If handoff != 0 then the goroutine owns it and can change it to zero.
// If that were the end of the story then we would not need to manipulate
-// handoff using atomic operations. The operations are needed, however,
+// handoff using atomic operations. The operations are needed, however,
// in order to let the log closer set the high bit to indicate "EOF" safely
// in the situation when normally the goroutine "owns" handoff.
// It is called from signal handlers and other limited environments
// and cannot allocate memory or acquire locks that might be
// held at the time of the signal, nor can it use substantial amounts
-// of stack. It is allowed to call evict.
+// of stack. It is allowed to call evict.
func (p *cpuProfile) add(pc []uintptr) {
if len(pc) > maxCPUProfStack {
pc = pc[:maxCPUProfStack]
}
if e.count > 0 {
if !p.evict(e) {
- // Could not evict entry. Record lost stack.
+ // Could not evict entry. Record lost stack.
p.lost++
return
}
// evict copies the given entry's data into the log, so that
// the entry can be reused. evict is called from add, which
// is called from the profiling signal handler, so it must not
-// allocate memory or block. It is safe to call flushlog.
+// allocate memory or block. It is safe to call flushlog.
// evict returns true if the entry was copied to the log,
// false if there was no room available.
func (p *cpuProfile) evict(e *cpuprofEntry) bool {
// flushlog tries to flush the current log and switch to the other one.
// flushlog is called from evict, called from add, called from the signal handler,
-// so it cannot allocate memory or block. It can try to swap logs with
+// so it cannot allocate memory or block. It can try to swap logs with
// the writing goroutine, as explained in the comment at the top of this file.
func (p *cpuProfile) flushlog() bool {
if !atomic.Cas(&p.handoff, 0, uint32(p.nlog)) {
}
// getprofile blocks until the next block of profiling data is available
-// and returns it as a []byte. It is called from the writing goroutine.
+// and returns it as a []byte. It is called from the writing goroutine.
func (p *cpuProfile) getprofile() []byte {
if p == nil {
return nil
}
// In flush mode.
- // Add is no longer being called. We own the log.
+ // Add is no longer being called. We own the log.
// Also, p.handoff is non-zero, so flushlog will return false.
// Evict the hash table into the log and return it.
Flush:
for j := range b.entry {
e := &b.entry[j]
if e.count > 0 && !p.evict(e) {
- // Filled the log. Stop the loop and return what we've got.
+ // Filled the log. Stop the loop and return what we've got.
break Flush
}
}
return uintptrBytes(eod[:])
}
- // Finally done. Clean up and return nil.
+ // Finally done. Clean up and return nil.
p.flushing = false
if !atomic.Cas(&p.handoff, p.handoff, 0) {
print("runtime: profile flush racing with something\n")
}
// CPUProfile returns the next chunk of binary CPU profiling stack trace data,
-// blocking until data is available. If profiling is turned off and all the profile
+// blocking until data is available. If profiling is turned off and all the profile
// data accumulated while it was on has been returned, CPUProfile returns nil.
// The caller must save the returned data before calling CPUProfile again.
//
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package runtime
// careful: cputicks is not guaranteed to be monotonic! In particular, we have
-// noticed drift between cpus on certain os/arch combinations. See issue 8976.
+// noticed drift between cpus on certain os/arch combinations. See issue 8976.
func cputicks() int64
package runtime_test
import (
+ "fmt"
+ "internal/testenv"
"os/exec"
"runtime"
"strings"
"testing"
+ "time"
)
func TestCgoCrashHandler(t *testing.T) {
t.Errorf("expected %q, got %v", want, got)
}
}
+
+// Test for issue 14387.
+// Test that the program that doesn't need any cgo pointer checking
+// takes about the same amount of time with it as without it.
+func TestCgoCheckBytes(t *testing.T) {
+ // Make sure we don't count the build time as part of the run time.
+ testenv.MustHaveGoBuild(t)
+ exe, err := buildTestProg(t, "testprogcgo")
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ // Try it 10 times to avoid flakiness.
+ const tries = 10
+ var tot1, tot2 time.Duration
+ for i := 0; i < tries; i++ {
+ cmd := testEnv(exec.Command(exe, "CgoCheckBytes"))
+ cmd.Env = append(cmd.Env, "GODEBUG=cgocheck=0", fmt.Sprintf("GO_CGOCHECKBYTES_TRY=%d", i))
+
+ start := time.Now()
+ cmd.Run()
+ d1 := time.Since(start)
+
+ cmd = testEnv(exec.Command(exe, "CgoCheckBytes"))
+ cmd.Env = append(cmd.Env, fmt.Sprintf("GO_CGOCHECKBYTES_TRY=%d", i))
+
+ start = time.Now()
+ cmd.Run()
+ d2 := time.Since(start)
+
+ if d1*20 > d2 {
+ // The slow version (d2) was less than 20 times
+ // slower than the fast version (d1), so OK.
+ return
+ }
+
+ tot1 += d1
+ tot2 += d2
+ }
+
+ t.Errorf("cgo check too slow: got %v, expected at most %v", tot2/tries, (tot1/tries)*20)
+}
+
+func TestCgoPanicDeadlock(t *testing.T) {
+ // test issue 14432
+ got := runTestProg(t, "testprogcgo", "CgoPanicDeadlock")
+ want := "panic: cgo error\n\n"
+ if !strings.HasPrefix(got, want) {
+ t.Fatalf("output does not start with %q:\n%s", want, got)
+ }
+}
// 1. defer a function that recovers
// 2. defer a function that panics
// 3. call goexit
- // Goexit should run the #2 defer. Its panic
+ // Goexit should run the #2 defer. Its panic
// should be caught by the #1 defer, and execution
- // should resume in the caller. Like the Goexit
+ // should resume in the caller. Like the Goexit
// never happened!
defer func() {
r := recover()
t.Fatalf("output does not start with %q:\n%s", want, output)
}
}
+
+func TestPanicTraceback(t *testing.T) {
+ output := runTestProg(t, "testprog", "PanicTraceback")
+ want := "panic: hello"
+ if !strings.HasPrefix(output, want) {
+ t.Fatalf("output does not start with %q:\n%s", want, output)
+ }
+
+ // Check functions in the traceback.
+ fns := []string{"panic", "main.pt1.func1", "panic", "main.pt2.func1", "panic", "main.pt2", "main.pt1"}
+ for _, fn := range fns {
+ re := regexp.MustCompile(`(?m)^` + regexp.QuoteMeta(fn) + `\(.*\n`)
+ idx := re.FindStringIndex(output)
+ if idx == nil {
+ t.Fatalf("expected %q function in traceback:\n%s", fn, output)
+ }
+ output = output[idx[1]:]
+ }
+}
+
+func testPanicDeadlock(t *testing.T, name string, want string) {
+ // test issue 14432
+ output := runTestProg(t, "testprog", name)
+ if !strings.HasPrefix(output, want) {
+ t.Fatalf("output does not start with %q:\n%s", want, output)
+ }
+}
+
+func TestPanicDeadlockGosched(t *testing.T) {
+ testPanicDeadlock(t, "GoschedInPanic", "panic: errorThatGosched\n\n")
+}
+
+func TestPanicDeadlockSyscall(t *testing.T) {
+ testPanicDeadlock(t, "SyscallInPanic", "1\n2\npanic: 3\n\n")
+}
"os/exec"
"path/filepath"
"runtime"
+ "strings"
"syscall"
"testing"
)
cmd = exec.Command(filepath.Join(dir, "a.exe"))
cmd = testEnv(cmd)
cmd.Env = append(cmd.Env, "GOTRACEBACK=crash")
+
+ // Set GOGC=off. Because of golang.org/issue/10958, the tight
+ // loops in the test program are not preemptible. If GC kicks
+ // in, it may lock up and prevent main from saying it's ready.
+ newEnv := []string{}
+ for _, s := range cmd.Env {
+ if !strings.HasPrefix(s, "GOGC=") {
+ newEnv = append(newEnv, s)
+ }
+ }
+ cmd.Env = append(newEnv, "GOGC=off")
+
var outbuf bytes.Buffer
cmd.Stdout = &outbuf
cmd.Stderr = &outbuf
func TestSignalExitStatus(t *testing.T) {
testenv.MustHaveGoBuild(t)
switch runtime.GOOS {
- case "netbsd":
- t.Skip("skipping on NetBSD; see https://golang.org/issue/14063")
+ case "netbsd", "solaris":
+ t.Skipf("skipping on %s; see https://golang.org/issue/14063", runtime.GOOS)
}
exe, err := buildTestProg(t, "testprog")
if err != nil {
)
// GOMAXPROCS sets the maximum number of CPUs that can be executing
-// simultaneously and returns the previous setting. If n < 1, it does not
+// simultaneously and returns the previous setting. If n < 1, it does not
// change the current setting.
// The number of logical CPUs on the local machine can be queried with NumCPU.
// This call will go away when the scheduler improves.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
func check(t *testing.T, line, has string) {
- if strings.Index(line, has) < 0 {
+ if !strings.Contains(line, has) {
t.Errorf("expected %q in %q", has, line)
}
}
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include <linux/eventpoll.h>
// This is the sigaction structure from the Linux 2.1.68 kernel which
-// is used with the rt_sigaction system call. For 386 this is not
+// is used with the rt_sigaction system call. For 386 this is not
// defined in any public header file.
struct kernel_sigaction {
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func typestring(x interface{}) string {
e := efaceOf(&x)
- return *e._type._string
+ return e._type._string
}
// For calling from C.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package runtime
var NewOSProc0 = newosproc0
+var Mincore = mincore
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build darwin dragonfly freebsd linux nacl netbsd openbsd solaris
+
+// Export guts for testing.
+
+package runtime
+
+var Mmap = mmap
+
+const ENOMEM = _ENOMEM
+const MAP_ANON = _MAP_ANON
+const MAP_PRIVATE = _MAP_PRIVATE
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
return (*LFNode)(unsafe.Pointer(lfstackpop(head)))
}
-type ParFor struct {
- body func(*ParFor, uint32)
- done uint32
- Nthr uint32
- thrseq uint32
- Cnt uint32
- wait bool
-}
-
-func NewParFor(nthrmax uint32) *ParFor {
- var desc *ParFor
- systemstack(func() {
- desc = (*ParFor)(unsafe.Pointer(parforalloc(nthrmax)))
- })
- return desc
-}
-
-func ParForSetup(desc *ParFor, nthr, n uint32, wait bool, body func(*ParFor, uint32)) {
- systemstack(func() {
- parforsetup((*parfor)(unsafe.Pointer(desc)), nthr, n, wait,
- *(*func(*parfor, uint32))(unsafe.Pointer(&body)))
- })
-}
-
-func ParForDo(desc *ParFor) {
- systemstack(func() {
- parfordo((*parfor)(unsafe.Pointer(desc)))
- })
-}
-
-func ParForIters(desc *ParFor, tid uint32) (uint32, uint32) {
- desc1 := (*parfor)(unsafe.Pointer(desc))
- pos := desc1.thr[tid].pos
- return uint32(pos), uint32(pos >> 32)
-}
-
func GCMask(x interface{}) (ret []byte) {
systemstack(func() {
ret = getgcmask(x)
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
import "runtime/internal/sys"
// Caller reports file and line number information about function invocations on
-// the calling goroutine's stack. The argument skip is the number of stack frames
+// the calling goroutine's stack. The argument skip is the number of stack frames
// to ascend, with 0 identifying the caller of Caller. (For historical reasons the
// meaning of skip differs between Caller and Callers.) The return values report the
// program counter, file name, and line number within the file of the corresponding
-// call. The boolean ok is false if it was not possible to recover the information.
+// call. The boolean ok is false if it was not possible to recover the information.
func Caller(skip int) (pc uintptr, file string, line int, ok bool) {
// Ask for two PCs: the one we were asked for
// and what it called, so that we can see if it
}
// Callers fills the slice pc with the return program counters of function invocations
-// on the calling goroutine's stack. The argument skip is the number of stack frames
+// on the calling goroutine's stack. The argument skip is the number of stack frames
// to skip before recording in pc, with 0 identifying the frame for Callers itself and
// 1 identifying the caller of Callers.
// It returns the number of entries written to pc.
//
// Note that since each slice entry pc[i] is a return program counter,
// looking up the file and line for pc[i] (for example, using (*Func).FileLine)
-// will return the file and line number of the instruction immediately
+// will normally return the file and line number of the instruction immediately
// following the call.
-// To look up the file and line number of the call itself, use pc[i]-1.
-// As an exception to this rule, if pc[i-1] corresponds to the function
-// runtime.sigpanic, then pc[i] is the program counter of a faulting
-// instruction and should be used without any subtraction.
+// To easily look up file/line information for the call sequence, use Frames.
func Callers(skip int, pc []uintptr) int {
// runtime.callers uses pc.array==nil as a signal
- // to print a stack trace. Pick off 0-length pc here
+ // to print a stack trace. Pick off 0-length pc here
// so that we don't let a nil pc slice get to it.
if len(pc) == 0 {
return 0
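The doc change above steers callers away from manual `pc[i]-1` arithmetic and toward `Frames` (new in Go 1.7) for symbolizing a `Callers` result. A sketch of that recommended pattern, using `runtime.CallersFrames`:

```go
package main

import (
	"fmt"
	"runtime"
)

// callerName reports the function that called it, letting Frames
// handle the return-address adjustment instead of pc[i]-1.
func callerName() string {
	pc := make([]uintptr, 8)
	n := runtime.Callers(2, pc) // skip Callers itself and callerName
	frames := runtime.CallersFrames(pc[:n])
	frame, _ := frames.Next()
	return frame.Function
}

func main() {
	fmt.Println(callerName())
}
```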
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// https://code.google.com/p/smhasher/
// This code is a port of some of the Smhasher tests to Go.
//
-// The current AES hash function passes Smhasher. Our fallback
+// The current AES hash function passes Smhasher. Our fallback
// hash functions don't, so we only enable the difficult tests when
// we know the AES implementation is available.
k.i = uint64(r.Int63())
}
func (k *EfaceKey) bits() int {
- // use 64 bits. This tests inlined interfaces
+ // use 64 bits. This tests inlined interfaces
// on 64-bit targets and indirect interfaces on
// 32-bit targets.
return 64
k.i = fInter(r.Int63())
}
func (k *IfaceKey) bits() int {
- // use 64 bits. This tests inlined interfaces
+ // use 64 bits. This tests inlined interfaces
// on 64-bit targets and indirect interfaces on
// 32-bit targets.
return 64
// Each entry in the grid should be about REP/2.
// More precisely, we did N = k.bits() * hashSize experiments where
- // each is the sum of REP coin flips. We want to find bounds on the
+ // each is the sum of REP coin flips. We want to find bounds on the
// sum of coin flips such that a truly random experiment would have
// all sums inside those bounds with 99% probability.
N := n * hashSize
func BenchmarkHash65536(b *testing.B) { benchmarkHash(b, 65536) }
func TestArrayHash(t *testing.T) {
- // Make sure that "" in arrays hash correctly. The hash
+ // Make sure that "" in arrays hash correctly. The hash
// should at least scramble the input seed so that, e.g.,
// {"","foo"} and {"foo",""} have different hashes.
// If the hash is bad, then all (8 choose 4) = 70 keys
// have the same hash. If so, we allocate 70/8 = 8
- // overflow buckets. If the hash is good we don't
+ // overflow buckets. If the hash is good we don't
// normally allocate any overflow buckets, and the
// probability of even one or two overflows goes down rapidly.
- // (There is always 1 allocation of the bucket array. The map
+ // (There is always 1 allocation of the bucket array. The map
// header is allocated on the stack.)
f := func() {
- // Make the key type at most 128 bytes. Otherwise,
+ // Make the key type at most 128 bytes. Otherwise,
// we get an allocation per key.
type key [8]string
m := make(map[key]bool, 70)
// This file contains the implementation of Go's map type.
//
-// A map is just a hash table. The data is arranged
-// into an array of buckets. Each bucket contains up to
-// 8 key/value pairs. The low-order bits of the hash are
-// used to select a bucket. Each bucket contains a few
+// A map is just a hash table. The data is arranged
+// into an array of buckets. Each bucket contains up to
+// 8 key/value pairs. The low-order bits of the hash are
+// used to select a bucket. Each bucket contains a few
// high-order bits of each hash to distinguish the entries
// within a single bucket.
//
// extra buckets.
//
// When the hashtable grows, we allocate a new array
-// of buckets twice as big. Buckets are incrementally
+// of buckets twice as big. Buckets are incrementally
// copied from the old bucket array to the new bucket array.
//
// Map iterators walk through the array of buckets and
// to the new table.
// Picking loadFactor: too large and we have lots of overflow
-// buckets, too small and we waste a lot of space. I wrote
+// buckets, too small and we waste a lot of space. I wrote
// a simple program to check some stats for different loads:
// (64-bit, 8 byte keys and values)
// loadFactor %overflow bytes/entry hitprobe missprobe
// missprobe = # of entries to check when looking up an absent key
//
// Keep in mind this data is for maximally loaded tables, i.e. just
-// before the table grows. Typical tables will be somewhat less loaded.
+// before the table grows. Typical tables will be somewhat less loaded.
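The comment block above describes selecting a bucket from the low-order hash bits and keeping a few high-order "tophash" bits per entry to reject mismatches cheaply. A toy illustration of that split (this is our sketch, not the runtime's actual hash or layout):

```go
package main

import "fmt"

const B = 3 // log2 of the bucket count, so 8 buckets

// bucketIndex selects a bucket from the low-order B bits of the hash.
func bucketIndex(hash uint64) uint64 { return hash & (1<<B - 1) }

// topHash keeps 8 high-order bits, used to skip entries whose full
// key comparison could not possibly succeed.
func topHash(hash uint64) uint8 { return uint8(hash >> 56) }

func main() {
	h := uint64(0xdeadbeefcafef00d)
	fmt.Printf("bucket %d, tophash %#x\n", bucketIndex(h), topHash(h))
}
```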
import (
"runtime/internal/atomic"
maxValueSize = 128
// data offset should be the size of the bmap struct, but needs to be
- // aligned correctly. For amd64p32 this means 64-bit alignment
+ // aligned correctly. For amd64p32 this means 64-bit alignment
// even though pointers are 32 bit.
dataOffset = unsafe.Offsetof(struct {
b bmap
v int64
}{}.v)
- // Possible tophash values. We reserve a few possibilities for special marks.
+ // Possible tophash values. We reserve a few possibilities for special marks.
// Each bucket (including its overflow buckets, if any) will have either all or none of its
// entries in the evacuated* states (except during the evacuate() method, which only happens
// during map writes and thus no one else can observe the map during that time).
// A header for a Go map.
type hmap struct {
// Note: the format of the Hmap is encoded in ../../cmd/internal/gc/reflect.go and
- // ../reflect/type.go. Don't change this structure without also changing that code!
+ // ../reflect/type.go. Don't change this structure without also changing that code!
count int // # live cells == size of map. Must be first (used by len() builtin)
flags uint8
B uint8 // log_2 of # of buckets (can hold up to loadFactor * 2^B items)
throw("value size wrong")
}
- // invariants we depend on. We should probably check these at compile time
+ // invariants we depend on. We should probably check these at compile time
// somewhere, but for now we'll do it here.
if t.key.align > bucketCnt {
throw("key align too big")
}
}
-// returns both key and value. Used by map iterator
+// returns both key and value. Used by map iterator
func mapaccessK(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, unsafe.Pointer) {
if h == nil || h.count == 0 {
return nil, nil
if !alg.equal(key, k2) {
continue
}
- // already have a mapping for key. Update it.
+ // already have a mapping for key. Update it.
if t.needkeyupdate {
typedmemmove(t.key, k2, key)
}
b = ovf
}
- // did not find mapping for key. Allocate new cell & add entry.
+ // did not find mapping for key. Allocate new cell & add entry.
if float32(h.count) >= loadFactor*float32((uintptr(1)<<h.B)) && h.count >= bucketCnt {
hashGrow(t, h)
goto again // Growing the table invalidates everything, so try again
if b.tophash[offi] != empty && b.tophash[offi] != evacuatedEmpty {
if checkBucket != noCheck {
// Special case: iterator was started during a grow and the
- // grow is not done yet. We're working on a bucket whose
- // oldbucket has not been evacuated yet. Or at least, it wasn't
- // evacuated when we started the bucket. So we're iterating
+ // grow is not done yet. We're working on a bucket whose
+ // oldbucket has not been evacuated yet. Or at least, it wasn't
+ // evacuated when we started the bucket. So we're iterating
// through the oldbucket, skipping any keys that will go
// to the other new bucket (each oldbucket expands to two
// buckets during a grow).
} else {
// Hash isn't repeatable if k != k (NaNs). We need a
// repeatable and randomish choice of which direction
- // to send NaNs during evacuation. We'll use the low
+ // to send NaNs during evacuation. We'll use the low
// bit of tophash to decide which way NaNs go.
// NOTE: this case is why we need two evacuate tophash
// values, evacuatedX and evacuatedY, that differ in
it.value = rv
} else {
// if key!=key then the entry can't be deleted or
- // updated, so we can just return it. That's lucky for
+ // updated, so we can just return it. That's lucky for
// us because when key!=key we can't look it up
// successfully in the current table.
it.key = k2
if h.flags&iterator != 0 {
if !t.reflexivekey && !alg.equal(k2, k2) {
// If key != key (NaNs), then the hash could be (and probably
- // will be) entirely different from the old hash. Moreover,
- // it isn't reproducible. Reproducibility is required in the
+ // will be) entirely different from the old hash. Moreover,
+ // it isn't reproducible. Reproducibility is required in the
// presence of iterators, as our evacuation decision must
// match whatever decision the iterator made.
// Fortunately, we have the freedom to send these keys either
- // way. Also, tophash is meaningless for these kinds of keys.
+ // way. Also, tophash is meaningless for these kinds of keys.
// We let the low bit of tophash drive the evacuation decision.
// We recompute a new random tophash for the next level so
// these keys will get evenly distributed across all buckets
if oldbucket == h.nevacuate {
h.nevacuate = oldbucket + 1
if oldbucket+1 == newbit { // newbit == # of oldbuckets
- // Growing is all done. Free old main bucket array.
+ // Growing is all done. Free old main bucket array.
h.oldbuckets = nil
// Can discard old overflow buckets as well.
// If they are still referenced by an iterator,
return t.alg.hash != nil
}
-// Reflect stubs. Called from ../reflect/asm_*.s
+// Reflect stubs. Called from ../reflect/asm_*.s
//go:linkname reflect_makemap reflect.makemap
func reflect_makemap(t *maptype) *hmap {
}
var b *bmap
if h.B == 0 {
- // One-bucket table. No need to hash.
+ // One-bucket table. No need to hash.
b = (*bmap)(h.buckets)
} else {
hash := t.key.alg.hash(noescape(unsafe.Pointer(&key)), uintptr(h.hash0))
}
var b *bmap
if h.B == 0 {
- // One-bucket table. No need to hash.
+ // One-bucket table. No need to hash.
b = (*bmap)(h.buckets)
} else {
hash := t.key.alg.hash(noescape(unsafe.Pointer(&key)), uintptr(h.hash0))
}
var b *bmap
if h.B == 0 {
- // One-bucket table. No need to hash.
+ // One-bucket table. No need to hash.
b = (*bmap)(h.buckets)
} else {
hash := t.key.alg.hash(noescape(unsafe.Pointer(&key)), uintptr(h.hash0))
}
var b *bmap
if h.B == 0 {
- // One-bucket table. No need to hash.
+ // One-bucket table. No need to hash.
b = (*bmap)(h.buckets)
} else {
hash := t.key.alg.hash(noescape(unsafe.Pointer(&key)), uintptr(h.hash0))
if k.len != key.len {
continue
}
- if k.str == key.str || memeq(k.str, key.str, uintptr(key.len)) {
+ if k.str == key.str || memequal(k.str, key.str, uintptr(key.len)) {
return add(unsafe.Pointer(b), dataOffset+bucketCnt*2*sys.PtrSize+i*uintptr(t.valuesize))
}
}
continue
}
if keymaybe != bucketCnt {
- // Two keys are potential matches. Use hash to distinguish them.
+ // Two keys are potential matches. Use hash to distinguish them.
goto dohash
}
keymaybe = i
}
if keymaybe != bucketCnt {
k := (*stringStruct)(add(unsafe.Pointer(b), dataOffset+keymaybe*2*sys.PtrSize))
- if memeq(k.str, key.str, uintptr(key.len)) {
+ if memequal(k.str, key.str, uintptr(key.len)) {
return add(unsafe.Pointer(b), dataOffset+bucketCnt*2*sys.PtrSize+keymaybe*uintptr(t.valuesize))
}
}
if k.len != key.len {
continue
}
- if k.str == key.str || memeq(k.str, key.str, uintptr(key.len)) {
+ if k.str == key.str || memequal(k.str, key.str, uintptr(key.len)) {
return add(unsafe.Pointer(b), dataOffset+bucketCnt*2*sys.PtrSize+i*uintptr(t.valuesize))
}
}
if k.len != key.len {
continue
}
- if k.str == key.str || memeq(k.str, key.str, uintptr(key.len)) {
+ if k.str == key.str || memequal(k.str, key.str, uintptr(key.len)) {
return add(unsafe.Pointer(b), dataOffset+bucketCnt*2*sys.PtrSize+i*uintptr(t.valuesize)), true
}
}
continue
}
if keymaybe != bucketCnt {
- // Two keys are potential matches. Use hash to distinguish them.
+ // Two keys are potential matches. Use hash to distinguish them.
goto dohash
}
keymaybe = i
}
if keymaybe != bucketCnt {
k := (*stringStruct)(add(unsafe.Pointer(b), dataOffset+keymaybe*2*sys.PtrSize))
- if memeq(k.str, key.str, uintptr(key.len)) {
+ if memequal(k.str, key.str, uintptr(key.len)) {
return add(unsafe.Pointer(b), dataOffset+bucketCnt*2*sys.PtrSize+keymaybe*uintptr(t.valuesize)), true
}
}
if k.len != key.len {
continue
}
- if k.str == key.str || memeq(k.str, key.str, uintptr(key.len)) {
+ if k.str == key.str || memequal(k.str, key.str, uintptr(key.len)) {
return add(unsafe.Pointer(b), dataOffset+bucketCnt*2*sys.PtrSize+i*uintptr(t.valuesize)), true
}
}
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Implementation of runtime/debug.WriteHeapDump. Writes all
+// Implementation of runtime/debug.WriteHeapDump. Writes all
// objects in the heap plus additional info (roots, threads,
// finalizers, etc.) to a file.
// Inside a bucket, we keep a list of types that
// have been serialized so far, most recently used first.
// Note: when a bucket overflows we may end up
-// serializing a type more than once. That's ok.
+// serializing a type more than once. That's ok.
const (
typeCacheBuckets = 256
typeCacheAssoc = 4
}
}
- // Might not have been dumped yet. Dump it and
+ // Might not have been dumped yet. Dump it and
// remember we did so.
for j := typeCacheAssoc - 1; j > 0; j-- {
b.t[j] = b.t[j-1]
dumpint(tagType)
dumpint(uint64(uintptr(unsafe.Pointer(t))))
dumpint(uint64(t.size))
- if t.x == nil || t.x.pkgpath == nil || t.x.name == nil {
- dumpstr(*t._string)
+ if t.x == nil || t.x.pkgpath == nil {
+ dumpstr(t._string)
} else {
pkgpath := stringStructOf(t.x.pkgpath)
- name := stringStructOf(t.x.name)
+ namestr := t.name()
+ name := stringStructOf(&namestr)
dumpint(uint64(uintptr(pkgpath.len) + 1 + uintptr(name.len)))
dwrite(pkgpath.str, uintptr(pkgpath.len))
dwritebyte('.')
pcdata := pcdatavalue(f, _PCDATA_StackMapIndex, pc, nil)
if pcdata == -1 {
// We do not have a valid pcdata value but there might be a
- // stackmap for this function. It is likely that we are looking
+ // stackmap for this function. It is likely that we are looking
// at the function prologue, assume so and hope for the best.
pcdata = 0
}
func itab_callback(tab *itab) {
t := tab._type
- // Dump a map from itab* to the type of its data field.
- // We want this map so we can deduce types of interface referents.
- if t.kind&kindDirectIface == 0 {
- // indirect - data slot is a pointer to t.
- dumptype(t.ptrto)
- dumpint(tagItab)
- dumpint(uint64(uintptr(unsafe.Pointer(tab))))
- dumpint(uint64(uintptr(unsafe.Pointer(t.ptrto))))
- } else if t.kind&kindNoPointers == 0 {
- // t is pointer-like - data slot is a t.
- dumptype(t)
- dumpint(tagItab)
- dumpint(uint64(uintptr(unsafe.Pointer(tab))))
- dumpint(uint64(uintptr(unsafe.Pointer(t))))
- } else {
- // Data slot is a scalar. Dump type just for fun.
- // With pointer-only interfaces, this shouldn't happen.
- dumptype(t)
- dumpint(tagItab)
- dumpint(uint64(uintptr(unsafe.Pointer(tab))))
- dumpint(uint64(uintptr(unsafe.Pointer(t))))
- }
+ dumptype(t)
+ dumpint(tagItab)
+ dumpint(uint64(uintptr(unsafe.Pointer(tab))))
+ dumpint(uint64(uintptr(unsafe.Pointer(t))))
}
func dumpitabs() {
}
}
-var dumphdr = []byte("go1.6 heap dump\n")
+var dumphdr = []byte("go1.7 heap dump\n")
func mdump() {
// make sure we're done sweeping
}
// The heap dump reader needs to be able to disambiguate
-// Eface entries. So it needs to know every type that might
-// appear in such an entry. The following routine accomplishes that.
+// Eface entries. So it needs to know every type that might
+// appear in such an entry. The following routine accomplishes that.
// TODO(rsc, khr): Delete - no longer possible.
// Dump all the types that appear in the type field of
if canfail {
return nil
}
- panic(&TypeAssertionError{"", *typ._string, *inter.typ._string, *inter.mhdr[0].name})
+ panic(&TypeAssertionError{"", typ._string, inter.typ._string, *inter.mhdr[0].name})
}
// compiler has provided some good hash codes for us.
if locked != 0 {
unlock(&ifaceLock)
}
- panic(&TypeAssertionError{"", *typ._string, *inter.typ._string, *iname})
+ panic(&TypeAssertionError{"", typ._string, inter.typ._string, *iname})
}
m.bad = 1
break
x = newobject(t)
}
// TODO: We allocate a zeroed object only to overwrite it with
- // actual data. Figure out how to avoid zeroing. Also below in convT2I.
+ // actual data. Figure out how to avoid zeroing. Also below in convT2I.
typedmemmove(t, x, elem)
e._type = t
e.data = x
func panicdottype(have, want, iface *_type) {
haveString := ""
if have != nil {
- haveString = *have._string
+ haveString = have._string
}
- panic(&TypeAssertionError{*iface._string, haveString, *want._string, ""})
+ panic(&TypeAssertionError{iface._string, haveString, want._string, ""})
}
func assertI2T(t *_type, i iface, r unsafe.Pointer) {
tab := i.tab
if tab == nil {
- panic(&TypeAssertionError{"", "", *t._string, ""})
+ panic(&TypeAssertionError{"", "", t._string, ""})
}
if tab._type != t {
- panic(&TypeAssertionError{*tab.inter.typ._string, *tab._type._string, *t._string, ""})
+ panic(&TypeAssertionError{tab.inter.typ._string, tab._type._string, t._string, ""})
}
if r != nil {
if isDirectIface(t) {
func assertE2T(t *_type, e eface, r unsafe.Pointer) {
if e._type == nil {
- panic(&TypeAssertionError{"", "", *t._string, ""})
+ panic(&TypeAssertionError{"", "", t._string, ""})
}
if e._type != t {
- panic(&TypeAssertionError{"", *e._type._string, *t._string, ""})
+ panic(&TypeAssertionError{"", e._type._string, t._string, ""})
}
if r != nil {
if isDirectIface(t) {
tab := i.tab
if tab == nil {
// explicit conversions require non-nil interface value.
- panic(&TypeAssertionError{"", "", *inter.typ._string, ""})
+ panic(&TypeAssertionError{"", "", inter.typ._string, ""})
}
r._type = tab._type
r.data = i.data
tab := i.tab
if tab == nil {
// explicit conversions require non-nil interface value.
- panic(&TypeAssertionError{"", "", *inter.typ._string, ""})
+ panic(&TypeAssertionError{"", "", inter.typ._string, ""})
}
if tab.inter == inter {
r.tab = tab
t := e._type
if t == nil {
// explicit conversions require non-nil interface value.
- panic(&TypeAssertionError{"", "", *inter.typ._string, ""})
+ panic(&TypeAssertionError{"", "", inter.typ._string, ""})
}
r.tab = getitab(inter, t, false)
r.data = e.data
func assertE2E(inter *interfacetype, e eface, r *eface) {
if e._type == nil {
// explicit conversions require non-nil interface value.
- panic(&TypeAssertionError{"", "", *inter.typ._string, ""})
+ panic(&TypeAssertionError{"", "", inter.typ._string, ""})
}
*r = e
}
return true
}
-func ifacethash(i iface) uint32 {
- tab := i.tab
- if tab == nil {
- return 0
- }
- return tab._type.hash
-}
-
-func efacethash(e eface) uint32 {
- t := e._type
- if t == nil {
- return 0
- }
- return t.hash
-}
-
func iterate_itabs(fn func(*itab)) {
for _, h := range &hash {
for ; h != nil; h = h.link {
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// R4 = ((ptr & 3) * 8)
AND $3, R1, R4
SLLV $3, R4
- // Shift val for aligned ptr. R2 = val << R4
+ // Shift val for aligned ptr. R2 = val << R4
SLLV R4, R2
SYNC
// R4 = ((ptr & 3) * 8)
AND $3, R1, R4
SLLV $3, R4
- // Shift val for aligned ptr. R2 = val << R4 | ^(0xFF << R4)
+ // Shift val for aligned ptr. R2 = val << R4 | ^(0xFF << R4)
MOVV $0xFF, R5
SLLV R4, R2
SLLV R4, R5
#endif
// R6 = ((ptr & 3) * 8) = (ptr << 3) & (3*8)
RLDC $3, R3, $(3*8), R6
- // Shift val for aligned ptr. R4 = val << R6
+ // Shift val for aligned ptr. R4 = val << R6
SLD R6, R4, R4
again:
#endif
// R6 = ((ptr & 3) * 8) = (ptr << 3) & (3*8)
RLDC $3, R3, $(3*8), R6
- // Shift val for aligned ptr. R4 = val << R6 | ^(0xFF << R6)
+ // Shift val for aligned ptr. R4 = val << R6 | ^(0xFF << R6)
MOVD $0xFF, R7
SLD R6, R4
SLD R6, R7
}
}
-// Tests that xadduintptr correctly updates 64-bit values. The place where
+// Tests that xadduintptr correctly updates 64-bit values. The place where
// we actually do so is mstats.go, functions mSysStat{Inc,Dec}.
func TestXadduintptrOnUint64(t *testing.T) {
/* if runtime.BigEndian != 0 {
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// wait is either MUTEX_LOCKED or MUTEX_SLEEPING
// depending on whether there is a thread sleeping
- // on this mutex. If we ever change l->key from
+ // on this mutex. If we ever change l->key from
// MUTEX_SLEEPING to some other value, we must be
// careful to change it back to MUTEX_SLEEPING before
// returning, to ensure that the sleeping thread gets
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
}
if v&locked != 0 {
- // Queued. Wait.
+ // Queued. Wait.
semasleep(-1)
i = 0
}
// Two notewakeups! Not allowed.
throw("notewakeup - double wakeup")
default:
- // Must be the waiting m. Wake it up.
+ // Must be the waiting m. Wake it up.
semawakeup((*m)(unsafe.Pointer(v)))
}
}
}
return
}
- // Queued. Sleep.
+ // Queued. Sleep.
gp.m.blocked = true
semasleep(-1)
gp.m.blocked = false
return true
}
if ns < 0 {
- // Queued. Sleep.
+ // Queued. Sleep.
gp.m.blocked = true
semasleep(-1)
gp.m.blocked = false
deadline = nanotime() + ns
for {
- // Registered. Sleep.
+ // Registered. Sleep.
gp.m.blocked = true
if semasleep(ns) >= 0 {
gp.m.blocked = false
return true
}
gp.m.blocked = false
- // Interrupted or timed out. Still registered. Semaphore not acquired.
+ // Interrupted or timed out. Still registered. Semaphore not acquired.
ns = deadline - nanotime()
if ns <= 0 {
break
}
- // Deadline hasn't arrived. Keep sleeping.
+ // Deadline hasn't arrived. Keep sleeping.
}
- // Deadline arrived. Still registered. Semaphore not acquired.
+ // Deadline arrived. Still registered. Semaphore not acquired.
// Want to give up and return, but have to unregister first,
// so that any notewakeup racing with the return does not
// try to grant us the semaphore when we don't expect it.
// directly, bypassing the MCache and MCentral free lists.
//
// The small objects on the MCache and MCentral free lists
-// may or may not be zeroed. They are zeroed if and only if
-// the second word of the object is zero. A span in the
+// may or may not be zeroed. They are zeroed if and only if
+// the second word of the object is zero. A span in the
// page heap is zeroed unless s->needzero is set. When a span
// is allocated to break into small objects, it is zeroed if needed
// and s->needzero is set. There are two main benefits to delaying the
// _64bit = 1 on 64-bit systems, 0 on 32-bit systems
_64bit = 1 << (^uintptr(0) >> 63) / 2
- // Computed constant. The definition of MaxSmallSize and the
+ // Computed constant. The definition of MaxSmallSize and the
// algorithm in msize.go produces some number of different allocation
- // size classes. NumSizeClasses is that number. It's needed here
+ // size classes. NumSizeClasses is that number. It's needed here
// because there are static arrays of this length; when msize runs its
// size choosing algorithm it double-checks that NumSizeClasses agrees.
_NumSizeClasses = 67
// Per-P, per order stack segment cache size.
_StackCacheSize = 32 * 1024
- // Number of orders that get caching. Order 0 is FixedStack
+ // Number of orders that get caching. Order 0 is FixedStack
// and each successive order is twice as large.
- // We want to cache 2KB, 4KB, 8KB, and 16KB stacks. Larger stacks
+ // We want to cache 2KB, 4KB, 8KB, and 16KB stacks. Larger stacks
// will be allocated directly.
// Since FixedStack is different on different systems, we
// must vary NumStackOrders to keep the same maximum cached size.
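The comment above says order 0 is FixedStack and each order doubles, so caching four orders covers the 2KB–16KB stacks it mentions (on systems where FixedStack is 2KB; that value is an assumption of this sketch):

```go
package main

import "fmt"

func main() {
	const fixedStack = 2048 // assumed FixedStack for this sketch
	const numStackOrders = 4

	// Order 0 is FixedStack; each successive order is twice as large,
	// giving 2KB, 4KB, 8KB, and 16KB cached stack sizes.
	for order := 0; order < numStackOrders; order++ {
		fmt.Printf("order %d: %d bytes\n", order, fixedStack<<order)
	}
}
```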
// Max number of threads to run garbage collection.
// 2, 3, and 4 are all plausible maximums depending
- // on the hardware details of the machine. The garbage
+ // on the hardware details of the machine. The garbage
// collector scales well to 32 cpus.
_MaxGcproc = 32
)
//
// SysFree returns it unconditionally; this is only used if
// an out-of-memory error has been detected midway through
-// an allocation. It is okay if SysFree is a no-op.
+// an allocation. It is okay if SysFree is a no-op.
//
// SysReserve reserves address space without allocating memory.
// If the pointer passed to it is non-nil, the caller wants the
// reservation there, but SysReserve can still choose another
-// location if that one is unavailable. On some systems and in some
+// location if that one is unavailable. On some systems and in some
// cases SysReserve will simply check that the address space is
-// available and not actually reserve it. If SysReserve returns
+// available and not actually reserve it. If SysReserve returns
// non-nil, it sets *reserved to true if the address space is
// reserved, false if it has merely been checked.
// NOTE: SysReserve returns OS-aligned memory, but the heap allocator
// reserved, not merely checked.
//
// SysFault marks a (already sysAlloc'd) region to fault
-// if accessed. Used only for debugging the runtime.
+// if accessed. Used only for debugging the runtime.
func mallocinit() {
initSizes()
limit = 0
// Set up the allocation arena, a contiguous area of memory where
- // allocated data will be found. The arena begins with a bitmap large
+ // allocated data will be found. The arena begins with a bitmap large
// enough to hold 4 bits per allocated word.
if sys.PtrSize == 8 && (limit == 0 || limit > 1<<30) {
// On a 64-bit machine, allocate from a single contiguous reservation.
// SysReserve to use 0x0000XXc000000000 if possible (XX=00...7f).
// Allocating a 512 GB region takes away 39 bits, and the amd64
// doesn't let us choose the top 17 bits, so that leaves the 9 bits
- // in the middle of 0x00c0 for us to choose. Choosing 0x00c0 means
+ // in the middle of 0x00c0 for us to choose. Choosing 0x00c0 means
// that the valid memory addresses will begin 0x00c0, 0x00c1, ..., 0x00df.
// In little-endian, that's c0 00, c1 00, ..., df 00. None of those are valid
// UTF-8 sequences, and they are otherwise as far away from
- // ff (likely a common byte) as possible. If that fails, we try other 0xXXc0
- // addresses. An earlier attempt to use 0x11f8 caused out of memory errors
+ // ff (likely a common byte) as possible. If that fails, we try other 0xXXc0
+ // addresses. An earlier attempt to use 0x11f8 caused out of memory errors
// on OS X during thread allocations. 0x00c0 causes conflicts with
// AddressSanitizer which reserves all memory up to 0x0100.
// These choices are both for debuggability and to reduce the
spansSize = round(spansSize, _PageSize)
// SysReserve treats the address we ask for, end, as a hint,
- // not as an absolute requirement. If we ask for the end
+ // not as an absolute requirement. If we ask for the end
// of the data segment but the operating system requires
// a little more space before we can start allocating, it will
- // give out a slightly higher pointer. Except QEMU, which
+ // give out a slightly higher pointer. Except QEMU, which
// is buggy, as usual: it won't adjust the pointer upward.
// So adjust it upward a little bit ourselves: 1/4 MB to get
// away from the running binary image and then round up
return newarray(typ, n)
}
-// rawmem returns a chunk of pointerless memory. It is
+// rawmem returns a chunk of pointerless memory. It is
// not zeroed.
func rawmem(size uintptr) unsafe.Pointer {
return mallocgc(size, nil, flagNoScan|flagNoZero)
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// related operations. In particular there are times when the GC assumes
// that the world is stopped but scheduler related code is still being
// executed, dealing with syscalls, dealing with putting gs on runnable
-// queues and so forth. This code can not execute write barriers because
+// queues and so forth. This code cannot execute write barriers because
// the GC might drop them on the floor. Stopping the world involves removing
// the p associated with an m. We use the fact that m.p == nil to indicate
// that we are in one of these critical sections and throw if the write is of
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The result includes in its higher bits the bits for subsequent words
// described by the same bitmap byte.
func (h heapBits) bits() uint32 {
- return uint32(*h.bitp) >> h.shift
+ // The (shift & 31) eliminates a test and conditional branch
+ // from the generated code.
+ return uint32(*h.bitp) >> (h.shift & 31)
}
// isMarked reports whether the heap bits have the marked bit set.
throw("runtime: typeBitsBulkBarrier without type")
}
if typ.size != size {
- println("runtime: typeBitsBulkBarrier with type ", *typ._string, " of size ", typ.size, " but memory size", size)
+ println("runtime: typeBitsBulkBarrier with type ", typ._string, " of size ", typ.size, " but memory size", size)
throw("runtime: invalid typeBitsBulkBarrier")
}
if typ.kind&kindGCProg != 0 {
- println("runtime: typeBitsBulkBarrier with type ", *typ._string, " with GC prog")
+ println("runtime: typeBitsBulkBarrier with type ", typ._string, " with GC prog")
throw("runtime: invalid typeBitsBulkBarrier")
}
if !writeBarrier.needed {
// TODO(rsc): Perhaps introduce a different heapBitsSpan type.
// initSpan initializes the heap bitmap for a span.
+// It clears all mark and checkmark bits.
+// If this is a span of pointer-sized objects, it initializes all
+// words to pointer (and there are no dead bits).
+// Otherwise, it initializes all words to scalar/dead.
func (h heapBits) initSpan(size, n, total uintptr) {
if total%heapBitmapScale != 0 {
throw("initSpan: unaligned length")
// heapBitsSweepSpan coordinates the sweeping of a span by reading
// and updating the corresponding heap bitmap entries.
// For each free object in the span, heapBitsSweepSpan sets the type
-// bits for the first two words (or one for single-word objects) to typeDead
+// bits for the first four words (less for smaller objects) to scalar/dead
// and then calls f(p), where p is the object's base address.
// f is expected to add the object to a free list.
// For non-free objects, heapBitsSweepSpan turns off the marked bit.
}
if nw == 0 {
// No pointers! Caller was supposed to check.
- println("runtime: invalid type ", *typ._string)
+ println("runtime: invalid type ", typ._string)
throw("heapBitsSetType: called with non-pointer type")
return
}
if doubleCheck {
end := heapBitsForAddr(x + size)
if typ.kind&kindGCProg == 0 && (hbitp != end.bitp || (w == nw+2) != (end.shift == 2)) {
- println("ended at wrong bitmap byte for", *typ._string, "x", dataSize/typ.size)
+ println("ended at wrong bitmap byte for", typ._string, "x", dataSize/typ.size)
print("typ.size=", typ.size, " typ.ptrdata=", typ.ptrdata, " dataSize=", dataSize, " size=", size, "\n")
print("w=", w, " nw=", nw, " b=", hex(b), " nb=", nb, " hb=", hex(hb), "\n")
h0 := heapBitsForAddr(x)
}
}
if have != want {
- println("mismatch writing bits for", *typ._string, "x", dataSize/typ.size)
+ println("mismatch writing bits for", typ._string, "x", dataSize/typ.size)
print("typ.size=", typ.size, " typ.ptrdata=", typ.ptrdata, " dataSize=", dataSize, " size=", size, "\n")
print("kindGCProg=", typ.kind&kindGCProg != 0, "\n")
print("w=", w, " nw=", nw, " b=", hex(b), " nb=", nb, " hb=", hex(hb), "\n")
}
// Gets a span that has a free object in it and assigns it
-// to be the cached span for the given sizeclass. Returns this span.
+// to be the cached span for the given sizeclass. Returns this span.
func (c *mcache) refill(sizeclass int32) *mspan {
_g_ := getg()
// Free n objects from a span s back into the central free list c.
// Called during sweep.
-// Returns true if the span was returned to heap. Sets sweepgen to
+// Returns true if the span was returned to heap. Sets sweepgen to
// the latest generation.
// If preserve=true, don't return the span to heap nor relink in MCentral lists;
// caller takes care of it.
c.nonempty.insert(s)
}
- // delay updating sweepgen until here. This is the signal that
+ // delay updating sweepgen until here. This is the signal that
// the span may be used in an MCache, so it must come after the
// linked list operations above (actually, just after the
// lock of c above.)
s.needzero = 1
s.freelist = 0
unlock(&c.lock)
- heapBitsForSpan(s.base()).initSpan(s.layout())
mheap_.freeSpan(s, 0)
return true
}
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func sysReserve(v unsafe.Pointer, n uintptr, reserved *bool) unsafe.Pointer {
// On 64-bit, people with ulimit -v set complain if we reserve too
- // much address space. Instead, assume that the reservation is okay
+ // much address space. Instead, assume that the reservation is okay
// and check the assumption in SysMap.
if sys.PtrSize == 8 && uint64(n) > 1<<32 || sys.GoosNacl != 0 {
*reserved = false
return p
}
-func sysMap(v unsafe.Pointer, n uintptr, reserved bool, sysStat *uint64) {
- const _ENOMEM = 12
+const _ENOMEM = 12
+func sysMap(v unsafe.Pointer, n uintptr, reserved bool, sysStat *uint64) {
mSysStatInc(sysStat, n)
// On 64-bit, we don't actually have v reserved, so tread carefully.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func sysReserve(v unsafe.Pointer, n uintptr, reserved *bool) unsafe.Pointer {
// On 64-bit, people with ulimit -v set complain if we reserve too
- // much address space. Instead, assume that the reservation is okay
+ // much address space. Instead, assume that the reservation is okay
// if we can reserve at least 64K and check the assumption in SysMap.
// Only user-mode Linux (UML) rejects these requests.
if sys.PtrSize == 8 && uint64(n) > 1<<32 {
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
ADDQ $128, DI
CMPQ BX, $128
JAE loop_avx2_huge
- // In the desciption of MOVNTDQ in [1]
+ // In the description of MOVNTDQ in [1]
// "... fencing operation implemented with the SFENCE or MFENCE instruction
// should be used in conjunction with MOVNTDQ instructions..."
// [1] 64-ia-32-architectures-software-developer-manual-325462.pdf
// Inferno's libkern/memset-arm.s
// http://code.google.com/p/inferno-os/source/browse/libkern/memset-arm.s
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Revisions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com). All rights reserved.
// Portions Copyright 2009 The Go Authors. All rights reserved.
//
// Inferno's libkern/memmove-386.s
// http://code.google.com/p/inferno-os/source/browse/libkern/memmove-386.s
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Revisions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com). All rights reserved.
// Portions Copyright 2009 The Go Authors. All rights reserved.
//
MOVL n+8(FP), BX
// REP instructions have a high startup cost, so we handle small sizes
- // with some straightline code. The REP MOVSL instruction is really fast
- // for large sizes. The cutover is approximately 1K. We implement up to
+ // with some straightline code. The REP MOVSL instruction is really fast
+ // for large sizes. The cutover is approximately 1K. We implement up to
// 128 because that is the maximum SSE register load (loading all data
// into registers lets us ignore copy direction).
tail:
// Derived from Inferno's libkern/memmove-386.s (adapted for amd64)
// http://code.google.com/p/inferno-os/source/browse/libkern/memmove-386.s
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Revisions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com). All rights reserved.
// Portions Copyright 2009 The Go Authors. All rights reserved.
//
MOVQ n+16(FP), BX
// REP instructions have a high startup cost, so we handle small sizes
- // with some straightline code. The REP MOVSQ instruction is really fast
- // for large sizes. The cutover is approximately 2K.
+ // with some straightline code. The REP MOVSQ instruction is really fast
+ // for large sizes. The cutover is approximately 2K.
tail:
// move_129through256 or smaller work whether or not the source and the
// destination memory regions overlap because they load all data into
// Inferno's libkern/memmove-arm.s
// http://code.google.com/p/inferno-os/source/browse/libkern/memmove-arm.s
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Revisions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com). All rights reserved.
// Portions Copyright 2009 The Go Authors. All rights reserved.
//
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
CLD
// Note: we copy only 4 bytes at a time so that the tail is at most
- // 3 bytes. That guarantees that we aren't copying pointers with MOVSB.
+ // 3 bytes. That guarantees that we aren't copying pointers with MOVSB.
// See issue 13160.
RET
// Inferno's libkern/memmove-386.s
// http://code.google.com/p/inferno-os/source/browse/libkern/memmove-386.s
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Revisions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com). All rights reserved.
// Portions Copyright 2009 The Go Authors. All rights reserved.
//
MOVL n+8(FP), BX
// REP instructions have a high startup cost, so we handle small sizes
- // with some straightline code. The REP MOVSL instruction is really fast
- // for large sizes. The cutover is approximately 1K.
+ // with some straightline code. The REP MOVSL instruction is really fast
+ // for large sizes. The cutover is approximately 1K.
tail:
TESTL BX, BX
JEQ move_0
// Derived from Inferno's libkern/memmove-386.s (adapted for amd64)
// http://code.google.com/p/inferno-os/source/browse/libkern/memmove-386.s
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Revisions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com). All rights reserved.
// Portions Copyright 2009 The Go Authors. All rights reserved.
//
MOVQ n+16(FP), BX
// REP instructions have a high startup cost, so we handle small sizes
- // with some straightline code. The REP MOVSQ instruction is really fast
- // for large sizes. The cutover is approximately 1K.
+ // with some straightline code. The REP MOVSQ instruction is really fast
+ // for large sizes. The cutover is approximately 1K.
tail:
TESTQ BX, BX
JEQ move_0
BC 12, 9, backward // I think you should be able to write this as "BGT CR2, backward"
// Copying forward proceeds by copying R6 words then copying R7 bytes.
- // R3 and R4 are advanced as we copy. Becuase PPC64 lacks post-increment
+ // R3 and R4 are advanced as we copy. Because PPC64 lacks post-increment
// load/store, R3 and R4 point before the bytes that are to be copied.
BC 12, 6, noforwardlarge // "BEQ CR1, noforwardlarge"
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// SetFinalizer sets the finalizer associated with x to f.
// When the garbage collector finds an unreachable block
// with an associated finalizer, it clears the association and runs
-// f(x) in a separate goroutine. This makes x reachable again, but
-// now without an associated finalizer. Assuming that SetFinalizer
+// f(x) in a separate goroutine. This makes x reachable again, but
+// now without an associated finalizer. Assuming that SetFinalizer
// is not called again, the next time the garbage collector sees
// that x is unreachable, it will free x.
//
throw("runtime.SetFinalizer: first argument is nil")
}
if etyp.kind&kindMask != kindPtr {
- throw("runtime.SetFinalizer: first argument is " + *etyp._string + ", not pointer")
+ throw("runtime.SetFinalizer: first argument is " + etyp._string + ", not pointer")
}
ot := (*ptrtype)(unsafe.Pointer(etyp))
if ot.elem == nil {
}
if ftyp.kind&kindMask != kindFunc {
- throw("runtime.SetFinalizer: second argument is " + *ftyp._string + ", not a function")
+ throw("runtime.SetFinalizer: second argument is " + ftyp._string + ", not a function")
}
ft := (*functype)(unsafe.Pointer(ftyp))
if ft.dotdotdot || len(ft.in) != 1 {
- throw("runtime.SetFinalizer: cannot pass " + *etyp._string + " to finalizer " + *ftyp._string)
+ throw("runtime.SetFinalizer: cannot pass " + etyp._string + " to finalizer " + ftyp._string)
}
fint := ft.in[0]
switch {
// ok - same type
goto okarg
case fint.kind&kindMask == kindPtr:
- if (fint.x == nil || fint.x.name == nil || etyp.x == nil || etyp.x.name == nil) && (*ptrtype)(unsafe.Pointer(fint)).elem == ot.elem {
+ if (fint.x == nil || etyp.x == nil) && (*ptrtype)(unsafe.Pointer(fint)).elem == ot.elem {
// ok - not same type, but both pointers,
// one or the other is unnamed, and same element type, so assignable.
goto okarg
goto okarg
}
}
- throw("runtime.SetFinalizer: cannot pass " + *etyp._string + " to finalizer " + *ftyp._string)
+ throw("runtime.SetFinalizer: cannot pass " + etyp._string + " to finalizer " + ftyp._string)
okarg:
// compute size needed for return parameters
nret := uintptr(0)
})
}
-// Look up pointer v in heap. Return the span containing the object,
-// the start of the object, and the size of the object. If the object
+// Look up pointer v in heap. Return the span containing the object,
+// the start of the object, and the size of the object. If the object
// does not exist, return nil, nil, 0.
func findObject(v unsafe.Pointer) (s *mspan, x unsafe.Pointer, n uintptr) {
c := gomcache()
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Fixed-size object allocator. Returned memory is not zeroed.
+// Fixed-size object allocator. Returned memory is not zeroed.
//
// See malloc.go for overview.
}
// A generic linked list of blocks. (Typically the block is bigger than sizeof(MLink).)
-// Since assignments to mlink.next will result in a write barrier being preformed
-// this can not be used by some of the internal GC structures. For example when
+// Since assignments to mlink.next will result in a write barrier being performed
+// this cannot be used by some of the internal GC structures. For example when
// the sweeper is placing an unmarked object on the free list it does not want the
// write barrier to be called since that could result in the object being reachable.
type mlink struct {
// The compiler knows about this variable.
// If you change it, you must change the compiler too.
var writeBarrier struct {
- enabled bool // compiler emits a check of this before calling write barrier
- needed bool // whether we need a write barrier for current GC phase
- cgo bool // whether we need a write barrier for a cgo check
+ enabled bool // compiler emits a check of this before calling write barrier
+ needed bool // whether we need a write barrier for current GC phase
+ cgo bool // whether we need a write barrier for a cgo check
+ alignme uint64 // guarantee alignment so that compiler can use a 32 or 64-bit load
}
// gcBlackenEnabled is 1 if mutator assists and background mark
casgstatus(gp, _Grunning, _Gwaiting)
gp.waitreason = "garbage collection"
- // Run gc on the g0 stack. We do this so that the g stack
- // we're currently running on will no longer change. Cuts
+ // Run gc on the g0 stack. We do this so that the g stack
+ // we're currently running on will no longer change. Cuts
// the root set down a bit (g0 stacks are not scanned, and
// we don't need to scan gc's internal state). We also
// need to switch to g0 so we can shrink the stack.
gchelperstart()
- var gcw gcWork
- gcDrain(&gcw, gcDrainBlock)
+ gcw := &getg().m.p.ptr().gcw
+ gcDrain(gcw, gcDrainBlock)
gcw.dispose()
gcMarkRootCheck()
// Parallel mark over GC roots and heap
if gcphase == _GCmarktermination {
- var gcw gcWork
- gcDrain(&gcw, gcDrainBlock) // blocks in getfull
+ gcw := &_g_.m.p.ptr().gcw
+ gcDrain(gcw, gcDrainBlock) // blocks in getfull
gcw.dispose()
}
// Preemption must be disabled (because this uses a gcWork).
//
//go:nowritebarrier
-func markroot(i uint32) {
- // TODO: Consider using getg().m.p.ptr().gcw.
- var gcw gcWork
-
+func markroot(gcw *gcWork, i uint32) {
baseData := uint32(fixedRootCount)
baseBSS := baseData + uint32(work.nDataRoots)
baseSpans := baseBSS + uint32(work.nBSSRoots)
switch {
case baseData <= i && i < baseBSS:
for datap := &firstmoduledata; datap != nil; datap = datap.next {
- markrootBlock(datap.data, datap.edata-datap.data, datap.gcdatamask.bytedata, &gcw, int(i-baseData))
+ markrootBlock(datap.data, datap.edata-datap.data, datap.gcdatamask.bytedata, gcw, int(i-baseData))
}
case baseBSS <= i && i < baseSpans:
for datap := &firstmoduledata; datap != nil; datap = datap.next {
- markrootBlock(datap.bss, datap.ebss-datap.bss, datap.gcbssmask.bytedata, &gcw, int(i-baseBSS))
+ markrootBlock(datap.bss, datap.ebss-datap.bss, datap.gcbssmask.bytedata, gcw, int(i-baseBSS))
}
case i == fixedRootFinalizers:
for fb := allfin; fb != nil; fb = fb.alllink {
- scanblock(uintptr(unsafe.Pointer(&fb.fin[0])), uintptr(fb.cnt)*unsafe.Sizeof(fb.fin[0]), &finptrmask[0], &gcw)
+ scanblock(uintptr(unsafe.Pointer(&fb.fin[0])), uintptr(fb.cnt)*unsafe.Sizeof(fb.fin[0]), &finptrmask[0], gcw)
}
case i == fixedRootFlushCaches:
case baseSpans <= i && i < baseStacks:
// mark MSpan.specials
- markrootSpans(&gcw, int(i-baseSpans))
+ markrootSpans(gcw, int(i-baseSpans))
default:
// the rest is scanning goroutine stacks
}
})
}
-
- gcw.dispose()
}
// markrootBlock scans the shard'th shard of the block of memory [b0,
pcdata := pcdatavalue(f, _PCDATA_StackMapIndex, targetpc, cache)
if pcdata == -1 {
// We do not have a valid pcdata value but there might be a
- // stackmap for this function. It is likely that we are looking
+ // stackmap for this function. It is likely that we are looking
// at the function prologue, assume so and hope for the best.
pcdata = 0
}
if job >= work.markrootJobs {
break
}
- // TODO: Pass in gcw.
- markroot(job)
+ markroot(gcw, job)
}
}
if preserve {
throw("can't preserve large span")
}
- heapBitsForSpan(p).initSpan(s.layout())
s.needzero = 1
// Free the span after heapBitsSweepSpan
)
const (
- _Debugwbufs = false // if true check wbufs consistency
- _WorkbufSize = 2048 // in bytes; larger values result in less contention
+ _WorkbufSize = 2048 // in bytes; larger values result in less contention
)
// Garbage collector work pool abstraction.
//
// This implements a producer/consumer model for pointers to grey
-// objects. A grey object is one that is marked and on a work
-// queue. A black object is marked and not on a work queue.
+// objects. A grey object is one that is marked and on a work
+// queue. A black object is marked and not on a work queue.
//
// Write barriers, root discovery, stack scanning, and object scanning
-// produce pointers to grey objects. Scanning consumes pointers to
+// produce pointers to grey objects. Scanning consumes pointers to
// grey objects, thus blackening them, and then scans them,
// potentially producing new pointers to grey objects.
//
// A gcWork can be used on the stack as follows:
//
-// var gcw gcWork
-// disable preemption
-// .. call gcw.put() to produce and gcw.get() to consume ..
-// gcw.dispose()
-// enable preemption
-//
-// Or from the per-P gcWork cache:
-//
// (preemption must be disabled)
// gcw := &getg().m.p.ptr().gcw
// .. call gcw.put() to produce and gcw.get() to consume ..
}
func (w *gcWork) init() {
- w.wbuf1 = wbufptrOf(getempty(101))
- wbuf2 := trygetfull(102)
+ w.wbuf1 = wbufptrOf(getempty())
+ wbuf2 := trygetfull()
if wbuf2 == nil {
- wbuf2 = getempty(103)
+ wbuf2 = getempty()
}
w.wbuf2 = wbufptrOf(wbuf2)
}
// put enqueues a pointer for the garbage collector to trace.
// obj must point to the beginning of a heap object.
//go:nowritebarrier
-func (ww *gcWork) put(obj uintptr) {
- w := (*gcWork)(noescape(unsafe.Pointer(ww))) // TODO: remove when escape analysis is fixed
-
+func (w *gcWork) put(obj uintptr) {
wbuf := w.wbuf1.ptr()
if wbuf == nil {
w.init()
w.wbuf1, w.wbuf2 = w.wbuf2, w.wbuf1
wbuf = w.wbuf1.ptr()
if wbuf.nobj == len(wbuf.obj) {
- putfull(wbuf, 132)
- wbuf = getempty(133)
+ putfull(wbuf)
+ wbuf = getempty()
w.wbuf1 = wbufptrOf(wbuf)
}
}
// queue, tryGet returns 0. Note that there may still be pointers in
// other gcWork instances or other caches.
//go:nowritebarrier
-func (ww *gcWork) tryGet() uintptr {
- w := (*gcWork)(noescape(unsafe.Pointer(ww))) // TODO: remove when escape analysis is fixed
-
+func (w *gcWork) tryGet() uintptr {
wbuf := w.wbuf1.ptr()
if wbuf == nil {
w.init()
wbuf = w.wbuf1.ptr()
if wbuf.nobj == 0 {
owbuf := wbuf
- wbuf = trygetfull(167)
+ wbuf = trygetfull()
if wbuf == nil {
return 0
}
- putempty(owbuf, 166)
+ putempty(owbuf)
w.wbuf1 = wbufptrOf(wbuf)
}
}
// if necessary to ensure all pointers from all queues and caches have
// been retrieved. get returns 0 if there are no pointers remaining.
//go:nowritebarrier
-func (ww *gcWork) get() uintptr {
- w := (*gcWork)(noescape(unsafe.Pointer(ww))) // TODO: remove when escape analysis is fixed
-
+func (w *gcWork) get() uintptr {
wbuf := w.wbuf1.ptr()
if wbuf == nil {
w.init()
wbuf = w.wbuf1.ptr()
if wbuf.nobj == 0 {
owbuf := wbuf
- wbuf = getfull(185)
+ wbuf = getfull()
if wbuf == nil {
return 0
}
- putempty(owbuf, 184)
+ putempty(owbuf)
w.wbuf1 = wbufptrOf(wbuf)
}
}
func (w *gcWork) dispose() {
if wbuf := w.wbuf1.ptr(); wbuf != nil {
if wbuf.nobj == 0 {
- putempty(wbuf, 212)
+ putempty(wbuf)
} else {
- putfull(wbuf, 214)
+ putfull(wbuf)
}
w.wbuf1 = 0
wbuf = w.wbuf2.ptr()
if wbuf.nobj == 0 {
- putempty(wbuf, 218)
+ putempty(wbuf)
} else {
- putfull(wbuf, 220)
+ putfull(wbuf)
}
w.wbuf2 = 0
}
return
}
if wbuf := w.wbuf2.ptr(); wbuf.nobj != 0 {
- putfull(wbuf, 246)
- w.wbuf2 = wbufptrOf(getempty(247))
+ putfull(wbuf)
+ w.wbuf2 = wbufptrOf(getempty())
} else if wbuf := w.wbuf1.ptr(); wbuf.nobj > 4 {
w.wbuf1 = wbufptrOf(handoff(wbuf))
}
// avoid contending on the global work buffer lists.
type workbufhdr struct {
- node lfnode // must be first
- nobj int
- inuse bool // This workbuf is in use by some gorotuine and is not on the work.empty/full queues.
- log [4]int // line numbers forming a history of ownership changes to workbuf
+ node lfnode // must be first
+ nobj int
}
type workbuf struct {
// workbufs.
// If the GC asks for some work these are the only routines that
// make wbufs available to the GC.
-// Each of the gets and puts also take an distinct integer that is used
-// to record a brief history of changes to ownership of the workbuf.
-// The convention is to use a unique line number but any encoding
-// is permissible. For example if you want to pass in 2 bits of information
-// you could simple add lineno1*100000+lineno2.
-
-// logget records the past few values of entry to aid in debugging.
-// logget checks the buffer b is not currently in use.
-func (b *workbuf) logget(entry int) {
- if !_Debugwbufs {
- return
- }
- if b.inuse {
- println("runtime: logget fails log entry=", entry,
- "b.log[0]=", b.log[0], "b.log[1]=", b.log[1],
- "b.log[2]=", b.log[2], "b.log[3]=", b.log[3])
- throw("logget: get not legal")
- }
- b.inuse = true
- copy(b.log[1:], b.log[:])
- b.log[0] = entry
-}
-
-// logput records the past few values of entry to aid in debugging.
-// logput checks the buffer b is currently in use.
-func (b *workbuf) logput(entry int) {
- if !_Debugwbufs {
- return
- }
- if !b.inuse {
- println("runtime: logput fails log entry=", entry,
- "b.log[0]=", b.log[0], "b.log[1]=", b.log[1],
- "b.log[2]=", b.log[2], "b.log[3]=", b.log[3])
- throw("logput: put not legal")
- }
- b.inuse = false
- copy(b.log[1:], b.log[:])
- b.log[0] = entry
-}
func (b *workbuf) checknonempty() {
if b.nobj == 0 {
- println("runtime: nonempty check fails",
- "b.log[0]=", b.log[0], "b.log[1]=", b.log[1],
- "b.log[2]=", b.log[2], "b.log[3]=", b.log[3])
throw("workbuf is empty")
}
}
func (b *workbuf) checkempty() {
if b.nobj != 0 {
- println("runtime: empty check fails",
- "b.log[0]=", b.log[0], "b.log[1]=", b.log[1],
- "b.log[2]=", b.log[2], "b.log[3]=", b.log[3])
throw("workbuf is not empty")
}
}
// getempty pops an empty work buffer off the work.empty list,
// allocating new buffers if none are available.
-// entry is used to record a brief history of ownership.
//go:nowritebarrier
-func getempty(entry int) *workbuf {
+func getempty() *workbuf {
var b *workbuf
if work.empty != 0 {
b = (*workbuf)(lfstackpop(&work.empty))
if b == nil {
b = (*workbuf)(persistentalloc(unsafe.Sizeof(*b), sys.CacheLineSize, &memstats.gc_sys))
}
- b.logget(entry)
return b
}
// putempty puts a workbuf onto the work.empty list.
// Upon entry this go routine owns b. The lfstackpush relinquishes ownership.
//go:nowritebarrier
-func putempty(b *workbuf, entry int) {
+func putempty(b *workbuf) {
b.checkempty()
- b.logput(entry)
lfstackpush(&work.empty, &b.node)
}
// putfull accepts partially full buffers so the GC can avoid competing
// with the mutators for ownership of partially full buffers.
//go:nowritebarrier
-func putfull(b *workbuf, entry int) {
+func putfull(b *workbuf) {
b.checknonempty()
- b.logput(entry)
lfstackpush(&work.full, &b.node)
// We just made more work available. Let the GC controller
// trygetfull tries to get a full or partially empty workbuffer.
// If one is not immediately available return nil
//go:nowritebarrier
-func trygetfull(entry int) *workbuf {
+func trygetfull() *workbuf {
b := (*workbuf)(lfstackpop(&work.full))
if b != nil {
- b.logget(entry)
b.checknonempty()
return b
}
// This is in fact the termination condition for the STW mark
// phase.
//go:nowritebarrier
-func getfull(entry int) *workbuf {
+func getfull() *workbuf {
b := (*workbuf)(lfstackpop(&work.full))
if b != nil {
- b.logget(entry)
b.checknonempty()
return b
}
}
b = (*workbuf)(lfstackpop(&work.full))
if b != nil {
- b.logget(entry)
b.checknonempty()
return b
}
//go:nowritebarrier
func handoff(b *workbuf) *workbuf {
// Make new buffer with half of b's pointers.
- b1 := getempty(915)
+ b1 := getempty()
n := b.nobj / 2
b.nobj -= n
b1.nobj = n
_g_.m.gcstats.nhandoffcnt += uint64(n)
// Put b on full list - let first half of b get stolen.
- putfull(b, 942)
+ putfull(b)
return b1
}
// h_spans is a lookup table to map virtual address page IDs to *mspan.
// For allocated spans, their pages map to the span itself.
-// For free spans, only the lowest and highest pages map to the span itself. Internal
+// For free spans, only the lowest and highest pages map to the span itself. Internal
// pages map to an arbitrary span.
// For pages that have never been allocated, h_spans entries are nil.
var h_spans []*mspan // TODO: make this h.spans once mheap can be defined in Go
// Address is *not* guaranteed to be in map
// and may be anywhere in the span.
// Map entries for the middle of a span are only
-// valid for allocated spans. Free spans may have
+// valid for allocated spans. Free spans may have
// other garbage in their middles, so we have to
// check for that.
func (h *mheap) lookupMaybe(v unsafe.Pointer) *mspan {
}
// Adds the special record s to the list of special records for
-// the object p. All fields of s should be filled in except for
+// the object p. All fields of s should be filled in except for
// offset & next, which this routine will fill in.
// Returns true if the special was successfully added, false otherwise.
// (The add will fail only if a record with the same p and s->kind
ot *ptrtype
}
-// Adds a finalizer to the object p. Returns true if it succeeded.
+// Adds a finalizer to the object p. Returns true if it succeeded.
func addfinalizer(p unsafe.Pointer, f *funcval, nret uintptr, fint *_type, ot *ptrtype) bool {
lock(&mheap_.speciallock)
s := (*specialfinalizer)(mheap_.specialfinalizeralloc.alloc())
}
}
-// Do whatever cleanup needs to be done to deallocate s. It has
+// Do whatever cleanup needs to be done to deallocate s. It has
// already been unlinked from the MSpan specials list.
func freespecial(s *special, p unsafe.Pointer, size uintptr) {
switch s.kind {
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
import "unsafe"
-// mmap calls the mmap system call. It is implemented in assembly.
+// mmap calls the mmap system call. It is implemented in assembly.
func mmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) unsafe.Pointer
var blockprofilerate uint64 // in CPU ticks
// SetBlockProfileRate controls the fraction of goroutine blocking events
-// that are reported in the blocking profile. The profiler aims to sample
+// that are reported in the blocking profile. The profiler aims to sample
// an average of one blocking event per rate nanoseconds spent blocked.
//
// To include every blocking event in the profile, pass rate = 1.
//
// The tools that process the memory profiles assume that the
// profile rate is constant across the lifetime of the program
-// and equal to the current value. Programs that change the
+// and equal to the current value. Programs that change the
// memory profiling rate should do so just once, as early as
// possible in the execution of the program (for example,
// at the beginning of main).
ok = true
i := 0
for mp := first; mp != nil; mp = mp.alllink {
- for s := range mp.createstack {
- p[i].Stack0[s] = uintptr(mp.createstack[s])
- }
+ p[i].Stack0 = mp.createstack
i++
}
}
if typ == nil {
print("tracealloc(", p, ", ", hex(size), ")\n")
} else {
- print("tracealloc(", p, ", ", hex(size), ", ", *typ._string, ")\n")
+ print("tracealloc(", p, ", ", hex(size), ", ", typ._string, ")\n")
}
if gp.m.curg == nil || gp == gp.m.curg {
goroutineheader(gp)
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
const msanenabled = true
// If we are running on the system stack, the C program may have
-// marked part of that stack as uninitialized. We don't instrument
+// marked part of that stack as uninitialized. We don't instrument
// the runtime, but operations like a slice copy can call msanread
-// anyhow for values on the stack. Just ignore msanread when running
-// on the system stack. The other msan functions are fine.
+// anyhow for values on the stack. Just ignore msanread when running
+// on the system stack. The other msan functions are fine.
func msanread(addr unsafe.Pointer, sz uintptr) {
g := getg()
if g == g.m.g0 || g == g.m.gsignal {
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// and chopped up when new objects of the size class are needed.
// That page count is chosen so that chopping up the run of
// pages into objects of the given size wastes at most 12.5% (1.125x)
-// of the memory. It is not necessary that the cutoff here be
+// of the memory. It is not necessary that the cutoff here be
// the same as above.
//
// The two sources of waste multiply, so the worst possible case
package runtime
-// Size classes. Computed and initialized by InitSizes.
+// Size classes. Computed and initialized by InitSizes.
//
// SizeToClass(0 <= n <= MaxSmallSize) returns the size class,
// 1 <= sizeclass < NumSizeClasses, for n.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Size of the trailing by_size array differs between Go and C,
// and all data after by_size is local to runtime, not exported.
-// NumSizeClasses was changed, but we can not change Go struct because of backward compatibility.
+// NumSizeClasses was changed, but we cannot change Go struct because of backward compatibility.
// sizeof_C_MStats is what C thinks about size of Go struct.
var sizeof_C_MStats = unsafe.Offsetof(memstats.by_size) + 61*unsafe.Sizeof(memstats.by_size[0])
updatememstats(nil)
// Size of the trailing by_size array differs between Go and C,
- // NumSizeClasses was changed, but we can not change Go struct because of backward compatibility.
+ // NumSizeClasses was changed, but we cannot change Go struct because of backward compatibility.
memmove(unsafe.Pointer(stats), unsafe.Pointer(&memstats), sizeof_C_MStats)
// Stack numbers are part of the heap numbers, separate those out for user consumption
}
}
-// Atomically increases a given *system* memory stat. We are counting on this
+// Atomically increases a given *system* memory stat. We are counting on this
// stat never overflowing a uintptr, so this function must only be used for
// system memory stats.
//
}
}
-// Atomically decreases a given *system* memory stat. Same comments as
+// Atomically decreases a given *system* memory stat. Same comments as
// mSysStatInc apply.
//go:nosplit
func mSysStatDec(sysStat *uint64, n uintptr) {
func netpollopen(fd uintptr, pd *pollDesc) int32 {
// Arm both EVFILT_READ and EVFILT_WRITE in edge-triggered mode (EV_CLEAR)
- // for the whole fd lifetime. The notifications are automatically unregistered
+ // for the whole fd lifetime. The notifications are automatically unregistered
// when fd is closed.
var ev [2]keventt
*(*uintptr)(unsafe.Pointer(&ev[0].ident)) = fd
//
// The open and arming mechanisms are serialized using the lock
// inside PollDesc. This is required because the netpoll loop runs
-// asynchonously in respect to other Go code and by the time we get
+// asynchronously in respect to other Go code and by the time we get
// to call port_associate to update the association in the loop, the
// file descriptor might have been closed and reopened already. The
// lock allows runtime·netpollupdate to be called synchronously from
lock(&pd.lock)
// We don't register for any specific type of events yet, that's
// netpollarm's job. We merely ensure we call port_associate before
- // asynchonous connect/accept completes, so when we actually want
+ // asynchronous connect/accept completes, so when we actually want
// to do any I/O, the call to port_associate (from netpollarm,
// with the interested event set) will unblock port_getn right away
// because of the I/O readiness notification.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// The file contains tests that can not run under race detector for some reason.
+// The file contains tests that cannot run under race detector for some reason.
// +build !race
package runtime_test
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// The file contains tests that can not run under race detector for some reason.
+// The file contains tests that cannot run under race detector for some reason.
// +build !race
package runtime_test
goenvs_unix()
// Register our thread-creation callback (see sys_darwin_{amd64,386}.s)
- // but only if we're not using cgo. If we are using cgo we need
+ // but only if we're not using cgo. If we are using cgo we need
// to let the C pthread library install its own thread-creation callback.
if !iscgo {
if bsdthread_register() != 0 {
}
// Called to initialize a new m (including the bootstrap m).
-// Called on the new thread, can not allocate memory.
+// Called on the new thread, cannot allocate memory.
func minit() {
// Initialize signal handling.
_g_ := getg()
// Look for a response giving the return value.
// Any call can send this back with an error,
// and some calls only have return values so they
- // send it back on success too. I don't quite see how
+ // send it back on success too. I don't quite see how
// you know it's one of these and not the full response
// format, so just look if the message is right.
c := (*codemsg)(unsafe.Pointer(h))
}
// Called to initialize a new m (including the bootstrap m).
-// Called on the new thread, can not allocate memory.
+// Called on the new thread, cannot allocate memory.
func minit() {
_g_ := getg()
// Initialize signal handling.
// On DragonFly a thread created by pthread_create inherits
- // the signal stack of the creating thread. We always create
+ // the signal stack of the creating thread. We always create
// a new signal stack here, to avoid having two Go threads
- // using the same signal stack. This breaks the case of a
+ // using the same signal stack. This breaks the case of a
// thread created in C that calls sigaltstack and then calls a
// Go function, because we will lose track of the C code's
// sigaltstack, but it's the best we can do.
return 0;
// If there's not at least 16 MB left, we're probably
- // not going to be able to do much. Treat as no limit.
+ // not going to be able to do much. Treat as no limit.
rl.rlim_cur -= used;
if(rl.rlim_cur < (16<<20))
return 0;
}
// Called to initialize a new m (including the bootstrap m).
-// Called on the new thread, can not allocate memory.
+// Called on the new thread, cannot allocate memory.
func minit() {
_g_ := getg()
return 0;
// If there's not at least 16 MB left, we're probably
- // not going to be able to do much. Treat as no limit.
+ // not going to be able to do much. Treat as no limit.
rl.rlim_cur -= used;
if(rl.rlim_cur < (16<<20))
return 0;
// Some Linux kernels have a bug where futex of
// FUTEX_WAIT returns an internal error code
- // as an errno. Libpthread ignores the return value
+ // as an errno. Libpthread ignores the return value
// here, and so can we: as it says a few lines up,
// spurious wakeups are allowed.
if ns < 0 {
}
// Disable signals during clone, so that the new thread starts
- // with signals disabled. It will enable them in minit.
+ // with signals disabled. It will enable them in minit.
var oset sigset
rtsigprocmask(_SIG_SETMASK, &sigset_all, &oset, int32(unsafe.Sizeof(oset)))
ret := clone(cloneFlags, stk, unsafe.Pointer(mp), unsafe.Pointer(mp.g0), unsafe.Pointer(funcPC(mstart)))
func gettid() uint32
// Called to initialize a new m (including the bootstrap m).
-// Called on the new thread, can not allocate memory.
+// Called on the new thread, cannot allocate memory.
func minit() {
// Initialize signal handling.
_g_ := getg()
return 0;
// If there's not at least 16 MB left, we're probably
- // not going to be able to do much. Treat as no limit.
+ // not going to be able to do much. Treat as no limit.
rl.rlim_cur -= used;
if(rl.rlim_cur < (16<<20))
return 0;
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// Called to initialize a new m (including the bootstrap m).
-// Called on the new thread, can not allocate memory.
+// Called on the new thread, cannot allocate memory.
func minit() {
_g_ := getg()
return 0
}
-// This runs on a foreign stack, without an m or a g. No stack split.
+// This runs on a foreign stack, without an m or a g. No stack split.
//go:nosplit
func badsignal2() {
write(2, unsafe.Pointer(&badsignal1[0]), int32(len(badsignal1)))
}
// netbsdMStart is the function call that starts executing a newly
-// created thread. On NetBSD, a new thread inherits the signal stack
-// of the creating thread. That confuses minit, so we remove that
-// signal stack here before calling the regular mstart. It's a bit
+// created thread. On NetBSD, a new thread inherits the signal stack
+// of the creating thread. That confuses minit, so we remove that
+// signal stack here before calling the regular mstart. It's a bit
// baroque to remove a signal stack here only to add one in minit, but
// it's a simple change that keeps NetBSD working like other OS's.
// At this point all signals are blocked, so there is no race.
}
// Called to initialize a new m (including the bootstrap m).
-// Called on the new thread, can not allocate memory.
+// Called on the new thread, cannot allocate memory.
func minit() {
_g_ := getg()
_g_.m.procid = uint64(lwp_self())
// Initialize signal handling.
// On NetBSD a thread created by pthread_create inherits the
- // signal stack of the creating thread. We always create a
+ // signal stack of the creating thread. We always create a
// new signal stack here, to avoid having two Go threads using
- // the same signal stack. This breaks the case of a thread
+ // the same signal stack. This breaks the case of a thread
// created in C that calls sigaltstack and then calls a Go
// function, because we will lose track of the C code's
// sigaltstack, but it's the best we can do.
//
// From OpenBSD's __thrsleep(2) manual:
// "The abort argument, if not NULL, points to an int that will
- // be examined [...] immediately before blocking. If that int
+ // be examined [...] immediately before blocking. If that int
// is non-zero then __thrsleep() will immediately return EINTR
// without blocking."
ret := thrsleep(uintptr(unsafe.Pointer(&_g_.m.waitsemacount)), _CLOCK_MONOTONIC, tsp, 0, &_g_.m.waitsemacount)
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// Called to initialize a new m (including the bootstrap m).
-// Called on the new thread, can not allocate memory.
+// Called on the new thread, cannot allocate memory.
func minit() {
// Mask all SSE floating-point exceptions
// when running on the 64-bit kernel.
var _badsignal = []byte("runtime: signal received on thread not created by Go.\n")
-// This runs on a foreign stack, without an m or a g. No stack split.
+// This runs on a foreign stack, without an m or a g. No stack split.
//go:nosplit
func badsignal2() {
pwrite(2, unsafe.Pointer(&_badsignal[0]), int32(len(_badsignal)), -1)
}
// Called to initialize a new m (including the bootstrap m).
-// Called on the new thread, can not allocate memory.
+// Called on the new thread, cannot allocate memory.
func minit() {
var thandle uintptr
stdcall7(_DuplicateHandle, currentProcess, currentThread, currentProcess, uintptr(unsafe.Pointer(&thandle)), 0, 0, _DUPLICATE_SAME_ACCESS)
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// native_client/src/trusted/service_runtime/include/sys/errno.h
// The errors are mainly copied from Linux.
- _EPERM = 1 /* Operation not permitted */
- _ENOENT = 2 /* No such file or directory */
- _ESRCH = 3 /* No such process */
- _EINTR = 4 /* Interrupted system call */
- _EIO = 5 /* I/O error */
- _ENXIO = 6 /* No such device or address */
- _E2BIG = 7 /* Argument list too long */
- _ENOEXEC = 8 /* Exec format error */
- _EBADF = 9 /* Bad file number */
- _ECHILD = 10 /* No child processes */
- _EAGAIN = 11 /* Try again */
- _ENOMEM = 12 /* Out of memory */
+ _EPERM = 1 /* Operation not permitted */
+ _ENOENT = 2 /* No such file or directory */
+ _ESRCH = 3 /* No such process */
+ _EINTR = 4 /* Interrupted system call */
+ _EIO = 5 /* I/O error */
+ _ENXIO = 6 /* No such device or address */
+ _E2BIG = 7 /* Argument list too long */
+ _ENOEXEC = 8 /* Exec format error */
+ _EBADF = 9 /* Bad file number */
+ _ECHILD = 10 /* No child processes */
+ _EAGAIN = 11 /* Try again */
+ // _ENOMEM is defined in mem_bsd.go for nacl.
+ // _ENOMEM = 12 /* Out of memory */
_EACCES = 13 /* Permission denied */
_EFAULT = 14 /* Bad address */
_EBUSY = 16 /* Device or resource busy */
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// Disable signals during create, so that the new thread starts
- // with signals disabled. It will enable them in minit.
+ // with signals disabled. It will enable them in minit.
sigprocmask(_SIG_SETMASK, &sigset_all, &oset)
ret = pthread_create(&tid, &attr, funcPC(tstart_sysvicall), unsafe.Pointer(mp))
sigprocmask(_SIG_SETMASK, &oset, nil)
}
// Called to initialize a new m (including the bootstrap m).
-// Called on the new thread, can not allocate memory.
+// Called on the new thread, cannot allocate memory.
func minit() {
_g_ := getg()
asmcgocall(unsafe.Pointer(funcPC(miniterrno)), unsafe.Pointer(&libc____errno))
return 0;
// If there's not at least 16 MB left, we're probably
- // not going to be able to do much. Treat as no limit.
+ // not going to be able to do much. Treat as no limit.
rl.rlim_cur -= used;
if(rl.rlim_cur < (16<<20))
return 0;
var sem *semt
_g_ := getg()
- // Call libc's malloc rather than malloc. This will
- // allocate space on the C heap. We can't call malloc
+ // Call libc's malloc rather than malloc. This will
+ // allocate space on the C heap. We can't call malloc
// here because it could cause a deadlock.
_g_.m.libcall.fn = uintptr(unsafe.Pointer(&libc_malloc))
_g_.m.libcall.n = 1
//go:nosplit
func mmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) unsafe.Pointer {
- return unsafe.Pointer(sysvicall6(&libc_mmap, uintptr(addr), uintptr(n), uintptr(prot), uintptr(flags), uintptr(fd), uintptr(off)))
+ p, err := doMmap(uintptr(addr), n, uintptr(prot), uintptr(flags), uintptr(fd), uintptr(off))
+ if p == ^uintptr(0) {
+ return unsafe.Pointer(err)
+ }
+ return unsafe.Pointer(p)
+}
+
+//go:nosplit
+func doMmap(addr, n, prot, flags, fd, off uintptr) (uintptr, uintptr) {
+ var libcall libcall
+ libcall.fn = uintptr(unsafe.Pointer(&libc_mmap))
+ libcall.n = 6
+ libcall.args = uintptr(noescape(unsafe.Pointer(&addr)))
+ asmcgocall(unsafe.Pointer(&asmsysvicall6), unsafe.Pointer(&libcall))
+ return libcall.r1, libcall.err
}
//go:nosplit
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
import "unsafe"
type mOS struct {
+ machport uint32 // return address for mach ipc
waitsema uint32 // semaphore for parking on locks
}
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
_AT_SYSINFO = 32
)
-var _vdso uint32
-
func sysargs(argc int32, argv **byte) {
// skip over argv, envv to get to auxv
n := argc + 1
for i := 0; auxv[i] != _AT_NULL; i += 2 {
switch auxv[i] {
- case _AT_SYSINFO:
- _vdso = auxv[i+1]
-
case _AT_RANDOM:
startupRandomData = (*[16]byte)(unsafe.Pointer(uintptr(auxv[i+1])))[:]
}
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func raiseproc(sig int32) {
}
-// Stubs so tests can link correctly. These should never be called.
+// Stubs so tests can link correctly. These should never be called.
func open(name *byte, mode, perm int32) int32
func closefd(fd int32) int32
func read(fd int32, p unsafe.Pointer, n int32) int32
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
throw("too many writes on closed pipe")
}
-// Stubs so tests can link correctly. These should never be called.
+// Stubs so tests can link correctly. These should never be called.
func open(name *byte, mode, perm int32) int32 {
throw("unimplemented")
return -1
throw("defer on system stack")
}
- // the arguments of fn are in a perilous state. The stack map
- // for deferproc does not describe them. So we can't let garbage
+ // the arguments of fn are in a perilous state. The stack map
+ // for deferproc does not describe them. So we can't let garbage
// collection or stack copying trigger until we've copied them out
- // to somewhere safe. The memmove below does that.
+ // to somewhere safe. The memmove below does that.
// Until the copy completes, we can only call nosplit routines.
sp := getcallersp(unsafe.Pointer(&siz))
argp := uintptr(unsafe.Pointer(&fn)) + unsafe.Sizeof(fn)
// If there is a deferred function, this will call runtime·jmpdefer,
// which will jump to the deferred function such that it appears
// to have been called by the caller of deferreturn at the point
-// just before deferreturn was called. The effect is that deferreturn
+// just before deferreturn was called. The effect is that deferreturn
// is called again and again until there are no more deferred functions.
// Cannot split the stack because we reuse the caller's frame to
// call the deferred function.
jmpdefer(fn, uintptr(unsafe.Pointer(&arg0)))
}
-// Goexit terminates the goroutine that calls it. No other goroutine is affected.
-// Goexit runs all deferred calls before terminating the goroutine. Because Goexit
+// Goexit terminates the goroutine that calls it. No other goroutine is affected.
+// Goexit runs all deferred calls before terminating the goroutine. Because Goexit
// is not panic, however, any recover calls in those deferred functions will return nil.
//
// Calling Goexit from the main goroutine terminates that goroutine
goexit1()
}
-// Print all currently active panics. Used when crashing.
+// Call all Error and String methods before freezing the world.
+// Used when crashing while panicking.
+// This must match types handled by printany.
+func preprintpanics(p *_panic) {
+ for p != nil {
+ switch v := p.arg.(type) {
+ case error:
+ p.arg = v.Error()
+ case stringer:
+ p.arg = v.String()
+ }
+ p = p.link
+ }
+}
+
+// Print all currently active panics. Used when crashing.
func printpanics(p *_panic) {
if p.link != nil {
printpanics(p.link)
d.fn = nil
gp._defer = d.link
- // trigger shrinkage to test stack copy. See stack_test.go:TestStackPanic
+ // trigger shrinkage to test stack copy. See stack_test.go:TestStackPanic
//GC()
pc := d.pc
}
// ran out of deferred calls - old-school panic now
+ // Because it is unsafe to call arbitrary user code after freezing
+ // the world, we call preprintpanics to invoke all necessary Error
+ // and String methods to prepare the panic strings before startpanic.
+ preprintpanics(gp._panic)
startpanic()
printpanics(gp._panic)
dopanic(0) // should not return
var paniclk mutex
// Unwind the stack after a deferred function calls recover
-// after a panic. Then arrange to continue running as though
+// after a panic. Then arrange to continue running as though
// the caller of the deferred function returned normally.
func recovery(gp *g) {
// Info about defer passed in G struct.
+++ /dev/null
-// Copyright 2012 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-// Parallel for algorithm.
-
-package runtime
-
-import (
- "runtime/internal/atomic"
- "runtime/internal/sys"
-)
-
-// A parfor holds state for the parallel for operation.
-type parfor struct {
- body func(*parfor, uint32) // executed for each element
- done uint32 // number of idle threads
- nthr uint32 // total number of threads
- thrseq uint32 // thread id sequencer
- cnt uint32 // iteration space [0, cnt)
- wait bool // if true, wait while all threads finish processing,
- // otherwise parfor may return while other threads are still working
-
- thr []parforthread // thread descriptors
-
- // stats
- nsteal uint64
- nstealcnt uint64
- nprocyield uint64
- nosyield uint64
- nsleep uint64
-}
-
-// A parforthread holds state for a single thread in the parallel for.
-type parforthread struct {
- // the thread's iteration space [32lsb, 32msb)
- pos uint64
- // stats
- nsteal uint64
- nstealcnt uint64
- nprocyield uint64
- nosyield uint64
- nsleep uint64
- pad [sys.CacheLineSize]byte
-}
-
-func parforalloc(nthrmax uint32) *parfor {
- return &parfor{
- thr: make([]parforthread, nthrmax),
- }
-}
-
-// Parforsetup initializes desc for a parallel for operation with nthr
-// threads executing n jobs.
-//
-// On return the nthr threads are each expected to call parfordo(desc)
-// to run the operation. During those calls, for each i in [0, n), one
-// thread will be used invoke body(desc, i).
-// If wait is true, no parfordo will return until all work has been completed.
-// If wait is false, parfordo may return when there is a small amount
-// of work left, under the assumption that another thread has that
-// work well in hand.
-func parforsetup(desc *parfor, nthr, n uint32, wait bool, body func(*parfor, uint32)) {
- if desc == nil || nthr == 0 || nthr > uint32(len(desc.thr)) || body == nil {
- print("desc=", desc, " nthr=", nthr, " count=", n, " body=", body, "\n")
- throw("parfor: invalid args")
- }
-
- desc.body = body
- desc.done = 0
- desc.nthr = nthr
- desc.thrseq = 0
- desc.cnt = n
- desc.wait = wait
- desc.nsteal = 0
- desc.nstealcnt = 0
- desc.nprocyield = 0
- desc.nosyield = 0
- desc.nsleep = 0
-
- for i := range desc.thr {
- begin := uint32(uint64(n) * uint64(i) / uint64(nthr))
- end := uint32(uint64(n) * uint64(i+1) / uint64(nthr))
- desc.thr[i].pos = uint64(begin) | uint64(end)<<32
- }
-}
-
-func parfordo(desc *parfor) {
- // Obtain 0-based thread index.
- tid := atomic.Xadd(&desc.thrseq, 1) - 1
- if tid >= desc.nthr {
- print("tid=", tid, " nthr=", desc.nthr, "\n")
- throw("parfor: invalid tid")
- }
-
- // If single-threaded, just execute the for serially.
- body := desc.body
- if desc.nthr == 1 {
- for i := uint32(0); i < desc.cnt; i++ {
- body(desc, i)
- }
- return
- }
-
- me := &desc.thr[tid]
- mypos := &me.pos
- for {
- for {
- // While there is local work,
- // bump low index and execute the iteration.
- pos := atomic.Xadd64(mypos, 1)
- begin := uint32(pos) - 1
- end := uint32(pos >> 32)
- if begin < end {
- body(desc, begin)
- continue
- }
- break
- }
-
- // Out of work, need to steal something.
- idle := false
- for try := uint32(0); ; try++ {
- // If we don't see any work for long enough,
- // increment the done counter...
- if try > desc.nthr*4 && !idle {
- idle = true
- atomic.Xadd(&desc.done, 1)
- }
-
- // ...if all threads have incremented the counter,
- // we are done.
- extra := uint32(0)
- if !idle {
- extra = 1
- }
- if desc.done+extra == desc.nthr {
- if !idle {
- atomic.Xadd(&desc.done, 1)
- }
- goto exit
- }
-
- // Choose a random victim for stealing.
- var begin, end uint32
- victim := fastrand1() % (desc.nthr - 1)
- if victim >= tid {
- victim++
- }
- victimpos := &desc.thr[victim].pos
- for {
- // See if it has any work.
- pos := atomic.Load64(victimpos)
- begin = uint32(pos)
- end = uint32(pos >> 32)
- if begin+1 >= end {
- end = 0
- begin = end
- break
- }
- if idle {
- atomic.Xadd(&desc.done, -1)
- idle = false
- }
- begin2 := begin + (end-begin)/2
- newpos := uint64(begin) | uint64(begin2)<<32
- if atomic.Cas64(victimpos, pos, newpos) {
- begin = begin2
- break
- }
- }
- if begin < end {
- // Has successfully stolen some work.
- if idle {
- throw("parfor: should not be idle")
- }
- atomic.Store64(mypos, uint64(begin)|uint64(end)<<32)
- me.nsteal++
- me.nstealcnt += uint64(end) - uint64(begin)
- break
- }
-
- // Backoff.
- if try < desc.nthr {
- // nothing
- } else if try < 4*desc.nthr {
- me.nprocyield++
- procyield(20)
- } else if !desc.wait {
- // If a caller asked not to wait for the others, exit now
- // (assume that most work is already done at this point).
- if !idle {
- atomic.Xadd(&desc.done, 1)
- }
- goto exit
- } else if try < 6*desc.nthr {
- me.nosyield++
- osyield()
- } else {
- me.nsleep++
- usleep(1)
- }
- }
- }
-
-exit:
- atomic.Xadd64(&desc.nsteal, int64(me.nsteal))
- atomic.Xadd64(&desc.nstealcnt, int64(me.nstealcnt))
- atomic.Xadd64(&desc.nprocyield, int64(me.nprocyield))
- atomic.Xadd64(&desc.nosyield, int64(me.nosyield))
- atomic.Xadd64(&desc.nsleep, int64(me.nsleep))
- me.nsteal = 0
- me.nstealcnt = 0
- me.nprocyield = 0
- me.nosyield = 0
- me.nsleep = 0
-}
+++ /dev/null
-// Copyright 2012 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-// The race detector does not understand ParFor synchronization.
-// +build !race
-
-package runtime_test
-
-import (
- . "runtime"
- "testing"
-)
-
-// Simple serial sanity test for parallelfor.
-func TestParFor(t *testing.T) {
- const P = 1
- const N = 20
- data := make([]uint64, N)
- for i := uint64(0); i < N; i++ {
- data[i] = i
- }
- desc := NewParFor(P)
- ParForSetup(desc, P, N, true, func(desc *ParFor, i uint32) {
- data[i] = data[i]*data[i] + 1
- })
- ParForDo(desc)
- for i := uint64(0); i < N; i++ {
- if data[i] != i*i+1 {
- t.Fatalf("Wrong element %d: %d", i, data[i])
- }
- }
-}
-
-// Test that nonblocking parallelfor does not block.
-func TestParFor2(t *testing.T) {
- const P = 7
- const N = 1003
- data := make([]uint64, N)
- for i := uint64(0); i < N; i++ {
- data[i] = i
- }
- desc := NewParFor(P)
- ParForSetup(desc, P, N, false, func(desc *ParFor, i uint32) {
- data[i] = data[i]*data[i] + 1
- })
- for p := 0; p < P; p++ {
- ParForDo(desc)
- }
- for i := uint64(0); i < N; i++ {
- if data[i] != i*i+1 {
- t.Fatalf("Wrong element %d: %d", i, data[i])
- }
- }
-}
-
-// Test that iterations are properly distributed.
-func TestParForSetup(t *testing.T) {
- const P = 11
- const N = 101
- desc := NewParFor(P)
- for n := uint32(0); n < N; n++ {
- for p := uint32(1); p <= P; p++ {
- ParForSetup(desc, p, n, true, func(desc *ParFor, i uint32) {})
- sum := uint32(0)
- size0 := uint32(0)
- end0 := uint32(0)
- for i := uint32(0); i < p; i++ {
- begin, end := ParForIters(desc, i)
- size := end - begin
- sum += size
- if i == 0 {
- size0 = size
- if begin != 0 {
- t.Fatalf("incorrect begin: %d (n=%d, p=%d)", begin, n, p)
- }
- } else {
- if size != size0 && size != size0+1 {
- t.Fatalf("incorrect size: %d/%d (n=%d, p=%d)", size, size0, n, p)
- }
- if begin != end0 {
- t.Fatalf("incorrect begin/end: %d/%d (n=%d, p=%d)", begin, end0, n, p)
- }
- }
- end0 = end
- }
- if sum != n {
- t.Fatalf("incorrect sum: %d/%d (p=%d)", sum, n, p)
- }
- }
- }
-}
-
-// Test parallel parallelfor.
-func TestParForParallel(t *testing.T) {
- N := uint64(1e7)
- if testing.Short() {
- N /= 10
- }
- data := make([]uint64, N)
- for i := uint64(0); i < N; i++ {
- data[i] = i
- }
- P := GOMAXPROCS(-1)
- c := make(chan bool, P)
- desc := NewParFor(uint32(P))
- ParForSetup(desc, uint32(P), uint32(N), false, func(desc *ParFor, i uint32) {
- data[i] = data[i]*data[i] + 1
- })
- for p := 1; p < P; p++ {
- go func() {
- ParForDo(desc)
- c <- true
- }()
- }
- ParForDo(desc)
- for p := 1; p < P; p++ {
- <-c
- }
- for i := uint64(0); i < N; i++ {
- if data[i] != i*i+1 {
- t.Fatalf("Wrong element %d: %d", i, data[i])
- }
- }
-
- data, desc = nil, nil
- GC()
-}
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// A Profile's methods can be called from multiple goroutines simultaneously.
//
-// Each Profile has a unique name. A few profiles are predefined:
+// Each Profile has a unique name. A few profiles are predefined:
//
// goroutine - stack traces of all current goroutines
// heap - a sampling of all heap allocations
// all known allocations. This exception helps mainly in programs running
// without garbage collection enabled, usually for debugging purposes.
//
-// The CPU profile is not available as a Profile. It has a special API,
+// The CPU profile is not available as a Profile. It has a special API,
// the StartCPUProfile and StopCPUProfile functions, because it streams
// output to a writer during profiling.
//
// Add adds the current execution stack to the profile, associated with value.
// Add stores value in an internal map, so value must be suitable for use as
// a map key and will not be garbage collected until the corresponding
-// call to Remove. Add panics if the profile already contains a stack for value.
+// call to Remove. Add panics if the profile already contains a stack for value.
//
// The skip parameter has the same meaning as runtime.Caller's skip
-// and controls where the stack trace begins. Passing skip=0 begins the
-// trace in the function calling Add. For example, given this
+// and controls where the stack trace begins. Passing skip=0 begins the
+// trace in the function calling Add. For example, given this
// execution stack:
//
// Add
}
// A countProfile is a set of stack traces to be printed as counts
-// grouped by stack trace. There are multiple implementations:
+// grouped by stack trace. There are multiple implementations:
// all that matters is that we can find out how many traces there are
// and obtain each trace in turn.
type countProfile interface {
// for a single stack trace.
func printStackRecord(w io.Writer, stk []uintptr, allFrames bool) {
show := allFrames
- wasPanic := false
- for i, pc := range stk {
- f := runtime.FuncForPC(pc)
- if f == nil {
+ frames := runtime.CallersFrames(stk)
+ for {
+ frame, more := frames.Next()
+ name := frame.Function
+ if name == "" {
show = true
- fmt.Fprintf(w, "#\t%#x\n", pc)
- wasPanic = false
+ fmt.Fprintf(w, "#\t%#x\n", frame.PC)
} else {
- tracepc := pc
- // Back up to call instruction.
- if i > 0 && pc > f.Entry() && !wasPanic {
- if runtime.GOARCH == "386" || runtime.GOARCH == "amd64" {
- tracepc--
- } else {
- tracepc -= 4 // arm, etc
- }
- }
- file, line := f.FileLine(tracepc)
- name := f.Name()
// Hide runtime.goexit and any runtime functions at the beginning.
// This is useful mainly for allocation traces.
- wasPanic = name == "runtime.panic"
if name == "runtime.goexit" || !show && strings.HasPrefix(name, "runtime.") {
continue
}
show = true
- fmt.Fprintf(w, "#\t%#x\t%s+%#x\t%s:%d\n", pc, name, pc-f.Entry(), file, line)
+ fmt.Fprintf(w, "#\t%#x\t%s+%#x\t%s:%d\n", frame.PC, name, frame.PC-frame.Entry, frame.File, frame.Line)
+ }
+ if !more {
+ break
}
}
if !show {
func writeGoroutineStacks(w io.Writer) error {
// We don't know how big the buffer needs to be to collect
- // all the goroutines. Start with 1 MB and try a few times, doubling each time.
+ // all the goroutines. Start with 1 MB and try a few times, doubling each time.
// Give up and use a truncated trace if 64 MB is not enough.
buf := make([]byte, 1<<20)
for i := 0; ; i++ {
// Go code built with -buildmode=c-archive or -buildmode=c-shared.
// StartCPUProfile relies on the SIGPROF signal, but that signal will
// be delivered to the main program's SIGPROF signal handler (if any)
-// not to the one used by Go. To make it work, call os/signal.Notify
+// not to the one used by Go. To make it work, call os/signal.Notify
// for syscall.SIGPROF, but note that doing so may break any profiling
// being done by the main program.
func StartCPUProfile(w io.Writer) error {
// 100 Hz is a reasonable choice: it is frequent enough to
// produce useful data, rare enough not to bog down the
// system, and a nice round number to make it easy to
- // convert sample counts to seconds. Instead of requiring
+ // convert sample counts to seconds. Instead of requiring
// each client to specify the frequency, we hard code it.
const hz = 100
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
})
// Lock the main goroutine onto this, the main OS thread,
- // during initialization. Most programs won't care, but a few
+ // during initialization. Most programs won't care, but a few
// do require certain calls to be made by the main thread.
// Those can arrange for main.main to run in the main thread
// by calling runtime.LockOSThread during initialization
//go:nosplit
-// Gosched yields the processor, allowing other goroutines to run. It does not
+// Gosched yields the processor, allowing other goroutines to run. It does not
// suspend the current goroutine, so execution resumes automatically.
func Gosched() {
mcall(gosched_m)
sched.maxmcount = 10000
- // Cache the framepointer experiment. This affects stack unwinding.
+ // Cache the framepointer experiment. This affects stack unwinding.
framepointer_enabled = haveexperiment("framepointer")
tracebackinit()
}
if buildVersion == "" {
- // Condition should never trigger. This code just serves
+ // Condition should never trigger. This code just serves
// to ensure runtime·buildVersion is kept in the resulting binary.
buildVersion = "unknown"
}
if newval == oldval&^_Gscan {
success = atomic.Cas(&gp.atomicstatus, oldval, newval)
}
- case _Gscanenqueue:
- if newval == _Gwaiting {
- success = atomic.Cas(&gp.atomicstatus, oldval, newval)
- }
}
if !success {
print("runtime: casfrom_Gscanstatus failed gp=", gp, ", oldval=", hex(oldval), ", newval=", hex(newval), "\n")
func castogscanstatus(gp *g, oldval, newval uint32) bool {
switch oldval {
case _Grunnable,
+ _Grunning,
_Gwaiting,
_Gsyscall:
if newval == oldval|_Gscan {
return atomic.Cas(&gp.atomicstatus, oldval, newval)
}
- case _Grunning:
- if newval == _Gscanrunning || newval == _Gscanenqueue {
- return atomic.Cas(&gp.atomicstatus, oldval, newval)
- }
}
print("runtime: castogscanstatus oldval=", hex(oldval), " newval=", hex(newval), "\n")
throw("castogscanstatus")
_Gscanwaiting,
_Gscansyscall:
casfrom_Gscanstatus(gp, s, s&^_Gscan)
-
- // Scan is now completed.
- // Goroutine now needs to be made runnable.
- // We put it on the global run queue; ready blocks on the global scheduler lock.
- case _Gscanenqueue:
- casfrom_Gscanstatus(gp, _Gscanenqueue, _Gwaiting)
- if gp != getg().m.curg {
- throw("processing Gscanenqueue on wrong m")
- }
- dropg()
- ready(gp, 0)
}
}
// in the hope that it will be available next time.
// It would have been even better to start it before the collection,
// but doing so requires allocating memory, so it's tricky to
- // coordinate. This lazy approach works out in practice:
+ // coordinate. This lazy approach works out in practice:
// we don't mind if the first couple gc rounds don't have quite
// the maximum number of procs.
newm(mhelpgc, nil)
atomic.Storeuintptr(&extram, uintptr(unsafe.Pointer(mp)))
}
-// Create a new m. It will start off with a call to fn, or else the scheduler.
+// Create a new m. It will start off with a call to fn, or else the scheduler.
// fn needs to be static and not a heap allocated closure.
// May run with m.p==nil, so write barriers are not allowed.
//go:nowritebarrier
func dropg() {
_g_ := getg()
- if _g_.m.lockedg == nil {
- _g_.m.curg.m = nil
- _g_.m.curg = nil
- }
+ _g_.m.curg.m = nil
+ _g_.m.curg = nil
}
func parkunlock_c(gp *g, lock unsafe.Pointer) bool {
// The goroutine g exited its system call.
// Arrange for it to run on a cpu again.
// This is called only from the go syscall library, not
-// from the low-level system calls used by the
+// from the low-level system calls used by the runtime.
//go:nosplit
func exitsyscall(dummy int32) {
_g_ := getg()
// Create a new g running fn with narg bytes of arguments starting
// at argp and returning nret bytes of results. callerpc is the
-// address of the go statement that created this. The new g is put
+// address of the go statement that created this. The new g is put
// on the queue of g's waiting to run.
func newproc1(fn *funcval, argp *uint8, narg int32, nret int32, callerpc uintptr) *g {
_g_ := getg()
_p_.gfree = gp.schedlink.ptr()
_p_.gfreecnt--
if gp.stack.lo == 0 {
- // Stack was deallocated in gfput. Allocate a new one.
+ // Stack was deallocated in gfput. Allocate a new one.
systemstack(func() {
gp.stack, gp.stkbar = stackalloc(_FixedStack)
})
_g_.m.locks--
}
-// Change number of processors. The world is stopped, sched is locked.
+// Change number of processors. The world is stopped, sched is locked.
// gcworkbufs are not being modified by either the GC or
// the write barrier code.
// Returns list of Ps with local work, they need to be scheduled by the caller.
// The check is based on number of running M's, if 0 -> deadlock.
func checkdead() {
// For -buildmode=c-shared or -buildmode=c-archive it's OK if
- // there are no running goroutines. The calling program is
+ // there are no running goroutines. The calling program is
// assumed to be running.
if islibrary || isarchive {
return
}
// Tell all goroutines that they have been preempted and they should stop.
-// This function is purely best-effort. It can fail to inform a goroutine if a
+// This function is purely best-effort. It can fail to inform a goroutine if a
// processor just started running it.
// No locks need to be held.
// Returns true if preemption request was issued to at least one goroutine.
}
// Tell the goroutine running on processor P to stop.
-// This function is purely best-effort. It can incorrectly fail to inform the
-// goroutine. It can send inform the wrong goroutine. Even if it informs the
+// This function is purely best-effort. It can incorrectly fail to inform the
+// goroutine. It can inform the wrong goroutine. Even if it informs the
// correct goroutine, that goroutine might ignore the request if it is
// simultaneously executing newstack.
// No lock needs to be held.
if runqputslow(_p_, gp, h, t) {
return
}
- // the queue is not full, now the put above must suceed
+ // the queue is not full, now the put above must succeed
goto retry
}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
PASS
Found 1 data race\(s\)
FAIL`},
+
+ {"slicebytetostring_pc", "run", "atexit_sleep_ms=0", `
+package main
+func main() {
+ done := make(chan string)
+ data := make([]byte, 10)
+ go func() {
+ done <- string(data)
+ }()
+ data[0] = 1
+ <-done
+}
+`, `
+ runtime\.slicebytetostring\(\)
+ .*/runtime/string\.go:.*
+ main\.main\.func1\(\)
+ .*/main.go:7`},
}
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
ret:
RET
+// func runtime·racefuncenterfp(fp uintptr)
+// Called from instrumented code.
+// Like racefuncenter but passes FP, not PC.
+TEXT runtime·racefuncenterfp(SB), NOSPLIT, $0-8
+ MOVQ fp+0(FP), R11
+ MOVQ -8(R11), R11
+ JMP racefuncenter<>(SB)
+
// func runtime·racefuncenter(pc uintptr)
// Called from instrumented code.
TEXT runtime·racefuncenter(SB), NOSPLIT, $0-8
+ MOVQ callpc+0(FP), R11
+ JMP racefuncenter<>(SB)
+
+// Common code for racefuncenter/racefuncenterfp
+// R11 = caller's return address
+TEXT racefuncenter<>(SB), NOSPLIT, $0-0
MOVQ DX, R15 // save function entry context (for closures)
get_tls(R12)
MOVQ g(R12), R14
MOVQ g_racectx(R14), RARG0 // goroutine context
- MOVQ callpc+0(FP), RARG1
+ MOVQ R11, RARG1
// void __tsan_func_enter(ThreadState *thr, void *pc);
MOVQ $__tsan_func_enter(SB), AX
// racecall<> preserves R15
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
TEXT main(SB),NOSPLIT,$0
JMP runtime·rt0_go(SB)
-
-TEXT _fallback_vdso(SB),NOSPLIT,$0
- INT $0x80
- RET
-
-DATA runtime·_vdso(SB)/4, $_fallback_vdso(SB)
-GLOBL runtime·_vdso(SB), NOPTR, $4
-
// When building with -buildmode=c-shared, this symbol is called when the shared
// library is loaded.
TEXT _rt0_arm_linux_lib(SB),NOSPLIT,$32
- // Preserve callee-save registers. Raspberry Pi's dlopen(), for example,
+ // Preserve callee-save registers. Raspberry Pi's dlopen(), for example,
// actually cares that R11 is preserved.
MOVW R4, 12(R13)
MOVW R5, 16(R13)
"-ex", "echo END\n",
"-ex", "echo BEGIN print strvar\n",
"-ex", "print strvar",
- "-ex", "echo END\n",
- "-ex", "echo BEGIN print ptrvar\n",
- "-ex", "print ptrvar",
"-ex", "echo END\n"}
// without framepointer, gdb cannot backtrace our non-standard
t.Fatalf("print strvar failed: %s", bl)
}
- if bl := blocks["print ptrvar"]; !strVarRe.MatchString(bl) {
- t.Fatalf("print ptrvar failed: %s", bl)
- }
-
btGoroutineRe := regexp.MustCompile(`^#0\s+runtime.+at`)
if bl := blocks["goroutine 2 bt"]; canBackTrace && !btGoroutineRe.MatchString(bl) {
t.Fatalf("goroutine 2 bt failed: %s", bl)
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package runtime_test
+
+import (
+ "debug/elf"
+ "debug/macho"
+ "encoding/binary"
+ "internal/testenv"
+ "io"
+ "io/ioutil"
+ "os"
+ "os/exec"
+ "path/filepath"
+ "runtime"
+ "strings"
+ "testing"
+)
+
+var lldbPath string
+
+func checkLldbPython(t *testing.T) {
+ cmd := exec.Command("lldb", "-P")
+ out, err := cmd.CombinedOutput()
+ if err != nil {
+ t.Skipf("skipping due to issue running lldb: %v\n%s", err, out)
+ }
+ lldbPath = strings.TrimSpace(string(out))
+
+ cmd = exec.Command("/usr/bin/python2.7", "-c", "import sys;sys.path.append(sys.argv[1]);import lldb; print('go lldb python support')", lldbPath)
+ out, err = cmd.CombinedOutput()
+
+ if err != nil {
+ t.Skipf("skipping due to issue running python: %v\n%s", err, out)
+ }
+ if string(out) != "go lldb python support\n" {
+ t.Skipf("skipping due to lack of python lldb support: %s", out)
+ }
+
+ if runtime.GOOS == "darwin" {
+ // Try to see if we have debugging permissions.
+ cmd = exec.Command("/usr/sbin/DevToolsSecurity", "-status")
+ out, err = cmd.CombinedOutput()
+ if err != nil {
+ t.Skipf("DevToolsSecurity failed: %v", err)
+ } else if !strings.Contains(string(out), "enabled") {
+ t.Skip(string(out))
+ }
+ cmd = exec.Command("/usr/bin/groups")
+ out, err = cmd.CombinedOutput()
+ if err != nil {
+ t.Skipf("groups failed: %v", err)
+ } else if !strings.Contains(string(out), "_developer") {
+ t.Skip("Not in _developer group")
+ }
+ }
+}
+
+const lldbHelloSource = `
+package main
+import "fmt"
+func main() {
+ mapvar := make(map[string]string,5)
+ mapvar["abc"] = "def"
+ mapvar["ghi"] = "jkl"
+ intvar := 42
+ ptrvar := &intvar
+ fmt.Println("hi") // line 10
+ _ = ptrvar
+}
+`
+
+const lldbScriptSource = `
+import sys
+sys.path.append(sys.argv[1])
+import lldb
+import os
+
+TIMEOUT_SECS = 5
+
+debugger = lldb.SBDebugger.Create()
+debugger.SetAsync(True)
+target = debugger.CreateTargetWithFileAndArch("a.exe", None)
+if target:
+ print "Created target"
+ main_bp = target.BreakpointCreateByLocation("main.go", 10)
+ if main_bp:
+ print "Created breakpoint"
+ process = target.LaunchSimple(None, None, os.getcwd())
+ if process:
+ print "Process launched"
+ listener = debugger.GetListener()
+ process.broadcaster.AddListener(listener, lldb.SBProcess.eBroadcastBitStateChanged)
+ while True:
+ event = lldb.SBEvent()
+ if listener.WaitForEvent(TIMEOUT_SECS, event):
+ if lldb.SBProcess.GetRestartedFromEvent(event):
+ continue
+ state = process.GetState()
+ if state in [lldb.eStateUnloaded, lldb.eStateLaunching, lldb.eStateRunning]:
+ continue
+ else:
+ print "Timeout launching"
+ break
+ if state == lldb.eStateStopped:
+ for t in process.threads:
+ if t.GetStopReason() == lldb.eStopReasonBreakpoint:
+ print "Hit breakpoint"
+ frame = t.GetFrameAtIndex(0)
+ if frame:
+ if frame.line_entry:
+ print "Stopped at %s:%d" % (frame.line_entry.file.basename, frame.line_entry.line)
+ if frame.function:
+ print "Stopped in %s" % (frame.function.name,)
+ var = frame.FindVariable('intvar')
+ if var:
+ print "intvar = %s" % (var.GetValue(),)
+ else:
+ print "no intvar"
+ else:
+ print "Process state", state
+ process.Destroy()
+else:
+ print "Failed to create target a.exe"
+
+lldb.SBDebugger.Destroy(debugger)
+sys.exit()
+`
+
+const expectedLldbOutput = `Created target
+Created breakpoint
+Process launched
+Hit breakpoint
+Stopped at main.go:10
+Stopped in main.main
+intvar = 42
+`
+
+func TestLldbPython(t *testing.T) {
+ testenv.MustHaveGoBuild(t)
+ if final := os.Getenv("GOROOT_FINAL"); final != "" && runtime.GOROOT() != final {
+		t.Skip("lldb test can fail with GOROOT_FINAL pending")
+ }
+
+ checkLldbPython(t)
+
+ dir, err := ioutil.TempDir("", "go-build")
+ if err != nil {
+ t.Fatalf("failed to create temp directory: %v", err)
+ }
+ defer os.RemoveAll(dir)
+
+ src := filepath.Join(dir, "main.go")
+ err = ioutil.WriteFile(src, []byte(lldbHelloSource), 0644)
+ if err != nil {
+ t.Fatalf("failed to create file: %v", err)
+ }
+
+ cmd := exec.Command("go", "build", "-gcflags", "-N -l", "-o", "a.exe")
+ cmd.Dir = dir
+ out, err := cmd.CombinedOutput()
+ if err != nil {
+ t.Fatalf("building source %v\n%s", err, out)
+ }
+
+ src = filepath.Join(dir, "script.py")
+ err = ioutil.WriteFile(src, []byte(lldbScriptSource), 0755)
+ if err != nil {
+ t.Fatalf("failed to create script: %v", err)
+ }
+
+ cmd = exec.Command("/usr/bin/python2.7", "script.py", lldbPath)
+ cmd.Dir = dir
+ got, _ := cmd.CombinedOutput()
+
+ if string(got) != expectedLldbOutput {
+ if strings.Contains(string(got), "Timeout launching") {
+ t.Skip("Timeout launching")
+ }
+ t.Fatalf("Unexpected lldb output:\n%s", got)
+ }
+}
+
+// Check that aranges are valid even when lldb isn't installed.
+func TestDwarfAranges(t *testing.T) {
+ testenv.MustHaveGoBuild(t)
+ dir, err := ioutil.TempDir("", "go-build")
+ if err != nil {
+ t.Fatalf("failed to create temp directory: %v", err)
+ }
+ defer os.RemoveAll(dir)
+
+ src := filepath.Join(dir, "main.go")
+ err = ioutil.WriteFile(src, []byte(lldbHelloSource), 0644)
+ if err != nil {
+ t.Fatalf("failed to create file: %v", err)
+ }
+
+ cmd := exec.Command("go", "build", "-o", "a.exe")
+ cmd.Dir = dir
+ out, err := cmd.CombinedOutput()
+ if err != nil {
+ t.Fatalf("building source %v\n%s", err, out)
+ }
+
+ filename := filepath.Join(dir, "a.exe")
+ if f, err := elf.Open(filename); err == nil {
+ sect := f.Section(".debug_aranges")
+ if sect == nil {
+ t.Fatal("Missing aranges section")
+ }
+ verifyAranges(t, f.ByteOrder, sect.Open())
+ } else if f, err := macho.Open(filename); err == nil {
+ sect := f.Section("__debug_aranges")
+ if sect == nil {
+ t.Fatal("Missing aranges section")
+ }
+ verifyAranges(t, f.ByteOrder, sect.Open())
+ } else {
+ t.Skip("Not an elf or macho binary.")
+ }
+}
+
+func verifyAranges(t *testing.T, byteorder binary.ByteOrder, data io.ReadSeeker) {
+ var header struct {
+ UnitLength uint32 // does not include the UnitLength field
+ Version uint16
+ Offset uint32
+ AddressSize uint8
+ SegmentSize uint8
+ }
+ for {
+ offset, err := data.Seek(0, 1)
+ if err != nil {
+ t.Fatalf("Seek error: %v", err)
+ }
+ if err = binary.Read(data, byteorder, &header); err == io.EOF {
+ return
+ } else if err != nil {
+ t.Fatalf("Error reading arange header: %v", err)
+ }
+ tupleSize := int64(header.SegmentSize) + 2*int64(header.AddressSize)
+ lastTupleOffset := offset + int64(header.UnitLength) + 4 - tupleSize
+ if lastTupleOffset%tupleSize != 0 {
+ t.Fatalf("Invalid arange length %d, (addr %d, seg %d)", header.UnitLength, header.AddressSize, header.SegmentSize)
+ }
+ if _, err = data.Seek(lastTupleOffset, 0); err != nil {
+ t.Fatalf("Seek error: %v", err)
+ }
+ buf := make([]byte, tupleSize)
+ if n, err := data.Read(buf); err != nil || int64(n) < tupleSize {
+ t.Fatalf("Read error: %v", err)
+ }
+ for _, val := range buf {
+ if val != 0 {
+ t.Fatalf("Invalid terminator")
+ }
+ }
+ }
+}
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
argv **byte
)
-// nosplit for use in linux/386 startup linux_setup_vdso
+// nosplit for use in linux startup sysargs
//go:nosplit
func argv_index(argv **byte, i int32) *byte {
return *(**byte)(add(unsafe.Pointer(argv), uintptr(i)*sys.PtrSize))
func goenvs_unix() {
// TODO(austin): ppc64 in dynamic linking mode doesn't
- // guarantee env[] will immediately follow argv. Might cause
+ // guarantee env[] will immediately follow argv. Might cause
// problems.
n := int32(0)
for argv_index(argv, argc+1+n) != nil {
_Gwaiting // 4
_Gmoribund_unused // 5 currently unused, but hardcoded in gdb scripts
_Gdead // 6
- _Genqueue // 7 Only the Gscanenqueue is used.
+ _Genqueue_unused // 7 currently unused
_Gcopystack // 8 in this state when newstack is moving the stack
// the following encode that the GC is scanning the stack and what to do when it is done
_Gscan = 0x1000 // atomicstatus&~Gscan = the non-scan state,
_Gscanwaiting = _Gscan + _Gwaiting // 0x1004 When scanning completes make it Gwaiting
// _Gscanmoribund_unused, // not possible
// _Gscandead, // not possible
- _Gscanenqueue = _Gscan + _Genqueue // When scanning completes make it Grunnable and put on runqueue
+ // _Gscanenqueue_unused // not possible
)
const (
park note
alllink *m // on allm
schedlink muintptr
- machport uint32 // return address for mach ipc (os x)
mcache *mcache
lockedg *g
createstack [32]uintptr // stack that created this thread.
_Structrnd = sys.RegSize
)
-// startup_random_data holds random bytes initialized at startup. These come from
+// startup_random_data holds random bytes initialized at startup. These come from
// the ELF AT_RANDOM auxiliary vector (vdso_linux_amd64.go or os_linux_386.go).
var startupRandomData []byte
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
. "runtime"
"syscall"
"testing"
+ "unsafe"
)
var pid, tid int
t.Fatalf("pid=%d but tid=%d", pid, tid)
}
}
+
+// Test that error values are negative. Use address 1 (a misaligned
+// pointer) to get -EINVAL.
+func TestMincoreErrorSign(t *testing.T) {
+ var dst byte
+ v := Mincore(unsafe.Pointer(uintptr(1)), 1, &dst)
+
+ const EINVAL = 0x16
+ if v != -EINVAL {
+ t.Errorf("mincore = %v, want %v", v, -EINVAL)
+ }
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build darwin dragonfly freebsd linux nacl netbsd openbsd solaris
+
+package runtime_test
+
+import (
+ "runtime"
+ "runtime/internal/sys"
+ "testing"
+)
+
+// Test that the error value returned by mmap is positive, as that is
+// what the code in mem_bsd.go, mem_darwin.go, and mem_linux.go expects.
+// See the uses of ENOMEM in sysMap in those files.
+func TestMmapErrorSign(t *testing.T) {
+ p := runtime.Mmap(nil, ^uintptr(0)&^(sys.PhysPageSize-1), 0, runtime.MAP_ANON|runtime.MAP_PRIVATE, -1, 0)
+
+ // The runtime.mmap function is nosplit, but t.Errorf is not.
+ // Reset the pointer so that we don't get an "invalid stack
+ // pointer" error from t.Errorf if we call it.
+ v := uintptr(p)
+ p = nil
+
+ if v != runtime.ENOMEM {
+ t.Errorf("mmap = %v, want %v", v, runtime.ENOMEM)
+ }
+}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// of the larger addresses must themselves be invalid addresses.
// We might get unlucky and the OS might have mapped one of these
// addresses, but probably not: they're all in the first page, very high
-// adderesses that normally an OS would reserve for itself, or malformed
+// addresses that normally an OS would reserve for itself, or malformed
// addresses. Even so, we might have to remove one or two on different
// systems. We will see.
if GOOS == "windows" || GOOS == "nacl" {
t.Skip("skipping OS that doesn't have open/read/write/close")
}
- // make sure we get the correct error code if open fails. Same for
- // read/write/close on the resulting -1 fd. See issue 10052.
+ // make sure we get the correct error code if open fails. Same for
+ // read/write/close on the resulting -1 fd. See issue 10052.
nonfile := []byte("/notreallyafile")
fd := Open(&nonfile[0], 0, 0)
if fd != -1 {
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// only 0 or 1 cases plus default into simpler constructs.
// The only way we can end up with such small sel.ncase
// values here is for a larger select in which most channels
- // have been nilled out. The general code handles those
+ // have been nilled out. The general code handles those
// cases correctly, and they are rare enough not to bother
// optimizing (and needing to test).
sg.g = gp
// Note: selectdone is adjusted for stack copies in stack1.go:adjustsudogs
sg.selectdone = (*uint32)(noescape(unsafe.Pointer(&done)))
+ // No stack splits between assigning elem and enqueuing
+ // sg on gp.waiting where copystack can find it.
sg.elem = cas.elem
sg.releasetime = 0
if t0 != 0 {
return
}
- // x==y==nil. Either sgp is the only element in the queue,
- // or it has already been removed. Use q.first to disambiguate.
+ // x==y==nil. Either sgp is the only element in the queue,
+ // or it has already been removed. Use q.first to disambiguate.
if q.first == sgp {
q.first = nil
q.last = nil
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Reset the signal handler and raise the signal.
// We are currently running inside a signal handler, so the
- // signal is blocked. We need to unblock it before raising the
+ // signal is blocked. We need to unblock it before raising the
// signal, or the signal we raise will be ignored until we return
- // from the signal handler. We know that the signal was unblocked
+ // from the signal handler. We know that the signal was unblocked
// before entering the handler, or else we would not have received
- // it. That means that we don't have to worry about blocking it
+ // it. That means that we don't have to worry about blocking it
// again.
unblocksig(sig)
setsig(sig, handler, false)
// This is called when we receive a signal when there is no signal stack.
// This can only happen if non-Go code calls sigaltstack to disable the
-// signal stack. This is called via cgocallback to establish a stack.
+// signal stack. This is called via cgocallback to establish a stack.
func noSignalStack(sig uint32) {
println("signal", sig, "received on thread with no signal stack")
throw("non-Go code disabled sigaltstack")
}
// This is called if we receive a signal when there is a signal stack
-// but we are not on it. This can only happen if non-Go code called
+// but we are not on it. This can only happen if non-Go code called
// sigaction without setting the SS_ONSTACK flag.
func sigNotOnStack(sig uint32) {
println("signal", sig, "received but handler not on signal stack")
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func sigfwd(fn uintptr, sig uint32, info *siginfo, ctx unsafe.Pointer)
// Determines if the signal should be handled by Go and if not, forwards the
-// signal to the handler that was installed before Go's. Returns whether the
+// signal to the handler that was installed before Go's. Returns whether the
// signal was forwarded.
// This is called by the signal handler, and the world may be stopped.
//go:nosplit
if c.sigcode() == _SI_USER || flags&_SigPanic == 0 {
return false
}
- // Determine if the signal occurred inside Go code. We test that:
+ // Determine if the signal occurred inside Go code. We test that:
// (1) we were in a goroutine (i.e., m.curg != nil), and
// (2) we weren't in CGO (i.e., m.curg.syscallsp == 0).
g := getg()
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Only push runtime.sigpanic if pc != 0.
// If pc == 0, probably panicked because of a
- // call to a nil func. Not pushing that onto sp will
+ // call to a nil func. Not pushing that onto sp will
// make the trace look like a call to runtime.sigpanic instead.
// (Otherwise the trace will end at runtime.sigpanic and we
// won't get to see who faulted.)
level, _, docrash := gotraceback()
if level > 0 {
goroutineheader(gp)
-
- // On Linux/386, all system calls go through the vdso kernel_vsyscall routine.
- // Normally we don't see those PCs, but during signals we can.
- // If we see a PC in the vsyscall area (it moves around, but near the top of memory),
- // assume we're blocked in the vsyscall routine, which has saved
- // three words on the stack after the initial call saved the caller PC.
- // Pop all four words off SP and use the saved PC.
- // The check of the stack bounds here should suffice to avoid a fault
- // during the actual PC pop.
- // If we do load a bogus PC, not much harm done: we weren't going
- // to get a decent traceback anyway.
- // TODO(rsc): Make this more precise: we should do more checks on the PC,
- // and we should find out whether different versions of the vdso page
- // use different prologues that store different amounts on the stack.
- pc := uintptr(c.eip())
- sp := uintptr(c.esp())
- if GOOS == "linux" && pc >= 0xf4000000 && gp.stack.lo <= sp && sp+16 <= gp.stack.hi {
- // Assume in vsyscall page.
- sp += 16
- pc = *(*uintptr)(unsafe.Pointer(sp - 4))
- print("runtime: unwind vdso kernel_vsyscall: pc=", hex(pc), " sp=", hex(sp), "\n")
- }
-
- tracebacktrap(pc, sp, 0, gp)
+ tracebacktrap(uintptr(c.eip()), uintptr(c.esp()), 0, gp)
if crashing > 0 && gp != _g_.m.curg && _g_.m.curg != nil && readgstatus(_g_.m.curg)&^_Gscan == _Grunning {
// tracebackothers on original m skipped this one; trace it now.
goroutineheader(_g_.m.curg)
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Only push runtime.sigpanic if pc != 0.
// If pc == 0, probably panicked because of a
- // call to a nil func. Not pushing that onto sp will
+ // call to a nil func. Not pushing that onto sp will
// make the trace look like a call to runtime.sigpanic instead.
// (Otherwise the trace will end at runtime.sigpanic and we
// won't get to see who faulted.)
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Only push runtime·sigpanic if r.ip() != 0.
// If r.ip() == 0, probably panicked because of a
- // call to a nil func. Not pushing that onto sp will
+ // call to a nil func. Not pushing that onto sp will
// make the trace look like a call to runtime·sigpanic instead.
// (Otherwise the trace will end at runtime·sigpanic and we
// won't get to see who faulted.)
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// sigsend is called by the signal handler to queue a new signal.
// signal_recv is called by the Go program to receive a newly queued signal.
// Synchronization between sigsend and signal_recv is based on the sig.state
-// variable. It can be in 3 states: sigIdle, sigReceiving and sigSending.
+// variable. It can be in 3 states: sigIdle, sigReceiving and sigSending.
// sigReceiving means that signal_recv is blocked on sig.Note and there are no
// new pending signals.
// sigSending means that sig.mask *may* contain new pending signals,
func signal_enable(s uint32) {
if !sig.inuse {
// The first call to signal_enable is for us
- // to use for initialization. It does not pass
+ // to use for initialization. It does not pass
// signal information in m.
sig.inuse = true // enable reception of signals; cannot disable
noteclear(&sig.note)
return sig.ignored[s/32]&(1<<(s&31)) != 0
}
-// This runs on a foreign stack, without an m or a g. No stack split.
+// This runs on a foreign stack, without an m or a g. No stack split.
//go:nosplit
//go:norace
//go:nowritebarrierrec
func signal_enable(s uint32) {
if !sig.inuse {
// The first call to signal_enable is for us
- // to use for initialization. It does not pass
+ // to use for initialization. It does not pass
// signal information in m.
sig.inuse = true // enable reception of signals; cannot disable
noteclear(&sig.note)
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Software floating point interpretaton of ARM 7500 FP instructions.
+// Software floating point interpretation of ARM 7500 FP instructions.
// The interpretation is not bit compatible with the 7500.
// It uses true little-endian doubles, while the 7500 used mixed-endian.
// The original C code and the long comment below are
// from FreeBSD's /usr/src/lib/msun/src/e_sqrt.c and
-// came with this notice. The go code is a simplified
+// came with this notice. The go code is a simplified
// version of the original C.
//
// ====================================================
_StackGuard = 720*sys.StackGuardMultiplier + _StackSystem
// After a stack split check the SP is allowed to be this
- // many bytes below the stack guard. This saves an instruction
+ // many bytes below the stack guard. This saves an instruction
// in the checking sequence for tiny frames.
_StackSmall = 128
return log2
}
-// Allocates a stack from the free pool. Must be called with
+// Allocates a stack from the free pool. Must be called with
// stackpoolmu held.
func stackpoolalloc(order uint8) gclinkptr {
list := &stackpool[order]
s := list.first
if s == nil {
- // no free stacks. Allocate another span worth.
+ // no free stacks. Allocate another span worth.
s = mheap_.allocStack(_StackCacheSize >> _PageShift)
if s == nil {
throw("out of memory")
return x
}
-// Adds stack x to the free pool. Must be called with stackpoolmu held.
+// Adds stack x to the free pool. Must be called with stackpoolmu held.
func stackpoolfree(x gclinkptr, order uint8) {
s := mheap_.lookup(unsafe.Pointer(x))
if s.state != _MSpanStack {
// Adjustpointer checks whether *vpp is in the old stack described by adjinfo.
// If so, it rewrites *vpp to point into the new stack.
func adjustpointer(adjinfo *adjustinfo, vpp unsafe.Pointer) {
- pp := (*unsafe.Pointer)(vpp)
+ pp := (*uintptr)(vpp)
p := *pp
if stackDebug >= 4 {
- print(" ", pp, ":", p, "\n")
+ print(" ", pp, ":", hex(p), "\n")
}
- if adjinfo.old.lo <= uintptr(p) && uintptr(p) < adjinfo.old.hi {
- *pp = add(p, adjinfo.delta)
+ if adjinfo.old.lo <= p && p < adjinfo.old.hi {
+ *pp = p + adjinfo.delta
if stackDebug >= 3 {
- print(" adjust ptr ", pp, ":", p, " -> ", *pp, "\n")
+ print(" adjust ptr ", pp, ":", hex(p), " -> ", hex(*pp), "\n")
}
}
}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
select {
case <-done:
case <-time.After(20 * time.Second):
- t.Fatal("finalizer did not run")
+ t.Error("finalizer did not run")
+ return
}
}()
wg.Wait()
<-done
})
}()
-
wg.Wait()
}
}
func TestStackPanic(t *testing.T) {
- // Test that stack copying copies panics correctly. This is difficult
+ // Test that stack copying copies panics correctly. This is difficult
// to test because it is very unlikely that the stack will be copied
- // in the middle of gopanic. But it can happen.
+ // in the middle of gopanic. But it can happen.
// To make this test effective, edit panic.go:gopanic and uncomment
// the GC() call just before freedefer(d).
defer func() {
if raceenabled && l > 0 {
racereadrangepc(unsafe.Pointer(&b[0]),
uintptr(l),
- getcallerpc(unsafe.Pointer(&b)),
+ getcallerpc(unsafe.Pointer(&buf)),
funcPC(slicebytetostring))
}
if msanenabled && l > 0 {
func stringtoslicebyte(buf *tmpBuf, s string) []byte {
var b []byte
if buf != nil && len(s) <= len(buf) {
- b = buf[:len(s)]
+ b = buf[:len(s):len(s)]
} else {
b = rawbyteslice(len(s))
}
}
var a []rune
if buf != nil && n <= len(buf) {
- a = buf[:n]
+ a = buf[:n:n]
} else {
a = rawruneslice(n)
}
if raceenabled && len(a) > 0 {
racereadrangepc(unsafe.Pointer(&a[0]),
uintptr(len(a))*unsafe.Sizeof(a[0]),
- getcallerpc(unsafe.Pointer(&a)),
+ getcallerpc(unsafe.Pointer(&buf)),
funcPC(slicerunetostring))
}
if msanenabled && len(a) > 0 {
}
}
+func BenchmarkArrayEqual(b *testing.B) {
+ a1 := [16]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}
+ a2 := [16]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}
+ b.ResetTimer()
+ for i := 0; i < b.N; i++ {
+ if a1 != a2 {
+ b.Fatal("not equal")
+ }
+ }
+}
+
func TestStringW(t *testing.T) {
strings := []string{
"hello",
t.Fatalf("want 0 allocs, got %v", n)
}
}
+
+func TestString2Slice(t *testing.T) {
+ // Make sure we don't return slices that expose
+ // an unzeroed section of stack-allocated temp buf
+ // between len and cap. See issue 14232.
+ s := "foož"
+ b := ([]byte)(s)
+ if cap(b) != 5 {
+ t.Errorf("want cap of 5, got %d", cap(b))
+ }
+ r := ([]rune)(s)
+ if cap(r) != 4 {
+ t.Errorf("want cap of 4, got %d", cap(r))
+ }
+}
// in asm_*.s
//go:noescape
-func memeq(a, b unsafe.Pointer, size uintptr) bool
+func memequal(a, b unsafe.Pointer, size uintptr) bool
// noescape hides a pointer from escape analysis. noescape is
// the identity function but escape analysis doesn't think the
"unsafe"
)
+// Frames may be used to get function/file/line information for a
+// slice of PC values returned by Callers.
+type Frames struct {
+ callers []uintptr
+
+ // If the previous caller in iteration was a panic, then
+ // ci.callers[0] is the address of the faulting instruction
+ // instead of the return address of the call.
+ wasPanic bool
+}
+
+// Frame is the information returned by Frames for each call frame.
+type Frame struct {
+ // Program counter for this frame; multiple frames may have
+ // the same PC value.
+ PC uintptr
+
+ // Func for this frame; may be nil for non-Go code or fully
+ // inlined functions.
+ Func *Func
+
+ // Function name, file name, and line number for this call frame.
+ // May be the empty string or zero if not known.
+ // If Func is not nil then Function == Func.Name().
+ Function string
+ File string
+ Line int
+
+ // Entry point for the function; may be zero if not known.
+ // If Func is not nil then Entry == Func.Entry().
+ Entry uintptr
+}
+
+// CallersFrames takes a slice of PC values returned by Callers and
+// prepares to return function/file/line information.
+// Do not change the slice until you are done with the Frames.
+func CallersFrames(callers []uintptr) *Frames {
+ return &Frames{callers, false}
+}
+
+// Next returns frame information for the next caller.
+// If more is false, there are no more callers (the Frame value is valid).
+func (ci *Frames) Next() (frame Frame, more bool) {
+ if len(ci.callers) == 0 {
+ ci.wasPanic = false
+ return Frame{}, false
+ }
+ pc := ci.callers[0]
+ ci.callers = ci.callers[1:]
+ more = len(ci.callers) > 0
+ f := FuncForPC(pc)
+ if f == nil {
+ ci.wasPanic = false
+ return Frame{}, more
+ }
+
+ entry := f.Entry()
+ xpc := pc
+ if xpc > entry && !ci.wasPanic {
+ xpc--
+ }
+ file, line := f.FileLine(xpc)
+
+ function := f.Name()
+ ci.wasPanic = entry == sigpanicPC
+
+ frame = Frame{
+ PC: xpc,
+ Func: f,
+ Function: function,
+ File: file,
+ Line: line,
+ Entry: entry,
+ }
+
+ return frame, more
+}
+
// NOTE: Func does not expose the actual unexported fields, because we return *Func
// values to users, and we want to keep them from being able to overwrite the data
// with (say) *f = Func{}.
// Each bucket represents 4096 bytes of the text segment.
// Each subbucket represents 256 bytes of the text segment.
// To find a function given a pc, locate the bucket and subbucket for
-// that pc. Add together the idx and subbucket value to obtain a
-// function index. Then scan the functab array starting at that
+// that pc. Add together the idx and subbucket value to obtain a
+// function index. Then scan the functab array starting at that
// index to find the target function.
// This table uses 20 bytes for every 4096 bytes of code, or ~0.5% overhead.
type findfuncbucket struct {
POPL AX
POPAL
- // Now segment is established. Initialize m, g.
+ // Now segment is established. Initialize m, g.
get_tls(BP)
MOVL m_g0(DX), AX
MOVL AX, g(BP)
MOVW R1, 24(R6)
// switch stack and g
- MOVW R6, R13 // sigtramp can not re-entrant, so no need to back up R13.
+ MOVW R6, R13 // sigtramp is not re-entrant, so no need to back up R13.
MOVW R5, g
BL (R0)
MOVD R1, 48(R6)
// switch stack and g
- MOVD R6, RSP // sigtramp can not re-entrant, so no need to back up RSP.
+ MOVD R6, RSP // sigtramp is not re-entrant, so no need to back up RSP.
MOVD R5, g
BL (R0)
MOVQ DI, g(CX)
// On DragonFly, a new thread inherits the signal stack of the
- // creating thread. That confuses minit, so we remove that
- // signal stack here before calling the regular mstart. It's
+ // creating thread. That confuses minit, so we remove that
+ // signal stack here before calling the regular mstart. It's
// a bit baroque to remove a signal stack here only to add one
// in minit, but it's a simple change that keeps DragonFly
- // working like other OS's. At this point all signals are
+ // working like other OS's. At this point all signals are
// blocked, so there is no race.
SUBQ $8, SP
MOVQ $0, 0(SP)
// Most linux systems use glibc's dynamic linker, which puts the
// __kernel_vsyscall vdso helper at 0x10(GS) for easy access from position
-// independent code and setldt in this file does the same in the statically
-// linked case. Android, however, uses bionic's dynamic linker, which does not
-// save the helper anywhere, and so the only way to invoke a syscall from
-// position independent code is boring old int $0x80 (which is also what
-// bionic's syscall wrappers use).
-#ifdef GOOS_android
+// independent code and setldt in runtime does the same in the statically
+// linked case. However, systems that use an alternative libc, such as
+// Android's bionic and musl, do not save the helper anywhere, so the only
+// way to invoke a syscall from position independent code is boring old
+// int $0x80 (which is also what syscall wrappers in bionic/musl use).
+//
+// The benchmarks also showed that using int $0x80 is as fast as calling
+// *%gs:0x10 except on AMD Opteron. See https://golang.org/cl/19833
+// for the benchmark program and raw data.
+//#define INVOKE_SYSCALL CALL 0x10(GS) // non-portable
#define INVOKE_SYSCALL INT $0x80
-#else
-#define INVOKE_SYSCALL CALL 0x10(GS)
-#endif
TEXT runtime·exit(SB),NOSPLIT,$0
MOVL $252, AX // syscall number
POPL AX
POPAL
- // Now segment is established. Initialize m, g.
+ // Now segment is established. Initialize m, g.
get_tls(AX)
MOVL DX, g(AX)
MOVL BX, g_m(DX)
*/
ADDL $0x4, DX // address
MOVL DX, 0(DX)
- // We copy the glibc dynamic linker behaviour of storing the
- // __kernel_vsyscall entry point at 0x10(GS) so that it can be invoked
- // by "CALL 0x10(GS)" in all situations, not only those where the
- // binary is actually dynamically linked.
- MOVL runtime·_vdso(SB), AX
- MOVL AX, 0x10(DX)
#endif
// set up user_desc
// Call the function stored in _cgo_mmap using the GCC calling convention.
// This must be called on the system stack.
-TEXT runtime·callCgoMmap(SB),NOSPLIT,$0
+TEXT runtime·callCgoMmap(SB),NOSPLIT,$16
MOVQ addr+0(FP), DI
MOVQ n+8(FP), SI
MOVL prot+16(FP), DX
MOVL fd+24(FP), R8
MOVL off+28(FP), R9
MOVQ _cgo_mmap(SB), AX
+ MOVQ SP, BX
+ ANDQ $~15, SP // alignment as per amd64 psABI
+ MOVQ BX, 0(SP)
CALL AX
+ MOVQ 0(SP), SP
MOVQ AX, ret+32(FP)
RET
// Call fn
CALL R12
- // It shouldn't return. If it does, exit that thread.
+ // It shouldn't return. If it does, exit that thread.
MOVL $111, DI
MOVL $60, AX
SYSCALL
MOVW $16(R13), R13
BL (R0)
- // It shouldn't return. If it does, exit that thread.
+ // It shouldn't return. If it does, exit that thread.
SUB $16, R13 // restore the stack pointer to avoid memory corruption
MOVW $0, R0
MOVW R0, 4(R13)
MOVD $SYS_mmap, R8
SVC
+ CMN $4095, R0
+ BCC 2(PC)
+ NEG R0,R0
MOVD R0, ret+32(FP)
RET
MOVV dst+16(FP), R6
MOVV $SYS_mincore, R2
SYSCALL
+ SUBVU R2, R0, R2 // caller expects negative errno
MOVW R2, ret+24(FP)
RET
MOVD n+8(FP), R4
MOVD dst+16(FP), R5
SYSCALL $SYS_mincore
+ NEG R3 // caller expects negative errno
MOVW R3, ret+24(FP)
RET
LEAL 24(SP), AX
MOVL AX, 20(SP)
NACL_SYSCALL(SYS_mmap)
+ CMPL AX, $-4095
+ JNA 2(PC)
+ NEGL AX
MOVL AX, ret+24(FP)
RET
POPL AX
POPAL
- // Now segment is established. Initialize m, g.
+ // Now segment is established. Initialize m, g.
get_tls(AX)
MOVL DX, g(AX)
MOVL BX, g_m(DX)
// Call fn
CALL R12
- // It shouldn't return. If it does, exit.
+ // It shouldn't return. If it does, exit.
MOVL $310, AX // sys__lwp_exit
SYSCALL
JMP -3(PC) // keep exiting
POPL AX
POPAL
- // Now segment is established. Initialize m, g.
+ // Now segment is established. Initialize m, g.
get_tls(AX)
MOVL DX, g(AX)
MOVL BX, g_m(DX)
// Call fn
CALL R12
- // It shouldn't return. If it does, exit
+ // It shouldn't return. If it does, exit
MOVQ $0, DI // arg 1 - notdead
MOVL $302, AX // sys___threxit
SYSCALL
// pipe(3c) wrapper that returns fds in AX, DX.
// NOT USING GO CALLING CONVENTION.
TEXT runtime·pipe1(SB),NOSPLIT,$0
- SUBQ $16, SP // 8 bytes will do, but stack has to be 16-byte alligned
+ SUBQ $16, SP // 8 bytes will do, but stack has to be 16-byte aligned
MOVQ SP, DI
LEAQ libc_pipe(SB), AX
CALL AX
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
return
}
if pc[0] == 0xcc {
- // This is a breakpoint inserted by gdb. We could use
- // runtime·findfunc to find the function. But if we
+ // This is a breakpoint inserted by gdb. We could use
+ // runtime·findfunc to find the function. But if we
// do that, then we will continue execution at the
// function entry point, and we will not hit the gdb
- // breakpoint. So for this case we don't change
+ // breakpoint. So for this case we don't change
// buf.pc, so that when we return we will execute
- // the jump instruction and carry on. This means that
+ // the jump instruction and carry on. This means that
// stack unwinding may not work entirely correctly
// (https://golang.org/issue/5723) but the user is
// running under gdb anyhow.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
register("GoexitInPanic", GoexitInPanic)
register("PanicAfterGoexit", PanicAfterGoexit)
register("RecoveredPanicAfterGoexit", RecoveredPanicAfterGoexit)
-
+ register("PanicTraceback", PanicTraceback)
+ register("GoschedInPanic", GoschedInPanic)
+ register("SyscallInPanic", SyscallInPanic)
}
func SimpleDeadlock() {
runtime.Goexit()
}
+type errorThatGosched struct{}
+
+func (errorThatGosched) Error() string {
+ runtime.Gosched()
+ return "errorThatGosched"
+}
+
+func GoschedInPanic() {
+ panic(errorThatGosched{})
+}
+
+type errorThatPrint struct{}
+
+func (errorThatPrint) Error() string {
+ fmt.Println("1")
+ fmt.Println("2")
+ return "3"
+}
+
+func SyscallInPanic() {
+ panic(errorThatPrint{})
+}
+
func PanicAfterGoexit() {
defer func() {
panic("hello")
}()
runtime.Goexit()
}
+
+func PanicTraceback() {
+ pt1()
+}
+
+func pt1() {
+ defer func() {
+ panic("panic pt1")
+ }()
+ pt2()
+}
+
+func pt2() {
+ defer func() {
+ panic("panic pt2")
+ }()
+ panic("hello")
+}
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2016 The Go Authors. All rights reserved.
+// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
void foo1(void) {}
+void foo2(void* p) {}
*/
import "C"
import (
"fmt"
+ "os"
"runtime"
+ "strconv"
"time"
+ "unsafe"
)
func init() {
register("CgoSignalDeadlock", CgoSignalDeadlock)
register("CgoTraceback", CgoTraceback)
+ register("CgoCheckBytes", CgoCheckBytes)
}
func CgoSignalDeadlock() {
runtime.Stack(buf, true)
fmt.Printf("OK\n")
}
+
+func CgoCheckBytes() {
+ try, _ := strconv.Atoi(os.Getenv("GO_CGOCHECKBYTES_TRY"))
+ if try <= 0 {
+ try = 1
+ }
+ b := make([]byte, 1e6*try)
+ start := time.Now()
+ for i := 0; i < 1e3*try; i++ {
+ C.foo2(unsafe.Pointer(&b[0]))
+ if time.Since(start) > time.Second {
+ break
+ }
+ }
+}
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+/*
+char *geterror() {
+ return "cgo error";
+}
+*/
+import "C"
+import (
+ "fmt"
+)
+
+func init() {
+ register("CgoPanicDeadlock", CgoPanicDeadlock)
+}
+
+type cgoError struct{}
+
+func (cgoError) Error() string {
+ fmt.Print("") // necessary to trigger the deadlock
+ return C.GoString(C.geterror())
+}
+
+func CgoPanicDeadlock() {
+ panic(cgoError{})
+}
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2016 The Go Authors. All rights reserved.
+// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2016 The Go Authors. All rights reserved.
+// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2016 The Go Authors. All rights reserved.
+// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// This file defines flags attached to various functions
-// and data objects. The compilers, assemblers, and linker must
+// and data objects. The compilers, assemblers, and linker must
// all agree on these values.
-// Don't profile the marked routine. This flag is deprecated.
+// Don't profile the marked routine. This flag is deprecated.
#define NOPROF 1
-// It is ok for the linker to get multiple of these symbols. It will
+// It is ok for the linker to get multiple of these symbols. It will
// pick one of the duplicates to use.
#define DUPOK 2
// Don't insert stack check preamble.
i int // heap index
// Timer wakes up at when, and then at when+period, ... (period > 0 only)
- // each time calling f(now, arg) in the timer goroutine, so f must be
+ // each time calling f(arg, now) in the timer goroutine, so f must be
// a well-behaved function and not block.
when int64
period int64
goparkunlock(&timers.lock, "timer goroutine (idle)", traceEvGoBlock, 1)
continue
}
- // At least one timer pending. Sleep until then.
+ // At least one timer pending. Sleep until then.
timers.sleeping = true
noteclear(&timers.waitnote)
unlock(&timers.lock)
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
}
-// Generic traceback. Handles runtime stack prints (pcbuf == nil),
+// Generic traceback. Handles runtime stack prints (pcbuf == nil),
// the runtime.Callers function (pcbuf != nil), as well as the garbage
// collector (callback != nil). A little clunky to merge these, but avoids
// duplicating the code and all its subtlety.
if (n > 0 || flags&_TraceTrap == 0) && frame.pc > f.entry && !waspanic {
tracepc--
}
- print(funcname(f), "(")
+ name := funcname(f)
+ if name == "runtime.gopanic" {
+ name = "panic"
+ }
+ print(name, "(")
argp := (*[100]uintptr)(unsafe.Pointer(frame.argp))
for i := uintptr(0); i < frame.arglen/sys.PtrSize; i++ {
if i >= 10 {
level, _, _ := gotraceback()
name := funcname(f)
- // Special case: always show runtime.panic frame, so that we can
+ // Special case: always show runtime.gopanic frame, so that we can
// see where a panic started in the middle of a stack trace.
// See golang.org/issue/5832.
- if name == "runtime.panic" {
+ if name == "runtime.gopanic" {
return true
}
_Gsyscall: "syscall",
_Gwaiting: "waiting",
_Gdead: "dead",
- _Genqueue: "enqueue",
_Gcopystack: "copystack",
}
goroutineheader(gp)
// Note: gp.m == g.m occurs when tracebackothers is
// called from a signal handler initiated during a
- // systemstack call. The original G is still in the
+ // systemstack call. The original G is still in the
// running state, and we want to print its stack.
if gp.m != g.m && readgstatus(gp)&^_Gscan == _Grunning {
print("\tgoroutine running on other thread; stack unavailable\n")
// If the KindGCProg bit is set in kind, gcdata is a GC program.
// Otherwise it is a ptrmask bitmap. See mbitmap.go for details.
gcdata *byte
- _string *string
+ _string string
x *uncommontype
- ptrto *_type
+}
+
+func hasPrefix(s, prefix string) bool {
+ return len(s) >= len(prefix) && s[:len(prefix)] == prefix
+}
+
+func (t *_type) name() string {
+ if hasPrefix(t._string, "map[") {
+ return ""
+ }
+ if hasPrefix(t._string, "struct {") {
+ return ""
+ }
+ if hasPrefix(t._string, "chan ") {
+ return ""
+ }
+ if hasPrefix(t._string, "func(") {
+ return ""
+ }
+ if t._string[0] == '[' || t._string[0] == '*' {
+ return ""
+ }
+ i := len(t._string) - 1
+ for i >= 0 {
+ if t._string[i] == '.' {
+ break
+ }
+ i--
+ }
+ return t._string[i+1:]
}
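The new `name` method above returns the unqualified type name by rejecting composite type strings and trimming everything up to the last dot. A standalone sketch of the same suffix-trimming idea (the helper name `shortName` is hypothetical, not part of the runtime):

```go
package main

import (
	"fmt"
	"strings"
)

// shortName mirrors the logic above: composite type strings (maps,
// structs, channels, funcs, slices, pointers) have no short name;
// otherwise everything up to and including the last '.' is dropped.
func shortName(s string) string {
	for _, p := range []string{"map[", "struct {", "chan ", "func("} {
		if strings.HasPrefix(s, p) {
			return ""
		}
	}
	if s == "" || s[0] == '[' || s[0] == '*' {
		return ""
	}
	i := strings.LastIndexByte(s, '.')
	return s[i+1:] // i is -1 when there is no dot, so the whole string survives
}

func main() {
	fmt.Println(shortName("runtime.hchan")) // hchan
	fmt.Println(shortName("map[string]int") == "")
	fmt.Println(shortName("*runtime.g") == "")
}
```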
type method struct {
}
type uncommontype struct {
- name *string
pkgpath *string
mhdr []method
}
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
def = (*elf64Verdef)(add(unsafe.Pointer(def), uintptr(def.vd_next)))
}
- return -1 // can not match any version
+ return -1 // cannot match any version
}
func vdso_parse_symbols(info *vdso_info, version int32) {
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Inferno's libkern/vlop-386.s
// http://code.google.com/p/inferno-os/source/browse/libkern/vlop-386.s
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Revisions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com). All rights reserved.
// Portions Copyright 2009 The Go Authors. All rights reserved.
//
// Inferno's libkern/vlop-arm.s
// http://code.google.com/p/inferno-os/source/browse/libkern/vlop-arm.s
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Revisions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com). All rights reserved.
// Portions Copyright 2009 The Go Authors. All rights reserved.
//
// Inferno's libkern/vlrt-arm.c
// http://code.google.com/p/inferno-os/source/browse/libkern/vlrt-arm.c
//
-// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
+// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved.
// Revisions Copyright © 2000-2007 Vita Nuova Holdings Limited (www.vitanuova.com). All rights reserved.
// Portions Copyright 2009 The Go Authors. All rights reserved.
//
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Search uses binary search to find and return the smallest index i
// in [0, n) at which f(i) is true, assuming that on the range [0, n),
-// f(i) == true implies f(i+1) == true. That is, Search requires that
+// f(i) == true implies f(i+1) == true. That is, Search requires that
// f is false for some (possibly empty) prefix of the input range [0, n)
// and then true for the (possibly empty) remainder; Search returns
-// the first true index. If there is no such index, Search returns n.
+// the first true index. If there is no such index, Search returns n.
// (Note that the "not found" return value is not -1 as in, for instance,
// strings.Index.)
// Search calls f(i) only for i in the range [0, n).
}
// SearchFloat64s searches for x in a sorted slice of float64s and returns the index
-// as specified by Search. The return value is the index to insert x if x is not
+// as specified by Search. The return value is the index to insert x if x is not
// present (it could be len(a)).
// The slice must be sorted in ascending order.
//
}
// SearchStrings searches for x in a sorted slice of strings and returns the index
-// as specified by Search. The return value is the index to insert x if x is not
+// as specified by Search. The return value is the index to insert x if x is not
// present (it could be len(a)).
// The slice must be sorted in ascending order.
//
}
// Abstract exhaustive test: all sizes up to 100,
-// all possible return values. If there are any small
+// all possible return values. If there are any small
// corner cases, this test exercises them.
func TestSearchExhaustive(t *testing.T) {
for size := 0; size <= 100; size++ {
package sort
// A type, typically a collection, that satisfies sort.Interface can be
-// sorted by the routines in this package. The methods require that the
+// sorted by the routines in this package. The methods require that the
// elements of the collection be enumerated by an integer index.
type Interface interface {
// Len is the number of elements in the collection.
pivot := lo
a, c := lo+1, hi-1
- for ; a != c && data.Less(a, pivot); a++ {
+ for ; a < c && data.Less(a, pivot); a++ {
}
b := a
for {
- for ; b != c && !data.Less(pivot, b); b++ { // data[b] <= pivot
+ for ; b < c && !data.Less(pivot, b); b++ { // data[b] <= pivot
}
- for ; b != c && data.Less(pivot, c-1); c-- { // data[c-1] > pivot
+ for ; b < c && data.Less(pivot, c-1); c-- { // data[c-1] > pivot
}
- if b == c {
+ if b >= c {
break
}
// data[b] > pivot; data[c-1] <= pivot
// data[a <= i < b] unexamined
// data[b <= i < c] = pivot
for {
- for ; a != b && !data.Less(b-1, pivot); b-- { // data[b] == pivot
+ for ; a < b && !data.Less(b-1, pivot); b-- { // data[b] == pivot
}
- for ; a != b && data.Less(a, pivot); a++ { // data[a] < pivot
+ for ; a < b && data.Less(a, pivot); a++ { // data[a] < pivot
}
- if a == b {
+ if a >= b {
break
}
// data[a] == pivot; data[b-1] < pivot
// Notes on stable sorting:
// The used algorithms are simple and provable correct on all input and use
-// only logarithmic additional stack space. They perform well if compared
+// only logarithmic additional stack space. They perform well if compared
// experimentally to other stable in-place sorting algorithms.
//
// Remarks on other algorithms evaluated:
// unstable or rely on enough different elements in each step to encode the
// performed block rearrangements. See also "In-Place Merging Algorithms",
// Denham Coates-Evely, Department of Computer Science, Kings College,
-// January 2004 and the reverences in there.
+// January 2004 and the references in there.
// - Often "optimal" algorithms are optimal in the number of assignments
// but Interface has only Swap as operation.
}
}
+type nonDeterministicTestingData struct {
+ r *rand.Rand
+}
+
+func (t *nonDeterministicTestingData) Len() int {
+ return 500
+}
+func (t *nonDeterministicTestingData) Less(i, j int) bool {
+ if i < 0 || j < 0 || i >= t.Len() || j >= t.Len() {
+ panic("nondeterministic comparison out of bounds")
+ }
+ return t.r.Float32() < 0.5
+}
+func (t *nonDeterministicTestingData) Swap(i, j int) {
+ if i < 0 || j < 0 || i >= t.Len() || j >= t.Len() {
+ panic("nondeterministic comparison out of bounds")
+ }
+}
+
+func TestNonDeterministicComparison(t *testing.T) {
+ // Ensure that sort.Sort does not panic when Less returns inconsistent results.
+ // See https://golang.org/issue/14377.
+ defer func() {
+ if r := recover(); r != nil {
+ t.Error(r)
+ }
+ }()
+
+ td := &nonDeterministicTestingData{
+ r: rand.New(rand.NewSource(0)),
+ }
+
+ for i := 0; i < 10; i++ {
+ Sort(td)
+ }
+}
+
func BenchmarkSortString1K(b *testing.B) {
b.StopTimer()
for i := 0; i < b.N; i++ {
data.initB()
Stable(data)
if !IsSorted(data) {
- t.Errorf("Stable shuffeled sorted %d ints (order)", n)
+ t.Errorf("Stable shuffled sorted %d ints (order)", n)
}
if !data.inOrder() {
- t.Errorf("Stable shuffeled sorted %d ints (stability)", n)
+ t.Errorf("Stable shuffled sorted %d ints (stability)", n)
}
// sorted reversed
// ParseBool returns the boolean value represented by the string.
// It accepts 1, t, T, TRUE, true, True, 0, f, F, FALSE, false, False.
// Any other value returns an error.
-func ParseBool(str string) (value bool, err error) {
+func ParseBool(str string) (bool, error) {
switch str {
case "1", "t", "T", "true", "TRUE", "True":
return true, nil
// If s is syntactically well-formed but is more than 1/2 ULP
// away from the largest floating point number of the given size,
// ParseFloat returns f = ±Inf, err.Err = ErrRange.
-func ParseFloat(s string, bitSize int) (f float64, err error) {
+func ParseFloat(s string, bitSize int) (float64, error) {
if bitSize == 32 {
- f1, err1 := atof32(s)
- return float64(f1), err1
+ f, err := atof32(s)
+ return float64(f), err
}
- f1, err1 := atof64(s)
- return f1, err1
+ return atof64(s)
}
func init() {
// The atof routines return NumErrors wrapping
- // the error and the string. Convert the table above.
+ // the error and the string. Convert the table above.
for i := range atoftests {
test := &atoftests[i]
if test.err != nil {
const maxUint64 = (1<<64 - 1)
// ParseUint is like ParseInt but for unsigned numbers.
-func ParseUint(s string, base int, bitSize int) (n uint64, err error) {
+func ParseUint(s string, base int, bitSize int) (uint64, error) {
+ var n uint64
+ var err error
var cutoff, maxVal uint64
if bitSize == 0 {
}
// ParseInt interprets a string s in the given base (2 to 36) and
-// returns the corresponding value i. If base == 0, the base is
+// returns the corresponding value i. If base == 0, the base is
// implied by the string's prefix: base 16 for "0x", base 8 for
// "0", and base 10 otherwise.
//
// The bitSize argument specifies the integer type
-// that the result must fit into. Bit sizes 0, 8, 16, 32, and 64
+// that the result must fit into. Bit sizes 0, 8, 16, 32, and 64
// correspond to int, int8, int16, int32, and int64.
//
// The errors that ParseInt returns have concrete type *NumError
-// and include err.Num = s. If s is empty or contains invalid
+// and include err.Num = s. If s is empty or contains invalid
// digits, err.Err = ErrSyntax and the returned value is 0;
// if the value corresponding to s cannot be represented by a
// signed integer of the given size, err.Err = ErrRange and the
}
// Atoi is shorthand for ParseInt(s, 10, 0).
-func Atoi(s string) (i int, err error) {
+func Atoi(s string) (int, error) {
i64, err := ParseInt(s, 10, 0)
return int(i64), err
}
func init() {
// The atoi routines return NumErrors wrapping
- // the error and the string. Convert the tables above.
+ // the error and the string. Convert the tables above.
for i := range atoui64tests {
test := &atoui64tests[i]
if test.err != nil {
}
v := float64(n)
// We expect that v*pow2(e) fits in a float64,
- // but pow2(e) by itself may not. Be careful.
+ // but pow2(e) by itself may not. Be careful.
if e <= -1000 {
v *= pow2(-1000)
e += 1000
var float64info = floatInfo{52, 11, -1023}
// FormatFloat converts the floating-point number f to a string,
-// according to the format fmt and precision prec. It rounds the
+// according to the format fmt and precision prec. It rounds the
// result assuming that the original was obtained from a floating-point
// value of bitSize bits (32 for float32, 64 for float64).
//
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package strconv
-import (
- "unicode/utf8"
-)
+import "unicode/utf8"
const lowerhex = "0123456789abcdef"
func quoteWith(s string, quote byte, ASCIIonly, graphicOnly bool) string {
- var runeTmp [utf8.UTFMax]byte
- buf := make([]byte, 0, 3*len(s)/2) // Try to avoid more allocations.
+ return string(appendQuotedWith(make([]byte, 0, 3*len(s)/2), s, quote, ASCIIonly, graphicOnly))
+}
+
+func quoteRuneWith(r rune, quote byte, ASCIIonly, graphicOnly bool) string {
+ return string(appendQuotedRuneWith(nil, r, quote, ASCIIonly, graphicOnly))
+}
+
+func appendQuotedWith(buf []byte, s string, quote byte, ASCIIonly, graphicOnly bool) []byte {
buf = append(buf, quote)
for width := 0; len(s) > 0; s = s[width:] {
r := rune(s[0])
buf = append(buf, lowerhex[s[0]&0xF])
continue
}
- if r == rune(quote) || r == '\\' { // always backslashed
- buf = append(buf, '\\')
+ buf = appendEscapedRune(buf, r, width, quote, ASCIIonly, graphicOnly)
+ }
+ buf = append(buf, quote)
+ return buf
+}
+
+func appendQuotedRuneWith(buf []byte, r rune, quote byte, ASCIIonly, graphicOnly bool) []byte {
+ buf = append(buf, quote)
+ if !utf8.ValidRune(r) {
+ r = utf8.RuneError
+ }
+ buf = appendEscapedRune(buf, r, utf8.RuneLen(r), quote, ASCIIonly, graphicOnly)
+ buf = append(buf, quote)
+ return buf
+}
+
+func appendEscapedRune(buf []byte, r rune, width int, quote byte, ASCIIonly, graphicOnly bool) []byte {
+ var runeTmp [utf8.UTFMax]byte
+ if r == rune(quote) || r == '\\' { // always backslashed
+ buf = append(buf, '\\')
+ buf = append(buf, byte(r))
+ return buf
+ }
+ if ASCIIonly {
+ if r < utf8.RuneSelf && IsPrint(r) {
buf = append(buf, byte(r))
- continue
+ return buf
}
- if ASCIIonly {
- if r < utf8.RuneSelf && IsPrint(r) {
- buf = append(buf, byte(r))
- continue
+ } else if IsPrint(r) || graphicOnly && isInGraphicList(r) {
+ n := utf8.EncodeRune(runeTmp[:], r)
+ buf = append(buf, runeTmp[:n]...)
+ return buf
+ }
+ switch r {
+ case '\a':
+ buf = append(buf, `\a`...)
+ case '\b':
+ buf = append(buf, `\b`...)
+ case '\f':
+ buf = append(buf, `\f`...)
+ case '\n':
+ buf = append(buf, `\n`...)
+ case '\r':
+ buf = append(buf, `\r`...)
+ case '\t':
+ buf = append(buf, `\t`...)
+ case '\v':
+ buf = append(buf, `\v`...)
+ default:
+ switch {
+ case r < ' ':
+ buf = append(buf, `\x`...)
+ buf = append(buf, lowerhex[byte(r)>>4])
+ buf = append(buf, lowerhex[byte(r)&0xF])
+ case r > utf8.MaxRune:
+ r = 0xFFFD
+ fallthrough
+ case r < 0x10000:
+ buf = append(buf, `\u`...)
+ for s := 12; s >= 0; s -= 4 {
+ buf = append(buf, lowerhex[r>>uint(s)&0xF])
}
- } else if IsPrint(r) || graphicOnly && isInGraphicList(r) {
- n := utf8.EncodeRune(runeTmp[:], r)
- buf = append(buf, runeTmp[:n]...)
- continue
- }
- switch r {
- case '\a':
- buf = append(buf, `\a`...)
- case '\b':
- buf = append(buf, `\b`...)
- case '\f':
- buf = append(buf, `\f`...)
- case '\n':
- buf = append(buf, `\n`...)
- case '\r':
- buf = append(buf, `\r`...)
- case '\t':
- buf = append(buf, `\t`...)
- case '\v':
- buf = append(buf, `\v`...)
default:
- switch {
- case r < ' ':
- buf = append(buf, `\x`...)
- buf = append(buf, lowerhex[s[0]>>4])
- buf = append(buf, lowerhex[s[0]&0xF])
- case r > utf8.MaxRune:
- r = 0xFFFD
- fallthrough
- case r < 0x10000:
- buf = append(buf, `\u`...)
- for s := 12; s >= 0; s -= 4 {
- buf = append(buf, lowerhex[r>>uint(s)&0xF])
- }
- default:
- buf = append(buf, `\U`...)
- for s := 28; s >= 0; s -= 4 {
- buf = append(buf, lowerhex[r>>uint(s)&0xF])
- }
+ buf = append(buf, `\U`...)
+ for s := 28; s >= 0; s -= 4 {
+ buf = append(buf, lowerhex[r>>uint(s)&0xF])
}
}
}
- buf = append(buf, quote)
- return string(buf)
-
+ return buf
}
-// Quote returns a double-quoted Go string literal representing s. The
+// Quote returns a double-quoted Go string literal representing s. The
// returned string uses Go escape sequences (\t, \n, \xFF, \u0100) for
// control characters and non-printable characters as defined by
// IsPrint.
// AppendQuote appends a double-quoted Go string literal representing s,
// as generated by Quote, to dst and returns the extended buffer.
func AppendQuote(dst []byte, s string) []byte {
- return append(dst, Quote(s)...)
+ return appendQuotedWith(dst, s, '"', false, false)
}
// QuoteToASCII returns a double-quoted Go string literal representing s.
// AppendQuoteToASCII appends a double-quoted Go string literal representing s,
// as generated by QuoteToASCII, to dst and returns the extended buffer.
func AppendQuoteToASCII(dst []byte, s string) []byte {
- return append(dst, QuoteToASCII(s)...)
+ return appendQuotedWith(dst, s, '"', true, false)
}
// QuoteToGraphic returns a double-quoted Go string literal representing s.
// AppendQuoteToGraphic appends a double-quoted Go string literal representing s,
// as generated by QuoteToGraphic, to dst and returns the extended buffer.
func AppendQuoteToGraphic(dst []byte, s string) []byte {
- return append(dst, QuoteToGraphic(s)...)
+ return appendQuotedWith(dst, s, '"', false, true)
}
// QuoteRune returns a single-quoted Go character literal representing the
// rune. The returned string uses Go escape sequences (\t, \n, \xFF, \u0100)
// for control characters and non-printable characters as defined by IsPrint.
func QuoteRune(r rune) string {
- // TODO: avoid the allocation here.
- return quoteWith(string(r), '\'', false, false)
+ return quoteRuneWith(r, '\'', false, false)
}
// AppendQuoteRune appends a single-quoted Go character literal representing the rune,
// as generated by QuoteRune, to dst and returns the extended buffer.
func AppendQuoteRune(dst []byte, r rune) []byte {
- return append(dst, QuoteRune(r)...)
+ return appendQuotedRuneWith(dst, r, '\'', false, false)
}
// QuoteRuneToASCII returns a single-quoted Go character literal representing
// \u0100) for non-ASCII characters and non-printable characters as defined
// by IsPrint.
func QuoteRuneToASCII(r rune) string {
- // TODO: avoid the allocation here.
- return quoteWith(string(r), '\'', true, false)
+ return quoteRuneWith(r, '\'', true, false)
}
// AppendQuoteRuneToASCII appends a single-quoted Go character literal representing the rune,
// as generated by QuoteRuneToASCII, to dst and returns the extended buffer.
func AppendQuoteRuneToASCII(dst []byte, r rune) []byte {
- return append(dst, QuoteRuneToASCII(r)...)
+ return appendQuotedRuneWith(dst, r, '\'', true, false)
}
// QuoteRuneToGraphic returns a single-quoted Go character literal representing
// \u0100) for non-ASCII characters and non-printable characters as defined
// by IsGraphic.
func QuoteRuneToGraphic(r rune) string {
- // TODO: avoid the allocation here.
- return quoteWith(string(r), '\'', false, true)
+ return quoteRuneWith(r, '\'', false, true)
}
// AppendQuoteRuneToGraphic appends a single-quoted Go character literal representing the rune,
// as generated by QuoteRuneToGraphic, to dst and returns the extended buffer.
func AppendQuoteRuneToGraphic(dst []byte, r rune) []byte {
- return append(dst, QuoteRuneToGraphic(r)...)
+ return appendQuotedRuneWith(dst, r, '\'', false, true)
}
// CanBackquote reports whether the string s can be represented
// that s quotes. (If s is single-quoted, it would be a Go
// character literal; Unquote returns the corresponding
// one-character string.)
-func Unquote(s string) (t string, err error) {
+func Unquote(s string) (string, error) {
n := len(s)
if n < 2 {
return "", ErrSyntax
}
}
+func BenchmarkQuote(b *testing.B) {
+ for i := 0; i < b.N; i++ {
+ Quote("\a\b\f\r\n\t\v\a\b\f\r\n\t\v\a\b\f\r\n\t\v")
+ }
+}
+
+func BenchmarkQuoteRune(b *testing.B) {
+ for i := 0; i < b.N; i++ {
+ QuoteRune('\a')
+ }
+}
+
+var benchQuoteBuf []byte
+
+func BenchmarkAppendQuote(b *testing.B) {
+ for i := 0; i < b.N; i++ {
+ benchQuoteBuf = AppendQuote(benchQuoteBuf[:0], "\a\b\f\r\n\t\v\a\b\f\r\n\t\v\a\b\f\r\n\t\v")
+ }
+}
+
+var benchQuoteRuneBuf []byte
+
+func BenchmarkAppendQuoteRune(b *testing.B) {
+ for i := 0; i < b.N; i++ {
+ benchQuoteRuneBuf = AppendQuoteRune(benchQuoteRuneBuf[:0], '\a')
+ }
+}
+
type quoteRuneTest struct {
in rune
out string
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
a := make([]byte, n+1)
b := make([]byte, n+1)
for len := 0; len < 128; len++ {
- // randomish but deterministic data. No 0 or 255.
+ // randomish but deterministic data. No 0 or 255.
for i := 0; i < len; i++ {
a[i] = byte(1 + 31*i%254)
b[i] = byte(1 + 31*i%254)
return
}
-func (r *Reader) ReadByte() (b byte, err error) {
+func (r *Reader) ReadByte() (byte, error) {
r.prevRune = -1
if r.i >= int64(len(r.s)) {
return 0, io.EOF
}
- b = r.s[r.i]
+ b := r.s[r.i]
r.i++
- return
+ return b, nil
}
func (r *Reader) UnreadByte() error {
return a
}
-// Join concatenates the elements of a to create a single string. The separator string
+// Join concatenates the elements of a to create a single string. The separator string
// sep is placed between elements in the resulting string.
func Join(a []string, sep string) string {
if len(a) == 0 {
// dropped from the string with no replacement.
func Map(mapping func(rune) rune, s string) string {
// In the worst case, the string can grow when mapped, making
- // things unpleasant. But it's so rare we barge in assuming it's
- // fine. It could also shrink but that falls out naturally.
+ // things unpleasant. But it's so rare we barge in assuming it's
+ // fine. It could also shrink but that falls out naturally.
maxbytes := len(s) // length of b
nbytes := 0 // number of bytes encoded in b
// The output buffer b is initialized on demand, the first
return false
}
- // General case. SimpleFold(x) returns the next equivalent rune > x
+ // General case. SimpleFold(x) returns the next equivalent rune > x
// or wraps around to smaller values.
r := unicode.SimpleFold(sr)
for r != sr && r < tr {
return false
}
- // One string is empty. Are both?
+ // One string is empty. Are both?
return s == t
}
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
func TestMap(t *testing.T) {
// Run a couple of awful growth/shrinkage tests
a := tenRunes('a')
- // 1. Grow. This triggers two reallocations in Map.
+ // 1. Grow. This triggers two reallocations in Map.
maxRune := func(rune) rune { return unicode.MaxRune }
m := Map(maxRune, a)
expect := tenRunes(unicode.MaxRune)
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
ok:
RET
-// Fast, cached version of check. No frame, just MOVW CMP RET after first time.
+// Fast, cached version of check. No frame, just MOVW CMP RET after first time.
TEXT fastCheck64<>(SB),NOSPLIT,$-4
MOVW ok64<>(SB), R0
CMP $0, R0 // have we been here before?
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
--- /dev/null
+// Copyright 2012 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+#include "textflag.h"
+
+#define DMB_ISH_7 \
+ MOVB runtime·goarm(SB), R11; \
+ CMP $7, R11; \
+ BLT 2(PC); \
+ WORD $0xf57ff05b // dmb ish
+
+// Plan9/ARM atomic operations.
+// TODO(minux): this only supports ARMv6K or higher.
+
+TEXT ·CompareAndSwapInt32(SB),NOSPLIT,$0
+ B ·CompareAndSwapUint32(SB)
+
+TEXT ·CompareAndSwapUint32(SB),NOSPLIT,$0
+ B ·armCompareAndSwapUint32(SB)
+
+TEXT ·CompareAndSwapUintptr(SB),NOSPLIT,$0
+ B ·CompareAndSwapUint32(SB)
+
+TEXT ·AddInt32(SB),NOSPLIT,$0
+ B ·AddUint32(SB)
+
+TEXT ·AddUint32(SB),NOSPLIT,$0
+ B ·armAddUint32(SB)
+
+TEXT ·AddUintptr(SB),NOSPLIT,$0
+ B ·AddUint32(SB)
+
+TEXT ·SwapInt32(SB),NOSPLIT,$0
+ B ·SwapUint32(SB)
+
+TEXT ·SwapUint32(SB),NOSPLIT,$0
+ B ·armSwapUint32(SB)
+
+TEXT ·SwapUintptr(SB),NOSPLIT,$0
+ B ·SwapUint32(SB)
+
+TEXT ·CompareAndSwapInt64(SB),NOSPLIT,$0
+ B ·CompareAndSwapUint64(SB)
+
+TEXT ·CompareAndSwapUint64(SB),NOSPLIT,$-4
+ B ·armCompareAndSwapUint64(SB)
+
+TEXT ·AddInt64(SB),NOSPLIT,$0
+ B ·addUint64(SB)
+
+TEXT ·AddUint64(SB),NOSPLIT,$0
+ B ·addUint64(SB)
+
+TEXT ·SwapInt64(SB),NOSPLIT,$0
+ B ·swapUint64(SB)
+
+TEXT ·SwapUint64(SB),NOSPLIT,$0
+ B ·swapUint64(SB)
+
+TEXT ·LoadInt32(SB),NOSPLIT,$0
+ B ·LoadUint32(SB)
+
+TEXT ·LoadUint32(SB),NOSPLIT,$0-8
+ MOVW addr+0(FP), R1
+load32loop:
+ LDREX (R1), R2 // loads R2
+ STREX R2, (R1), R0 // stores R2
+ CMP $0, R0
+ BNE load32loop
+ MOVW R2, val+4(FP)
+ DMB_ISH_7
+ RET
+
+TEXT ·LoadInt64(SB),NOSPLIT,$0
+ B ·loadUint64(SB)
+
+TEXT ·LoadUint64(SB),NOSPLIT,$0
+ B ·loadUint64(SB)
+
+TEXT ·LoadUintptr(SB),NOSPLIT,$0
+ B ·LoadUint32(SB)
+
+TEXT ·LoadPointer(SB),NOSPLIT,$0
+ B ·LoadUint32(SB)
+
+TEXT ·StoreInt32(SB),NOSPLIT,$0
+ B ·StoreUint32(SB)
+
+TEXT ·StoreUint32(SB),NOSPLIT,$0-8
+ MOVW addr+0(FP), R1
+ MOVW val+4(FP), R2
+ DMB_ISH_7
+storeloop:
+ LDREX (R1), R4 // loads R4
+ STREX R2, (R1), R0 // stores R2
+ CMP $0, R0
+ BNE storeloop
+ RET
+
+TEXT ·StoreInt64(SB),NOSPLIT,$0
+ B ·storeUint64(SB)
+
+TEXT ·StoreUint64(SB),NOSPLIT,$0
+ B ·storeUint64(SB)
+
+TEXT ·StoreUintptr(SB),NOSPLIT,$0
+ B ·StoreUint32(SB)
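The stubs above funnel every entry point into a few LDREX/STREX kernels. From Go code, the same operations are reached through `sync/atomic`. A minimal sketch of the retry loop those kernels implement in hardware (the `casAdd` helper name is illustrative, not part of the package):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// casAdd adds delta to *addr using an explicit compare-and-swap loop,
// the same retry pattern the LDREX/STREX sequences above encode.
func casAdd(addr *uint32, delta uint32) uint32 {
	for {
		old := atomic.LoadUint32(addr)
		if atomic.CompareAndSwapUint32(addr, old, old+delta) {
			return old + delta
		}
	}
}

func main() {
	var v uint32
	fmt.Println(casAdd(&v, 5)) // 5
	fmt.Println(casAdd(&v, 7)) // 12
}
```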
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// (Is the function atomic?)
//
// For each function, we write a "hammer" function that repeatedly
-// uses the atomic operation to add 1 to a value. After running
+// uses the atomic operation to add 1 to a value. After running
// multiple hammers in parallel, check that we end with the correct
// total.
// Swap can't add 1, so it uses a different scheme.
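The hammer scheme the comment describes can be sketched in ordinary Go. The goroutine and iteration counts here are illustrative, not the test's actual values:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// hammer runs p goroutines that each atomically add 1 to a shared
// counter n times. Because the increment is atomic, the final total
// is exactly p*n; a plain total++ would lose updates.
func hammer(p, n int) uint32 {
	var total uint32
	var wg sync.WaitGroup
	for i := 0; i < p; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < n; j++ {
				atomic.AddUint32(&total, 1)
			}
		}()
	}
	wg.Wait()
	return total
}

func main() {
	fmt.Println(hammer(4, 1000)) // 4000
}
```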
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
{complex(0, 0), complex(1, 2), complex(3, 4), complex(5, 6)},
}
p := 4 * runtime.GOMAXPROCS(0)
+ N := int(1e5)
+ if testing.Short() {
+ p /= 2
+ N = 1e3
+ }
for _, test := range tests {
var v Value
done := make(chan bool)
go func() {
r := rand.New(rand.NewSource(rand.Int63()))
loop:
- for j := 0; j < 1e5; j++ {
+ for j := 0; j < N; j++ {
x := test[r.Intn(len(test))]
v.Store(x)
x = v.Load()
}
// Wait atomically unlocks c.L and suspends execution
-// of the calling goroutine. After later resuming execution,
-// Wait locks c.L before returning. Unlike in other systems,
+// of the calling goroutine. After later resuming execution,
+// Wait locks c.L before returning. Unlike in other systems,
// Wait cannot return unless awoken by Broadcast or Signal.
//
// Because c.L is not locked when Wait first resumes, the caller
// typically cannot assume that the condition is true when
-// Wait returns. Instead, the caller should Wait in a loop:
+// Wait returns. Instead, the caller should Wait in a loop:
//
// c.L.Lock()
// for !condition() {
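Written out in full, the wait-in-a-loop pattern the comment prescribes looks like this; the `ready` flag and `waitReady` helper are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// waitReady blocks until another goroutine sets the condition.
// The loop is essential: Wait may resume with the condition still
// false, so the caller must re-check under the lock.
func waitReady() bool {
	mu := &sync.Mutex{}
	c := sync.NewCond(mu)
	ready := false

	go func() {
		mu.Lock()
		ready = true
		mu.Unlock()
		c.Signal()
	}()

	c.L.Lock()
	for !ready {
		c.Wait() // unlocks c.L while suspended, relocks before returning
	}
	c.L.Unlock()
	return ready
}

func main() {
	fmt.Println(waitReady()) // true
}
```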
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// license that can be found in the LICENSE file.
// Package sync provides basic synchronization primitives such as mutual
-// exclusion locks. Other than the Once and WaitGroup types, most are intended
-// for use by low-level library routines. Higher-level synchronization is
+// exclusion locks. Other than the Once and WaitGroup types, most are intended
+// for use by low-level library routines. Higher-level synchronization is
// better done via channels and communication.
//
// Values containing the types defined in this package should not be copied.
// first time for this instance of Once. In other words, given
// var once Once
// if once.Do(f) is called multiple times, only the first call will invoke f,
-// even if f has a different value in each invocation. A new instance of
+// even if f has a different value in each invocation. A new instance of
// Once is required for each function to execute.
//
-// Do is intended for initialization that must be run exactly once. Since f
+// Do is intended for initialization that must be run exactly once. Since f
// is niladic, it may be necessary to use a function literal to capture the
// arguments to a function to be invoked by Do:
// config.once.Do(func() { config.init(filename) })
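The function-literal idiom from the comment in a runnable form; `initConfig` and the filename stand in for real initialization:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	once   sync.Once
	loaded string
)

// initConfig stands in for a once-only initialization step; its
// argument is captured by the literal passed to Do.
func initConfig(filename string) { loaded = filename }

func main() {
	for i := 0; i < 3; i++ {
		// Only the first call invokes the literal; later calls are no-ops.
		once.Do(func() { initConfig("settings.conf") })
	}
	fmt.Println(loaded) // settings.conf
}
```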
func (p *Pool) pin() *poolLocal {
pid := runtime_procPin()
// In pinSlow we store to localSize and then to local, here we load in opposite order.
- // Since we've disabled preemption, GC can not happen in between.
+ // Since we've disabled preemption, GC cannot happen in between.
 // Thus here we must observe local at least as large as localSize.
// We can observe a newer/larger local, it is fine (we must observe its zero-initialized-ness).
s := atomic.LoadUintptr(&p.localSize) // load-acquire
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
}
-// Unlock unlocks rw for writing. It is a run-time error if rw is
+// Unlock unlocks rw for writing. It is a run-time error if rw is
// not locked for writing on entry to Unlock.
//
// As with Mutexes, a locked RWMutex is not associated with a particular
-// goroutine. One goroutine may RLock (Lock) an RWMutex and then
+// goroutine. One goroutine may RLock (Lock) an RWMutex and then
// arrange for another goroutine to RUnlock (Unlock) it.
func (rw *RWMutex) Unlock() {
if race.Enabled {
// A WaitGroup waits for a collection of goroutines to finish.
// The main goroutine calls Add to set the number of
-// goroutines to wait for. Then each of the goroutines
-// runs and calls Done when finished. At the same time,
+// goroutines to wait for. Then each of the goroutines
+// runs and calls Done when finished. At the same time,
// Wait can be used to block until all goroutines have finished.
type WaitGroup struct {
// 64-bit value: high 32 bits are counter, low 32 bits are waiter count.
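The Add/Done/Wait protocol described above, sketched with an illustrative worker that just bumps a counter:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runAll starts n goroutines and blocks until all have finished.
func runAll(n int) int32 {
	var wg sync.WaitGroup
	var finished int32
	wg.Add(n) // main sets the count before starting the goroutines
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done() // each goroutine signals completion
			atomic.AddInt32(&finished, 1)
		}()
	}
	wg.Wait() // returns once the counter drops to zero
	return finished
}

func main() {
	fmt.Println(runAll(5)) // 5
}
```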
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// func Syscall(trap uintptr, a1, a2, a3 uintptr) (r1, r2, err uintptr);
// Trap # in AX, args in BX CX DX SI DI, return in AX
-// Most linux systems use glibc's dynamic linker, which puts the
-// __kernel_vsyscall vdso helper at 0x10(GS) for easy access from position
-// independent code and setldt in runtime does the same in the statically
-// linked case. Android, however, uses bionic's dynamic linker, which does not
-// save the helper anywhere, and so the only way to invoke a syscall from
-// position independent code is boring old int $0x80 (which is also what
-// bionic's syscall wrappers use).
-#ifdef GOOS_android
+// See ../runtime/sys_linux_386.s for the reason why we always use int 0x80
+// instead of the glibc-specific "CALL 0x10(GS)".
#define INVOKE_SYSCALL INT $0x80
-#else
-#define INVOKE_SYSCALL CALL 0x10(GS)
-#endif
TEXT ·Syscall(SB),NOSPLIT,$0-28
CALL runtime·entersyscall(SB)
--- /dev/null
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+#include "textflag.h"
+
+#define SYS_SEEK 39 /* from zsysnum_plan9.go */
+
+// System call support for plan9 on arm
+
+TEXT sysresult<>(SB),NOSPLIT,$12
+ MOVW $runtime·emptystring+0(SB), R2
+ CMP $-1, R0
+ B.NE ok
+ MOVW R1, save-4(SP)
+ BL runtime·errstr(SB)
+ MOVW save-4(SP), R1
+ MOVW $err-12(SP), R2
+ok:
+ MOVM.IA (R2), [R3-R4]
+ MOVM.IA [R3-R4], (R1)
+ RET
+
+//func Syscall(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err ErrorString)
+TEXT ·Syscall(SB),NOSPLIT,$0
+ BL runtime·entersyscall(SB)
+ MOVW trap+0(FP), R0 // syscall num
+ MOVM.IA.W (R13),[R1-R2] // pop LR and caller's LR
+ SWI 0
+ MOVM.DB.W [R1-R2],(R13) // push LR and caller's LR
+ MOVW $0, R2
+ MOVW $r1+16(FP), R1
+ MOVM.IA.W [R0,R2], (R1)
+ BL sysresult<>(SB)
+ BL runtime·exitsyscall(SB)
+ RET
+
+//func Syscall6(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err ErrorString)
+// Actually Syscall5 but the rest of the code expects it to be named Syscall6.
+TEXT ·Syscall6(SB),NOSPLIT,$0
+ BL runtime·entersyscall(SB)
+ MOVW trap+0(FP), R0 // syscall num
+ MOVM.IA.W (R13),[R1-R2] // pop LR and caller's LR
+ SWI 0
+ MOVM.DB.W [R1-R2],(R13) // push LR and caller's LR
+ MOVW $0, R2
+ MOVW $r1+28(FP), R1
+ MOVM.IA.W [R0,R2], (R1)
+ BL sysresult<>(SB)
+ BL runtime·exitsyscall(SB)
+ RET
+
+//func RawSyscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr)
+TEXT ·RawSyscall(SB),NOSPLIT,$0
+ MOVW trap+0(FP), R0 // syscall num
+ MOVM.IA.W (R13),[R1] // pop caller's LR
+ SWI 0
+ MOVM.DB.W [R1],(R13) // push caller's LR
+ MOVW R0, r1+16(FP)
+ MOVW R0, r2+20(FP)
+ MOVW R0, err+24(FP)
+ RET
+
+//func RawSyscall6(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2, err uintptr)
+// Actually RawSyscall5 but the rest of the code expects it to be named RawSyscall6.
+TEXT ·RawSyscall6(SB),NOSPLIT,$0
+ MOVW trap+0(FP), R0 // syscall num
+ MOVM.IA.W (R13),[R1] // pop caller's LR
+ SWI 0
+ MOVM.DB.W [R1],(R13) // push caller's LR
+ MOVW R0, r1+28(FP)
+ MOVW R0, r2+32(FP)
+ MOVW R0, err+36(FP)
+ RET
+
+//func seek(placeholder uintptr, fd int, offset int64, whence int) (newoffset int64, err string)
+TEXT ·seek(SB),NOSPLIT,$0
+ MOVW $newoffset_lo+20(FP), R5
+ MOVW R5, placeholder+0(FP) //placeholder = dest for return value
+ MOVW $SYS_SEEK, R0 // syscall num
+ MOVM.IA.W (R13),[R1] // pop LR
+ SWI 0
+ MOVM.DB.W [R1],(R13) // push LR
+ CMP $-1, R0
+ MOVW.EQ R0, 0(R5)
+ MOVW.EQ R0, 4(R5)
+ MOVW $err+28(FP), R1
+ BL sysresult<>(SB)
+ RET
+
+//func exit(code int)
+// Import runtime·exit for cleanly exiting.
+TEXT ·exit(SB),NOSPLIT,$4
+ MOVW code+0(FP), R0
+ MOVW R0, e-4(SP)
+ BL runtime·exit(SB)
+ RET
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// If a dup or exec fails, write the errno error to pipe.
// (Pipe is close-on-exec so if exec succeeds, it will be closed.)
// In the child, this function must not acquire any locks, because
-// they might have been locked at the time of the fork. This means
+// they might have been locked at the time of the fork. This means
// no rescheduling, no malloc calls, and no new stack segments.
 // For the same reason the compiler does not race-instrument it.
// The calls to RawSyscall are okay because they are assembly
// If a dup or exec fails, write the errno error to pipe.
// (Pipe is close-on-exec so if exec succeeds, it will be closed.)
// In the child, this function must not acquire any locks, because
-// they might have been locked at the time of the fork. This means
+// they might have been locked at the time of the fork. This means
// no rescheduling, no malloc calls, and no new stack segments.
 // For the same reason the compiler does not race-instrument it.
// The calls to RawSyscall are okay because they are assembly
// Lock synchronizing creation of new file descriptors with fork.
//
// We want the child in a fork/exec sequence to inherit only the
-// file descriptors we intend. To do that, we mark all file
+// file descriptors we intend. To do that, we mark all file
// descriptors close-on-exec and then, in the child, explicitly
// unmark the ones we want the exec'ed program to keep.
// Unix doesn't make this easy: there is, in general, no way to
-// allocate a new file descriptor close-on-exec. Instead you
+// allocate a new file descriptor close-on-exec. Instead you
// have to allocate the descriptor and then mark it close-on-exec.
// If a fork happens between those two events, the child's exec
// will inherit an unwanted file descriptor.
//
// This lock solves that race: the create new fd/mark close-on-exec
// operation is done holding ForkLock for reading, and the fork itself
-// is done holding ForkLock for writing. At least, that's the idea.
+// is done holding ForkLock for writing. At least, that's the idea.
// There are some complications.
//
// Some system calls that create new file descriptors can block
// for arbitrarily long times: open on a hung NFS server or named
-// pipe, accept on a socket, and so on. We can't reasonably grab
+// pipe, accept on a socket, and so on. We can't reasonably grab
// the lock across those operations.
//
// It is worse to inherit some file descriptors than others.
// If a non-malicious child accidentally inherits an open ordinary file,
-// that's not a big deal. On the other hand, if a long-lived child
+// that's not a big deal. On the other hand, if a long-lived child
// accidentally inherits the write end of a pipe, then the reader
// of that pipe will not see EOF until that child exits, potentially
-// causing the parent program to hang. This is a common problem
+// causing the parent program to hang. This is a common problem
// in threaded C programs that use popen.
//
// Luckily, the file descriptors that are most important not to
// The rules for which file descriptor-creating operations use the
// ForkLock are as follows:
//
-// 1) Pipe. Does not block. Use the ForkLock.
-// 2) Socket. Does not block. Use the ForkLock.
-// 3) Accept. If using non-blocking mode, use the ForkLock.
+// 1) Pipe. Does not block. Use the ForkLock.
+// 2) Socket. Does not block. Use the ForkLock.
+// 3) Accept. If using non-blocking mode, use the ForkLock.
// Otherwise, live with the race.
-// 4) Open. Can block. Use O_CLOEXEC if available (Linux).
+// 4) Open. Can block. Use O_CLOEXEC if available (Linux).
// Otherwise, live with the race.
-// 5) Dup. Does not block. Use the ForkLock.
+// 5) Dup. Does not block. Use the ForkLock.
// On Linux, could use fcntl F_DUPFD_CLOEXEC
// instead of the ForkLock, but only for dup(fd, -1).
// (The pipe write end is close-on-exec so if exec succeeds, it will be closed.)
//
// In the child, this function must not acquire any locks, because
-// they might have been locked at the time of the fork. This means
+// they might have been locked at the time of the fork. This means
// no rescheduling, no malloc calls, and no new stack segments.
// The calls to RawSyscall are okay because they are assembly
// functions that do not grow the stack.
// If a dup or exec fails, write the errno error to pipe.
// (Pipe is close-on-exec so if exec succeeds, it will be closed.)
// In the child, this function must not acquire any locks, because
-// they might have been locked at the time of the fork. This means
+// they might have been locked at the time of the fork. This means
// no rescheduling, no malloc calls, and no new stack segments.
//
// We call hand-crafted syscalls, implemented in
// Lock synchronizing creation of new file descriptors with fork.
//
// We want the child in a fork/exec sequence to inherit only the
-// file descriptors we intend. To do that, we mark all file
+// file descriptors we intend. To do that, we mark all file
// descriptors close-on-exec and then, in the child, explicitly
// unmark the ones we want the exec'ed program to keep.
// Unix doesn't make this easy: there is, in general, no way to
-// allocate a new file descriptor close-on-exec. Instead you
+// allocate a new file descriptor close-on-exec. Instead you
// have to allocate the descriptor and then mark it close-on-exec.
// If a fork happens between those two events, the child's exec
// will inherit an unwanted file descriptor.
//
// This lock solves that race: the create new fd/mark close-on-exec
// operation is done holding ForkLock for reading, and the fork itself
-// is done holding ForkLock for writing. At least, that's the idea.
+// is done holding ForkLock for writing. At least, that's the idea.
// There are some complications.
//
// Some system calls that create new file descriptors can block
// for arbitrarily long times: open on a hung NFS server or named
-// pipe, accept on a socket, and so on. We can't reasonably grab
+// pipe, accept on a socket, and so on. We can't reasonably grab
// the lock across those operations.
//
// It is worse to inherit some file descriptors than others.
// If a non-malicious child accidentally inherits an open ordinary file,
-// that's not a big deal. On the other hand, if a long-lived child
+// that's not a big deal. On the other hand, if a long-lived child
// accidentally inherits the write end of a pipe, then the reader
// of that pipe will not see EOF until that child exits, potentially
-// causing the parent program to hang. This is a common problem
+// causing the parent program to hang. This is a common problem
// in threaded C programs that use popen.
//
// Luckily, the file descriptors that are most important not to
// The rules for which file descriptor-creating operations use the
// ForkLock are as follows:
//
-// 1) Pipe. Does not block. Use the ForkLock.
-// 2) Socket. Does not block. Use the ForkLock.
-// 3) Accept. If using non-blocking mode, use the ForkLock.
+// 1) Pipe. Does not block. Use the ForkLock.
+// 2) Socket. Does not block. Use the ForkLock.
+// 3) Accept. If using non-blocking mode, use the ForkLock.
// Otherwise, live with the race.
-// 4) Open. Can block. Use O_CLOEXEC if available (Linux).
+// 4) Open. Can block. Use O_CLOEXEC if available (Linux).
// Otherwise, live with the race.
-// 5) Dup. Does not block. Use the ForkLock.
+// 5) Dup. Does not block. Use the ForkLock.
// On Linux, could use fcntl F_DUPFD_CLOEXEC
// instead of the ForkLock, but only for dup(fd, -1).
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
# Line must be of the form
# func Open(path string, mode int, perm int) (fd int, errno error)
# Split into name, in params, out params.
- if(!/^\/\/sys(nb)? (\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*((?i)SYS_[A-Z0-9_]+))?$/) {
+ if(!/^\/\/sys(nb)? (\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*((?i)_?SYS_[A-Z0-9_]+))?$/) {
print STDERR "$ARGV:$.: malformed //sys declaration\n";
$errors = 1;
next;
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2015 The Go Authors. All rights reserved.
+// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
// - The kernel form appends leading bytes to the prefix field
 // to make the <length, prefix> tuple conform to
- // the routing messeage boundary
+ // the routing message boundary
l := int(rsaAlignOf(int(b[0])))
if len(b) < l {
return nil, EINVAL
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// license that can be found in the LICENSE file.
// Package syscall contains an interface to the low-level operating system
-// primitives. The details vary depending on the underlying system, and
+// primitives. The details vary depending on the underlying system, and
// by default, godoc will display the syscall documentation for the current
-// system. If you want godoc to display syscall documentation for another
-// system, set $GOOS and $GOARCH to the desired system. For example, if
+// system. If you want godoc to display syscall documentation for another
+// system, set $GOOS and $GOARCH to the desired system. For example, if
// you want to view documentation for freebsd/arm on linux/amd64, set $GOOS
// to freebsd and $GOARCH to arm.
// The primary use of syscall is inside other packages that provide a more
return nil, nil
}
- // Sanity check group count. Max is 16 on BSD.
+ // Sanity check group count. Max is 16 on BSD.
if n < 0 || n > 1000 {
return nil, EINVAL
}
// NOTE(rsc): It seems strange to set the buffer to have
// size CTL_MAXNAME+2 but use only CTL_MAXNAME
- // as the size. I don't know why the +2 is here, but the
+ // as the size. I don't know why the +2 is here, but the
// kernel uses +2 for its own implementation of this function.
// I am scared that if we don't include the +2 here, the kernel
// will silently write 2 words farther than we specify
}
// ParseDirent parses up to max directory entries in buf,
-// appending the names to names. It returns the number
+// appending the names to names. It returns the number of
// bytes consumed from buf, the number of entries added
// to names, and the new names slice.
func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) {
//sysnb gettimeofday(tp *Timeval) (sec int32, usec int32, err error)
func Gettimeofday(tv *Timeval) (err error) {
// The tv passed to gettimeofday must be non-nil
- // but is otherwise unused. The answers come back
+ // but is otherwise unused. The answers come back
// in the two registers.
sec, usec, err := gettimeofday(tv)
tv.Sec = int32(sec)
//sysnb gettimeofday(tp *Timeval) (sec int64, usec int32, err error)
func Gettimeofday(tv *Timeval) (err error) {
// The tv passed to gettimeofday must be non-nil
- // but is otherwise unused. The answers come back
+ // but is otherwise unused. The answers come back
// in the two registers.
sec, usec, err := gettimeofday(tv)
tv.Sec = sec
//sysnb gettimeofday(tp *Timeval) (sec int32, usec int32, err error)
func Gettimeofday(tv *Timeval) (err error) {
// The tv passed to gettimeofday must be non-nil
- // but is otherwise unused. The answers come back
+ // but is otherwise unused. The answers come back
// in the two registers.
sec, usec, err := gettimeofday(tv)
tv.Sec = int32(sec)
//sysnb gettimeofday(tp *Timeval) (sec int64, usec int32, err error)
func Gettimeofday(tv *Timeval) (err error) {
// The tv passed to gettimeofday must be non-nil
- // but is otherwise unused. The answers come back
+ // but is otherwise unused. The answers come back
// in the two registers.
sec, usec, err := gettimeofday(tv)
tv.Sec = sec
// NOTE(rsc): It seems strange to set the buffer to have
// size CTL_MAXNAME+2 but use only CTL_MAXNAME
- // as the size. I don't know why the +2 is here, but the
+ // as the size. I don't know why the +2 is here, but the
// kernel uses +2 for its own implementation of this function.
// I am scared that if we don't include the +2 here, the kernel
// will silently write 2 words farther than we specify
}
// ParseDirent parses up to max directory entries in buf,
-// appending the names to names. It returns the number
+// appending the names to names. It returns the number of
// bytes consumed from buf, the number of entries added
// to names, and the new names slice.
func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) {
// NOTE(rsc): It seems strange to set the buffer to have
// size CTL_MAXNAME+2 but use only CTL_MAXNAME
- // as the size. I don't know why the +2 is here, but the
+ // as the size. I don't know why the +2 is here, but the
// kernel uses +2 for its own implementation of this function.
// I am scared that if we don't include the +2 here, the kernel
// will silently write 2 words farther than we specify
}
// ParseDirent parses up to max directory entries in buf,
-// appending the names to names. It returns the number
+// appending the names to names. It returns the number of
// bytes consumed from buf, the number of entries added
// to names, and the new names slice.
func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) {
return nil, nil
}
- // Sanity check group count. Max is 1<<16 on Linux.
+ // Sanity check group count. Max is 1<<16 on Linux.
if n < 0 || n > 1<<20 {
return nil, EINVAL
}
// 0x7F (stopped), or a signal number that caused an exit.
// The 0x80 bit is whether there was a core dump.
// An extra number (exit code, signal causing a stop)
-// is in the high bits. At least that's the idea.
-// There are various irregularities. For example, the
+// is in the high bits. At least that's the idea.
+// There are various irregularities. For example, the
// "continued" status is 0xFFFF, distinguishing itself
// from stopped via the core dump bit.
var buf [sizeofPtr]byte
- // Leading edge. PEEKTEXT/PEEKDATA don't require aligned
+ // Leading edge. PEEKTEXT/PEEKDATA don't require aligned
// access (PEEKUSER warns that it might), but if we don't
// align our reads, we might straddle an unmapped page
// boundary and not get the bytes leading up to the page
// On x86 Linux, all the socket calls go through an extra indirection,
// I think because the 5-register system call interface can't handle
-// the 6-argument calls like sendto and recvfrom. Instead the
+// the 6-argument calls like sendto and recvfrom. Instead the
// arguments to the underlying system call are the number below
-// and a pointer to an array of uintptr. We hide the pointer in the
+// and a pointer to an array of uintptr. We hide the pointer in the
// socketcall assembly to avoid allocation on every system call.
const (
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
const PathMax = 256
// An Errno is an unsigned number describing an error condition.
-// It implements the error interface. The zero Errno is by convention
+// It implements the error interface. The zero Errno is by convention
// a non-error, so code to convert from Errno to error should use:
// err = nil
// if errno != 0 {
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
// ParseDirent parses up to max directory entries in buf,
-// appending the names to names. It returns the number
+// appending the names to names. It returns the number of
// bytes consumed from buf, the number of entries added
// to names, and the new names slice.
func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) {
return nil, nil
}
- // Sanity check group count. Max is 16 on BSD.
+ // Sanity check group count. Max is 16 on BSD.
if n < 0 || n > 1000 {
return nil, EINVAL
}
}
// An Errno is an unsigned number describing an error condition.
-// It implements the error interface. The zero Errno is by convention
+// It implements the error interface. The zero Errno is by convention
// a non-error, so code to convert from Errno to error should use:
// err = nil
// if errno != 0 {
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Decode a single Huffman block from f.
// hl and hd are the Huffman states for the lit/length values
-// and the distance values, respectively. If hd == nil, using the
+// and the distance values, respectively. If hd == nil, using the
// fixed distance encoding associated with fixed Huffman blocks.
func (f *decompressor) huffmanBlock(hl, hd *huffmanDecoder) {
for {
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
--- /dev/null
+// mksyscall.pl -l32 -plan9 syscall_plan9.go
+// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
+
+// +build arm,plan9
+
+package syscall
+
+import "unsafe"
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func fd2path(fd int, buf []byte) (err error) {
+ var _p0 unsafe.Pointer
+ if len(buf) > 0 {
+ _p0 = unsafe.Pointer(&buf[0])
+ } else {
+ _p0 = unsafe.Pointer(&_zero)
+ }
+ r0, _, e1 := Syscall(SYS_FD2PATH, uintptr(fd), uintptr(_p0), uintptr(len(buf)))
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func pipe(p *[2]int32) (err error) {
+ r0, _, e1 := Syscall(SYS_PIPE, uintptr(unsafe.Pointer(p)), 0, 0)
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func await(s []byte) (n int, err error) {
+ var _p0 unsafe.Pointer
+ if len(s) > 0 {
+ _p0 = unsafe.Pointer(&s[0])
+ } else {
+ _p0 = unsafe.Pointer(&_zero)
+ }
+ r0, _, e1 := Syscall(SYS_AWAIT, uintptr(_p0), uintptr(len(s)), 0)
+ n = int(r0)
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func open(path string, mode int) (fd int, err error) {
+ var _p0 *byte
+ _p0, err = BytePtrFromString(path)
+ if err != nil {
+ return
+ }
+ r0, _, e1 := Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0)
+ use(unsafe.Pointer(_p0))
+ fd = int(r0)
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func create(path string, mode int, perm uint32) (fd int, err error) {
+ var _p0 *byte
+ _p0, err = BytePtrFromString(path)
+ if err != nil {
+ return
+ }
+ r0, _, e1 := Syscall(SYS_CREATE, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm))
+ use(unsafe.Pointer(_p0))
+ fd = int(r0)
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func remove(path string) (err error) {
+ var _p0 *byte
+ _p0, err = BytePtrFromString(path)
+ if err != nil {
+ return
+ }
+ r0, _, e1 := Syscall(SYS_REMOVE, uintptr(unsafe.Pointer(_p0)), 0, 0)
+ use(unsafe.Pointer(_p0))
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func stat(path string, edir []byte) (n int, err error) {
+ var _p0 *byte
+ _p0, err = BytePtrFromString(path)
+ if err != nil {
+ return
+ }
+ var _p1 unsafe.Pointer
+ if len(edir) > 0 {
+ _p1 = unsafe.Pointer(&edir[0])
+ } else {
+ _p1 = unsafe.Pointer(&_zero)
+ }
+ r0, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(edir)))
+ use(unsafe.Pointer(_p0))
+ n = int(r0)
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func bind(name string, old string, flag int) (err error) {
+ var _p0 *byte
+ _p0, err = BytePtrFromString(name)
+ if err != nil {
+ return
+ }
+ var _p1 *byte
+ _p1, err = BytePtrFromString(old)
+ if err != nil {
+ return
+ }
+ r0, _, e1 := Syscall(SYS_BIND, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(flag))
+ use(unsafe.Pointer(_p0))
+ use(unsafe.Pointer(_p1))
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func mount(fd int, afd int, old string, flag int, aname string) (err error) {
+ var _p0 *byte
+ _p0, err = BytePtrFromString(old)
+ if err != nil {
+ return
+ }
+ var _p1 *byte
+ _p1, err = BytePtrFromString(aname)
+ if err != nil {
+ return
+ }
+ r0, _, e1 := Syscall6(SYS_MOUNT, uintptr(fd), uintptr(afd), uintptr(unsafe.Pointer(_p0)), uintptr(flag), uintptr(unsafe.Pointer(_p1)), 0)
+ use(unsafe.Pointer(_p0))
+ use(unsafe.Pointer(_p1))
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func wstat(path string, edir []byte) (err error) {
+ var _p0 *byte
+ _p0, err = BytePtrFromString(path)
+ if err != nil {
+ return
+ }
+ var _p1 unsafe.Pointer
+ if len(edir) > 0 {
+ _p1 = unsafe.Pointer(&edir[0])
+ } else {
+ _p1 = unsafe.Pointer(&_zero)
+ }
+ r0, _, e1 := Syscall(SYS_WSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(edir)))
+ use(unsafe.Pointer(_p0))
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func chdir(path string) (err error) {
+ var _p0 *byte
+ _p0, err = BytePtrFromString(path)
+ if err != nil {
+ return
+ }
+ r0, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0)
+ use(unsafe.Pointer(_p0))
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func Dup(oldfd int, newfd int) (fd int, err error) {
+ r0, _, e1 := Syscall(SYS_DUP, uintptr(oldfd), uintptr(newfd), 0)
+ fd = int(r0)
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func Pread(fd int, p []byte, offset int64) (n int, err error) {
+ var _p0 unsafe.Pointer
+ if len(p) > 0 {
+ _p0 = unsafe.Pointer(&p[0])
+ } else {
+ _p0 = unsafe.Pointer(&_zero)
+ }
+ r0, _, e1 := Syscall6(SYS_PREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), uintptr(offset>>32), 0)
+ n = int(r0)
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func Pwrite(fd int, p []byte, offset int64) (n int, err error) {
+ var _p0 unsafe.Pointer
+ if len(p) > 0 {
+ _p0 = unsafe.Pointer(&p[0])
+ } else {
+ _p0 = unsafe.Pointer(&_zero)
+ }
+ r0, _, e1 := Syscall6(SYS_PWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), uintptr(offset>>32), 0)
+ n = int(r0)
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func Close(fd int) (err error) {
+ r0, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0)
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func Fstat(fd int, edir []byte) (n int, err error) {
+ var _p0 unsafe.Pointer
+ if len(edir) > 0 {
+ _p0 = unsafe.Pointer(&edir[0])
+ } else {
+ _p0 = unsafe.Pointer(&_zero)
+ }
+ r0, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(_p0), uintptr(len(edir)))
+ n = int(r0)
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
+func Fwstat(fd int, edir []byte) (err error) {
+ var _p0 unsafe.Pointer
+ if len(edir) > 0 {
+ _p0 = unsafe.Pointer(&edir[0])
+ } else {
+ _p0 = unsafe.Pointer(&_zero)
+ }
+ r0, _, e1 := Syscall(SYS_FWSTAT, uintptr(fd), uintptr(_p0), uintptr(len(edir)))
+ if int32(r0) == -1 {
+ err = e1
+ }
+ return
+}
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2013 The Go Authors. All rights reserved.
+// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Although the return value has type float64, it will always be an integral value.
//
// To compute the number of allocations, the function will first be run once as
-// a warm-up. The average number of allocations over the specified number of
+// a warm-up. The average number of allocations over the specified number of
// runs will then be measured and returned.
//
// AllocsPerRun sets GOMAXPROCS to 1 during its measurement and will restore
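A minimal sketch of using `testing.AllocsPerRun` outside a test binary. The package-level `sink` variable is an assumption introduced here so that the allocation escapes to the heap and is actually counted:

```go
package main

import (
	"fmt"
	"testing"
)

var sink []byte // keep the slice reachable so the allocation escapes to the heap

func main() {
	// Each run performs exactly one heap allocation, so the average
	// over all runs comes out as the integral value 1.
	allocs := testing.AllocsPerRun(100, func() {
		sink = make([]byte, 1024)
	})
	fmt.Println(allocs)
}
```

Without the escape to `sink`, the compiler may stack-allocate the slice and report zero allocations.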
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
netBytes uint64
}
-// StartTimer starts timing a test. This function is called automatically
+// StartTimer starts timing a test. This function is called automatically
 // before a benchmark starts, but it can also be used to resume timing after
// a call to StopTimer.
func (b *B) StartTimer() {
}
}
-// StopTimer stops timing a test. This can be used to pause the timer
+// StopTimer stops timing a test. This can be used to pause the timer
// while performing complex initialization that you don't
// want to measure.
func (b *B) StopTimer() {
return b.result
}
-// launch launches the benchmark function. It gradually increases the number
+// launch launches the benchmark function. It gradually increases the number
// of benchmark iterations until the benchmark runs for the requested benchtime.
// It prints timing information in this form
// testing.BenchmarkHello 100000 19 ns/op
// An internal function but exported because it is cross-package; part of the implementation
// of the "go test" command.
func RunBenchmarks(matchString func(pat, str string) (bool, error), benchmarks []InternalBenchmark) {
+ runBenchmarksInternal(matchString, benchmarks)
+}
+
+func runBenchmarksInternal(matchString func(pat, str string) (bool, error), benchmarks []InternalBenchmark) bool {
// If no flag was specified, don't run benchmarks.
if len(*matchBenchmarks) == 0 {
- return
+ return true
}
// Collect matching benchmarks and determine longest name.
maxprocs := 1
}
}
}
+ ok := true
for _, Benchmark := range bs {
for _, procs := range cpuList {
runtime.GOMAXPROCS(procs)
fmt.Printf("%-*s\t", maxlen, benchName)
r := b.run()
if b.failed {
+ ok = false
// The output could be very long here, but probably isn't.
// We print it all, regardless, because we don't want to trim the reason
// the benchmark failed.
}
}
}
+ return ok
}
// trimOutput shortens the output from a benchmark, which can be very long.
var ErrTimeout = errors.New("timeout")
// TimeoutReader returns ErrTimeout on the second read
-// with no data. Subsequent calls to read succeed.
+// with no data. Subsequent calls to read succeed.
func TimeoutReader(r io.Reader) io.Reader { return &timeoutReader{r, 0} }
type timeoutReader struct {
"strings"
)
-var defaultMaxCount *int = flag.Int("quickchecks", 100, "The default number of iterations for each check")
+var defaultMaxCount = flag.Int("quickchecks", 100, "The default number of iterations for each check")
// A Generator can generate random values of its own type.
type Generator interface {
case reflect.Uintptr:
v.SetUint(uint64(randInt64(rand)))
case reflect.Map:
- numElems := rand.Intn(size)
- v.Set(reflect.MakeMap(concrete))
- for i := 0; i < numElems; i++ {
- key, ok1 := sizedValue(concrete.Key(), rand, size)
- value, ok2 := sizedValue(concrete.Elem(), rand, size)
- if !ok1 || !ok2 {
- return reflect.Value{}, false
+ if generateNilValue(rand) {
+ v.Set(reflect.Zero(concrete)) // Generate nil map.
+ } else {
+ numElems := rand.Intn(size)
+ v.Set(reflect.MakeMap(concrete))
+ for i := 0; i < numElems; i++ {
+ key, ok1 := sizedValue(concrete.Key(), rand, size)
+ value, ok2 := sizedValue(concrete.Elem(), rand, size)
+ if !ok1 || !ok2 {
+ return reflect.Value{}, false
+ }
+ v.SetMapIndex(key, value)
}
- v.SetMapIndex(key, value)
}
case reflect.Ptr:
- if rand.Intn(size) == 0 {
+ if generateNilValue(rand) {
v.Set(reflect.Zero(concrete)) // Generate nil pointer.
} else {
elem, ok := sizedValue(concrete.Elem(), rand, size)
v.Elem().Set(elem)
}
case reflect.Slice:
- numElems := rand.Intn(size)
- sizeLeft := size - numElems
- v.Set(reflect.MakeSlice(concrete, numElems, numElems))
- for i := 0; i < numElems; i++ {
- elem, ok := sizedValue(concrete.Elem(), rand, sizeLeft)
- if !ok {
- return reflect.Value{}, false
+ if generateNilValue(rand) {
+ v.Set(reflect.Zero(concrete)) // Generate nil slice.
+ } else {
+ slCap := rand.Intn(size)
+ slLen := rand.Intn(slCap + 1)
+ sizeLeft := size - slCap
+ v.Set(reflect.MakeSlice(concrete, slLen, slCap))
+ for i := 0; i < slLen; i++ {
+ elem, ok := sizedValue(concrete.Elem(), rand, sizeLeft)
+ if !ok {
+ return reflect.Value{}, false
+ }
+ v.Index(i).Set(elem)
}
- v.Index(i).Set(elem)
}
case reflect.Array:
for i := 0; i < v.Len(); i++ {
}
// Check looks for an input to f, any function that returns bool,
-// such that f returns false. It calls f repeatedly, with arbitrary
-// values for each argument. If f returns false on a given input,
+// such that f returns false. It calls f repeatedly, with arbitrary
+// values for each argument. If f returns false on a given input,
// Check returns that input as a *CheckError.
// For example:
//
// t.Error(err)
// }
// }
-func Check(f interface{}, config *Config) (err error) {
+func Check(f interface{}, config *Config) error {
if config == nil {
config = &defaultConfig
}
fVal, fType, ok := functionAndType(f)
if !ok {
- err = SetupError("argument is not a function")
- return
+ return SetupError("argument is not a function")
}
if fType.NumOut() != 1 {
- err = SetupError("function does not return one value")
- return
+ return SetupError("function does not return one value")
}
if fType.Out(0).Kind() != reflect.Bool {
- err = SetupError("function does not return a bool")
- return
+ return SetupError("function does not return a bool")
}
arguments := make([]reflect.Value, fType.NumIn())
maxCount := config.getMaxCount()
for i := 0; i < maxCount; i++ {
- err = arbitraryValues(arguments, fType, config, rand)
+ err := arbitraryValues(arguments, fType, config, rand)
if err != nil {
- return
+ return err
}
if !fVal.Call(arguments)[0].Bool() {
- err = &CheckError{i + 1, toInterfaces(arguments)}
- return
+ return &CheckError{i + 1, toInterfaces(arguments)}
}
}
- return
+ return nil
}
// CheckEqual looks for an input on which f and g return different results.
// It calls f and g repeatedly with arbitrary values for each argument.
// If f and g return different answers, CheckEqual returns a *CheckEqualError
// describing the input and the outputs.
-func CheckEqual(f, g interface{}, config *Config) (err error) {
+func CheckEqual(f, g interface{}, config *Config) error {
if config == nil {
config = &defaultConfig
}
x, xType, ok := functionAndType(f)
if !ok {
- err = SetupError("f is not a function")
- return
+ return SetupError("f is not a function")
}
y, yType, ok := functionAndType(g)
if !ok {
- err = SetupError("g is not a function")
- return
+ return SetupError("g is not a function")
}
if xType != yType {
- err = SetupError("functions have different types")
- return
+ return SetupError("functions have different types")
}
arguments := make([]reflect.Value, xType.NumIn())
maxCount := config.getMaxCount()
for i := 0; i < maxCount; i++ {
- err = arbitraryValues(arguments, xType, config, rand)
+ err := arbitraryValues(arguments, xType, config, rand)
if err != nil {
- return
+ return err
}
xOut := toInterfaces(x.Call(arguments))
yOut := toInterfaces(y.Call(arguments))
if !reflect.DeepEqual(xOut, yOut) {
- err = &CheckEqualError{CheckError{i + 1, toInterfaces(arguments)}, xOut, yOut}
- return
+ return &CheckEqualError{CheckError{i + 1, toInterfaces(arguments)}, xOut, yOut}
}
}
- return
+ return nil
}
// arbitraryValues writes Values to args such that args contains Values
}
return strings.Join(s, ", ")
}
+
+func generateNilValue(r *rand.Rand) bool { return r.Intn(20) == 0 }
f := func(a A) bool { return true }
Check(f, nil)
}
-
-// Some serialization formats (e.g. encoding/pem) cannot distinguish
-// between a nil and an empty map or slice, so avoid generating the
-// zero value for these.
-func TestNonZeroSliceAndMap(t *testing.T) {
- type Q struct {
- M map[int]int
- S []int
- }
- f := func(q Q) bool {
- return q.M != nil && q.S != nil
- }
- err := Check(f, nil)
- if err != nil {
- t.Fatal(err)
- }
-}
//
// The benchmark function must run the target code b.N times.
// During benchmark execution, b.N is adjusted until the benchmark function lasts
-// long enough to be timed reliably. The output
+// long enough to be timed reliably. The output
// BenchmarkHello 10000000 282 ns/op
// means that the loop ran 10000000 times at a speed of 282 ns per loop.
//
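The contract above (run the target code exactly `b.N` times) can be sketched with a benchmark driven directly through `testing.Benchmark`; `BenchmarkRepeat` and its body are invented for illustration:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// BenchmarkRepeat runs the target code b.N times, as the contract requires.
// The framework grows b.N until the benchmark lasts long enough to time reliably.
func BenchmarkRepeat(b *testing.B) {
	for i := 0; i < b.N; i++ {
		strings.Repeat("x", 64)
	}
}

func main() {
	// testing.Benchmark runs a benchmark function outside "go test"
	// and returns the measured result.
	r := testing.Benchmark(BenchmarkRepeat)
	fmt.Println(r.N > 0) // true: at least one iteration was timed
}
```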
var (
// The short flag requests that tests run more quickly, but its functionality
- // is provided by test writers themselves. The testing package is just its
- // home. The all.bash installation script sets it to make installation more
+ // is provided by test writers themselves. The testing package is just its
+ // home. The all.bash installation script sets it to make installation more
// efficient, but by default the flag is off so a plain "go test" will do a
// full test of the package.
short = flag.Bool("test.short", false, "run smaller test suite to save time")
// This previous version duplicated code (those lines are in
// tRunner no matter what), but worse the goroutine teardown
// implicit in runtime.Goexit was not guaranteed to complete
- // before the test exited. If a test deferred an important cleanup
+ // before the test exited. If a test deferred an important cleanup
// function (like removing temporary files), there was no guarantee
- // it would run on a test failure. Because we send on c.signal during
+ // it would run on a test failure. Because we send on c.signal during
// a top-of-stack deferred function now, we know that the send
// only happens after any other stacked defers have completed.
c.finished = true
testOk := RunTests(m.matchString, m.tests)
exampleOk := RunExamples(m.matchString, m.examples)
stopAlarm()
- if !testOk || !exampleOk {
+ if !testOk || !exampleOk || !runBenchmarksInternal(m.matchString, m.benchmarks) {
fmt.Println("FAIL")
after()
return 1
}
fmt.Println("PASS")
- RunBenchmarks(m.matchString, m.benchmarks)
after()
return 0
}
-// Copyright 2014 The Go Authors. All rights reserved.
+// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package scanner provides a scanner and tokenizer for UTF-8-encoded text.
// It takes an io.Reader providing the source, which then can be tokenized
-// through repeated calls to the Scan function. For compatibility with
+// through repeated calls to the Scan function. For compatibility with
// existing tools, the NUL character is not allowed. If the first character
// in the source is a UTF-8 encoded byte order mark (BOM), it is discarded.
//
// By default, a Scanner skips white space and Go comments and recognizes all
-// literals as defined by the Go language specification. It may be
+// literals as defined by the Go language specification. It may be
// customized to recognize only a subset of those literals and to recognize
// different identifier and white space characters.
package scanner
if !pos.IsValid() {
pos = s.Pos()
}
- fmt.Fprintf(os.Stderr, "%s: %s\n", pos, msg)
+ fmt.Fprintf(os.Stderr, "text/scanner: %s: %s\n", pos, msg)
}
func (s *Scanner) isIdentRune(ch rune, i int) bool {
// a b c d.
// 123 12345 1234567 123456789.
}
+
+func Example_elastic() {
+ // Observe how the b's and the d's, despite appearing in the
+ // second cell of each line, belong to different columns.
+ w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, '.', tabwriter.AlignRight|tabwriter.Debug)
+ fmt.Fprintln(w, "a\tb\tc")
+ fmt.Fprintln(w, "aa\tbb\tcc")
+ fmt.Fprintln(w, "aaa\t") // trailing tab
+ fmt.Fprintln(w, "aaaa\tdddd\teeee")
+ w.Flush()
+
+ // output:
+ // ....a|..b|c
+ // ...aa|.bb|cc
+ // ..aaa|
+ // .aaaa|.dddd|eeee
+}
+
+func Example_trailingTab() {
+ // Observe that the third line has no trailing tab,
+ // so its final cell is not part of an aligned column.
+ const padding = 3
+ w := tabwriter.NewWriter(os.Stdout, 0, 0, padding, '-', tabwriter.AlignRight|tabwriter.Debug)
+ fmt.Fprintln(w, "a\tb\taligned\t")
+ fmt.Fprintln(w, "aa\tbb\taligned\t")
+ fmt.Fprintln(w, "aaa\tbbb\tunaligned") // no trailing tab
+ fmt.Fprintln(w, "aaaa\tbbbb\taligned\t")
+ w.Flush()
+
+ // output:
+ // ------a|------b|---aligned|
+ // -----aa|-----bb|---aligned|
+ // ----aaa|----bbb|unaligned
+ // ---aaaa|---bbbb|---aligned|
+}
// A Writer is a filter that inserts padding around tab-delimited
// columns in its input to align them in the output.
//
-// The Writer treats incoming bytes as UTF-8 encoded text consisting
-// of cells terminated by (horizontal or vertical) tabs or line
-// breaks (newline or formfeed characters). Cells in adjacent lines
-// constitute a column. The Writer inserts padding as needed to
-// make all cells in a column have the same width, effectively
-// aligning the columns. It assumes that all characters have the
-// same width except for tabs for which a tabwidth must be specified.
-// Note that cells are tab-terminated, not tab-separated: trailing
-// non-tab text at the end of a line does not form a column cell.
+// The Writer treats incoming bytes as UTF-8-encoded text consisting
+// of cells terminated by horizontal ('\t') or vertical ('\v') tabs,
+// and newline ('\n') or formfeed ('\f') characters; both newline and
+// formfeed act as line breaks.
+//
+// Tab-terminated cells in contiguous lines constitute a column. The
+// Writer inserts padding as needed to make all cells in a column have
+// the same width, effectively aligning the columns. It assumes that
+// all characters have the same width, except for tabs for which a
+// tabwidth must be specified. Column cells must be tab-terminated, not
+// tab-separated: non-tab terminated trailing text at the end of a line
+// forms a cell but that cell is not part of an aligned column.
+// For instance, in this example (where | stands for a horizontal tab):
+//
+// aaaa|bbb|d
+// aa |b |dd
+// a |
+// aa |cccc|eee
+//
+// the b and c are in distinct columns (the b column is not contiguous
+// all the way). The d and e are not in a column at all (there's no
+// terminating tab, nor would the column be contiguous).
//
// The Writer assumes that all Unicode code points have the same width;
-// this may not be true in some fonts.
+// this may not be true in some fonts or if the string contains combining
+// characters.
//
// If DiscardEmptyColumns is set, empty columns that are terminated
// entirely by vertical (or "soft") tabs are discarded. Columns
// width of the escaped text is always computed excluding the Escape
// characters.
//
-// The formfeed character ('\f') acts like a newline but it also
-// terminates all columns in the current line (effectively calling
-// Flush). Cells in the next line start new columns. Unless found
+// The formfeed character acts like a newline but it also terminates
+// all columns in the current line (effectively calling Flush). Tab-
+// terminated cells in the next line start new columns. Unless found
// inside an HTML tag or inside an escaped text segment, formfeed
// characters appear as newlines in the output.
//
// that any data buffered in the Writer is written to output. Any
// incomplete escape sequence at the end is considered
// complete for formatting purposes.
-//
-func (b *Writer) Flush() (err error) {
+func (b *Writer) Flush() error {
+ return b.flush()
+}
+
+func (b *Writer) flush() (err error) {
defer b.reset() // even in the presence of errors
defer handlePanic(&err, "Flush")
// format contents of buffer
b.format(0, 0, len(b.lines))
-
- return
+ return nil
}
var hbar = []byte("---\n")
// execution stops, but partial results may already have been written to
// the output writer.
// A template may be executed safely in parallel.
-func (t *Template) Execute(wr io.Writer, data interface{}) (err error) {
+func (t *Template) Execute(wr io.Writer, data interface{}) error {
+ return t.execute(wr, data)
+}
+
+func (t *Template) execute(wr io.Writer, data interface{}) (err error) {
defer errRecover(&err)
value := reflect.ValueOf(data)
state := &state{
// idealConstant is called to return the value of a number in a context where
// we don't know the type. In that case, the syntax of the number tells us
-// its type, and we use Go rules to resolve. Note there is no such thing as
+// its type, and we use Go rules to resolve. Note there is no such thing as
// a uint ideal constant in this situation - the value must be of int type.
func (s *state) idealConstant(constant *parse.NumberNode) reflect.Value {
// These are ideal constants but we don't know the type
switch {
case constant.IsComplex:
return reflect.ValueOf(constant.Complex128) // incontrovertible.
- case constant.IsFloat && !isHexConstant(constant.Text) && strings.IndexAny(constant.Text, ".eE") >= 0:
+ case constant.IsFloat && !isHexConstant(constant.Text) && strings.ContainsAny(constant.Text, ".eE"):
return reflect.ValueOf(constant.Float64)
case constant.IsInt:
n := int(constant.Int64)
)
// evalCall executes a function or method call. If it's a method, fun already has the receiver bound, so
-// it looks just like a function call. The arg list, if non-nil, includes (in the manner of the shell), arg[0]
+// it looks just like a function call. The arg list, if non-nil, includes (in the manner of the shell), arg[0]
// as the function itself.
func (s *state) evalCall(dot, fun reflect.Value, node parse.Node, name string, args []parse.Node, final reflect.Value) reflect.Value {
if args != nil {
// Indexing.
// index returns the result of indexing its first argument by the following
-// arguments. Thus "index x 1 2 3" is, in Go syntax, x[1][2][3]. Each
+// arguments. Thus "index x 1 2 3" is, in Go syntax, x[1][2][3]. Each
// indexed item must be a map, slice, or array.
func index(item interface{}, indices ...interface{}) (interface{}, error) {
v := reflect.ValueOf(item)
// HTMLEscapeString returns the escaped HTML equivalent of the plain text data s.
func HTMLEscapeString(s string) string {
// Avoid allocation if we can.
- if strings.IndexAny(s, `'"&<>`) < 0 {
+ if !strings.ContainsAny(s, `'"&<>`) {
return s
}
var b bytes.Buffer
package template
-// Tests for mulitple-template parsing and execution.
+// Tests for multiple-template parsing and execution.
import (
"bytes"
// accept consumes the next rune if it's from the valid set.
func (l *lexer) accept(valid string) bool {
- if strings.IndexRune(valid, l.next()) >= 0 {
+ if strings.ContainsRune(valid, l.next()) {
return true
}
l.backup()
// acceptRun consumes a run of runes from the valid set.
func (l *lexer) acceptRun(valid string) {
- for strings.IndexRune(valid, l.next()) >= 0 {
+ for strings.ContainsRune(valid, l.next()) {
}
l.backup()
}
// templates described in the argument string. The top-level template will be
// given the specified name. If an error is encountered, parsing stops and an
// empty map is returned with the error.
-func Parse(name, text, leftDelim, rightDelim string, funcs ...map[string]interface{}) (treeSet map[string]*Tree, err error) {
- treeSet = make(map[string]*Tree)
+func Parse(name, text, leftDelim, rightDelim string, funcs ...map[string]interface{}) (map[string]*Tree, error) {
+ treeSet := make(map[string]*Tree)
t := New(name)
t.text = text
- _, err = t.Parse(text, leftDelim, rightDelim, treeSet, funcs...)
- return
+ _, err := t.Parse(text, leftDelim, rightDelim, treeSet, funcs...)
+ return treeSet, err
}
// next returns the next token.
// Template:
// {{template stringValue pipeline}}
-// Template keyword is past. The name must be something that can evaluate
+// Template keyword is past. The name must be something that can evaluate
// to a string.
func (t *Tree) templateControl() Node {
const context = "template clause"
func TestNumberParse(t *testing.T) {
for _, test := range numberTests {
- // If fmt.Sscan thinks it's complex, it's complex. We can't trust the output
+ // If fmt.Sscan thinks it's complex, it's complex. We can't trust the output
// because imaginary comes out as a number.
var c complex128
typ := itemNumber
-// Copyright 2011 The Go Authors. All rights reserved.
+// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// compatibility with fixed-width Unix time formats.
//
// A decimal point followed by one or more zeros represents a fractional
-// second, printed to the given number of decimal places. A decimal point
+// second, printed to the given number of decimal places. A decimal point
// followed by one or more nines represents a fractional second, printed to
// the given number of decimal places, with trailing zeros removed.
// When parsing (only), the input may contain a fractional second
// -07 ±hh
// Replacing the sign in the format with a Z triggers
// the ISO 8601 behavior of printing Z instead of an
-// offset for the UTC zone. Thus:
+// offset for the UTC zone. Thus:
// Z0700 Z or ±hhmm
// Z07:00 Z or ±hh:mm
// Z07 Z or ±hh
b = append(b, "am"...)
}
case stdISO8601TZ, stdISO8601ColonTZ, stdISO8601SecondsTZ, stdISO8601ShortTZ, stdISO8601ColonSecondsTZ, stdNumTZ, stdNumColonTZ, stdNumSecondsTz, stdNumShortTZ, stdNumColonSecondsTZ:
- // Ugly special case. We cheat and take the "Z" variants
+ // Ugly special case. We cheat and take the "Z" variants
// to mean "the time zone as formatted for ISO 8601".
if offset == 0 && (std == stdISO8601TZ || std == stdISO8601ColonTZ || std == stdISO8601SecondsTZ || std == stdISO8601ShortTZ || std == stdISO8601ColonSecondsTZ) {
b = append(b, 'Z')
_, err := Parse(test.format, test.value)
if err == nil {
t.Errorf("expected error for %q %q", test.format, test.value)
- } else if strings.Index(err.Error(), test.expect) < 0 {
+ } else if !strings.Contains(err.Error(), test.expect) {
t.Errorf("expected error with %q for %q %q; got %s", test.expect, test.format, test.value, err)
}
}
// when is a helper function for setting the 'when' field of a runtimeTimer.
// It returns what the time will be, in nanoseconds, Duration d in the future.
-// If d is negative, it is ignored. If the returned value would be less than
+// If d is negative, it is ignored. If the returned value would be less than
// zero because of an overflow, MaxInt64 is returned.
func when(d Duration) int64 {
if d <= 0 {
return t
}
-// Stop turns off a ticker. After Stop, no more ticks will be sent.
+// Stop turns off a ticker. After Stop, no more ticks will be sent.
// Stop does not close the channel, to prevent a read from the channel succeeding
// incorrectly.
func (t *Ticker) Stop() {
// channel only. While Tick is useful for clients that have no need to shut down
// the Ticker, be aware that without a way to shut it down the underlying
// Ticker cannot be recovered by the garbage collector; it "leaks".
+// Unlike NewTicker, Tick will return nil if d <= 0.
func Tick(d Duration) <-chan Time {
if d <= 0 {
return nil
}
}
-// Test that a bug tearing down a ticker has been fixed. This routine should not deadlock.
+// Test that a bug tearing down a ticker has been fixed. This routine should not deadlock.
func TestTeardown(t *testing.T) {
Delta := 100 * Millisecond
if testing.Short() {
// A Time represents an instant in time with nanosecond precision.
//
// Programs using times should typically store and pass them as values,
-// not pointers. That is, time variables and struct fields should be of
-// type time.Time, not *time.Time. A Time value can be used by
+// not pointers. That is, time variables and struct fields should be of
+// type time.Time, not *time.Time. A Time value can be used by
// multiple goroutines simultaneously.
//
// Time instants can be compared using the Before, After, and Equal methods.
// 00:00:00 UTC, which would be 12-31-(-1) 19:00:00 in New York.
//
// The zero Time value does not force a specific epoch for the time
-// representation. For example, to use the Unix epoch internally, we
+// representation. For example, to use the Unix epoch internally, we
// could define that to distinguish a zero value from Jan 1 1970, that
// time would be represented by sec=-1, nsec=1e9. However, it does
// suggest a representation, namely using 1-1-1 00:00:00 UTC as the
// The Add and Sub computations are oblivious to the choice of epoch.
//
// The presentation computations - year, month, minute, and so on - all
-// rely heavily on division and modulus by positive constants. For
+// rely heavily on division and modulus by positive constants. For
// calendrical calculations we want these divisions to round down, even
// for negative values, so that the remainder is always positive, but
// Go's division (like most hardware division instructions) rounds to
-// zero. We can still do those computations and then adjust the result
+// zero. We can still do those computations and then adjust the result
// for a negative numerator, but it's annoying to write the adjustment
-// over and over. Instead, we can change to a different epoch so long
+// over and over. Instead, we can change to a different epoch so long
// ago that all the times we care about will be positive, and then round
-// to zero and round down coincide. These presentation routines already
+// to zero and round down coincide. These presentation routines already
// have to add the zone offset, so adding the translation to the
-// alternate epoch is cheap. For example, having a non-negative time t
+// alternate epoch is cheap. For example, having a non-negative time t
// means that we can write
//
// sec = t % 60
//
// The calendar runs on an exact 400 year cycle: a 400-year calendar
// printed for 1970-2469 will apply as well to 2370-2769. Even the days
-// of the week match up. It simplifies the computations to choose the
+// of the week match up. It simplifies the computations to choose the
// cycle boundaries so that the exceptional years are always delayed as
-// long as possible. That means choosing a year equal to 1 mod 400, so
+// long as possible. That means choosing a year equal to 1 mod 400, so
// that the first leap year is the 4th year, the first missed leap year
// is the 100th year, and the missed missed leap year is the 400th year.
// So we'd prefer instead to print a calendar for 2001-2400 and reuse it
// routines would then be invalid when displaying the epoch in time zones
// west of UTC, since it is year 0. It doesn't seem tenable to say that
// printing the zero time correctly isn't supported in half the time
-// zones. By comparison, it's reasonable to mishandle some times in
+// zones. By comparison, it's reasonable to mishandle some times in
// the year -292277022399.
//
// All this is opaque to clients of the API and can be changed if a
}
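The round-toward-zero versus round-down distinction described above can be made concrete with a standalone sketch (divmodFloor is an illustrative helper, not part of the package):

```go
package main

import "fmt"

// divmodFloor adjusts Go's truncated division so that the quotient
// rounds down and the remainder is always non-negative, which is
// what calendrical calculations want.
func divmodFloor(t, d int64) (q, r int64) {
	q, r = t/d, t%d
	if r < 0 {
		q--
		r += d
	}
	return
}

func main() {
	// Go (like most hardware) truncates toward zero:
	fmt.Println(-75/60, -75%60) // -1 -15
	// Floored: -75 seconds is 2 minutes before the epoch, plus 45 seconds.
	fmt.Println(divmodFloor(-75, 60)) // -2 45
}
```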
// A Duration represents the elapsed time between two instants
-// as an int64 nanosecond count. The representation limits the
+// as an int64 nanosecond count. The representation limits the
// largest representable duration to approximately 290 years.
type Duration int64
maxDuration Duration = 1<<63 - 1
)
-// Common durations. There is no definition for units of Day or larger
+// Common durations. There is no definition for units of Day or larger
// to avoid confusion across daylight savings time zone transitions.
//
// To count the number of units in a Duration, divide:
)
// String returns a string representing the duration in the form "72h3m0.5s".
-// Leading zero units are omitted. As a special case, durations less than one
+// Leading zero units are omitted. As a special case, durations less than one
// second format use a smaller unit (milli-, micro-, or nanoseconds) to ensure
-// that the leading digit is non-zero. The zero duration formats as 0,
+// that the leading digit is non-zero. The zero duration formats as 0,
// with no unit.
func (d Duration) String() string {
// Largest time is 2540400h10m10.000000000s
}
// daysBefore[m] counts the number of days in a non-leap year
-// before month m begins. There is an entry for m=12, counting
+// before month m begins. There is an entry for m=12, counting
// the number of days before January of next year (365).
var daysBefore = [...]int32{
0,
// UnmarshalJSON implements the json.Unmarshaler interface.
// The time is expected to be a quoted string in RFC 3339 format.
-func (t *Time) UnmarshalJSON(data []byte) (err error) {
+func (t *Time) UnmarshalJSON(data []byte) error {
// Fractional seconds are handled implicitly by Parse.
+ var err error
*t, err = Parse(`"`+RFC3339+`"`, string(data))
- return
+ return err
}
// MarshalText implements the encoding.TextMarshaler interface.
// UnmarshalText implements the encoding.TextUnmarshaler interface.
// The time is expected to be in RFC 3339 format.
-func (t *Time) UnmarshalText(data []byte) (err error) {
+func (t *Time) UnmarshalText(data []byte) error {
// Fractional seconds are handled implicitly by Parse.
+ var err error
*t, err = Parse(RFC3339, string(data))
- return
+ return err
}
// Unix returns the local Time corresponding to the given Unix time,
//
// A daylight savings time transition skips or repeats times.
// For example, in the United States, March 13, 2011 2:15am never occurred,
-// while November 6, 2011 1:15am occurred twice. In such cases, the
+// while November 6, 2011 1:15am occurred twice. In such cases, the
// choice of time zone, and therefore the time, is not well-defined.
// Date returns a time that is correct in one of the two zones involved
// in the transition, but it does not guarantee which.
// the subsequent tests fail.
func TestZoneData(t *testing.T) {
lt := Now()
- // PST is 8 hours west, PDT is 7 hours west. We could use the name but it's not unique.
+ // PST is 8 hours west, PDT is 7 hours west. We could use the name but it's not unique.
if name, off := lt.Zone(); off != -8*60*60 && off != -7*60*60 {
t.Errorf("Unable to find US Pacific time zone data for testing; time zone is %q offset %d", name, off)
t.Error("Likely problem: the time zone files have not been installed.")
return loadZoneData(buf)
}
-// There are 500+ zoneinfo files. Rather than distribute them all
+// There are 500+ zoneinfo files. Rather than distribute them all
// individually, we ship them in an uncompressed zip file.
// Used this way, the zip file format serves as a commonly readable
-// container for the individual small files. We choose zip over tar
+// container for the individual small files. We choose zip over tar
// because zip files have a contiguous table of contents, making
// individual file lookups faster, and because the per-file overhead
// in a zip file is considerably less than tar's 512 bytes.
}
// Test that we get the correct results for times before the first
-// transition time. To do this we explicitly check early dates in a
+// transition time. To do this we explicitly check early dates in a
// couple of specific timezones.
func TestFirstZone(t *testing.T) {
time.ForceZipFileForTesting(true)
// IsPrint reports whether the rune is defined as printable by Go. Such
// characters include letters, marks, numbers, punctuation, symbols, and the
// ASCII space character, from categories L, M, N, P, S and the ASCII space
-// character. This categorization is the same as IsGraphic except that the
+// character. This categorization is the same as IsGraphic except that the
// only spacing character is ASCII space, U+0020.
func IsPrint(r rune) bool {
if uint32(r) <= MaxLatin1 {
LatinOffset int // number of entries in R16 with Hi <= MaxLatin1
}
-// Range16 represents of a range of 16-bit Unicode code points. The range runs from Lo to Hi
+// Range16 represents a range of 16-bit Unicode code points. The range runs from Lo to Hi
// inclusive and has the specified stride.
type Range16 struct {
Lo uint16
}
// Range32 represents a range of Unicode code points and is used when one or
-// more of the values will not fit in 16 bits. The range runs from Lo to Hi
+// more of the values will not fit in 16 bits. The range runs from Lo to Hi
// inclusive and has the specified stride. Lo and Hi must always be >= 1<<16.
type Range32 struct {
Lo uint32
// code point to one code point) case conversion.
// The range runs from Lo to Hi inclusive, with a fixed stride of 1. Deltas
// are the number to add to the code point to reach the code point for a
-// different case for that character. They may be negative. If zero, it
+// different case for that character. They may be negative. If zero, it
// means the character is in the corresponding case. There is a special
// case representing sequences of alternating corresponding Upper and Lower
-// pairs. It appears with a fixed Delta of
+// pairs. It appears with a fixed Delta of
// {UpperLower, UpperLower, UpperLower}
// The constant UpperLower has an otherwise impossible delta value.
type CaseRange struct {
return r1
}
-// caseOrbit is defined in tables.go as []foldPair. Right now all the
+// caseOrbit is defined in tables.go as []foldPair. Right now all the
// entries fit in uint16, so use uint16. If that changes, compilation
// will fail (the constants in the composite literal will not fit in uint16)
// and the types here can change to uint32.
}
// SimpleFold iterates over Unicode code points equivalent under
-// the Unicode-defined simple case folding. Among the code points
+// the Unicode-defined simple case folding. Among the code points
// equivalent to rune (including rune itself), SimpleFold returns the
// smallest rune > r if one exists, or else the smallest rune >= 0.
//
return rune(caseOrbit[lo].To)
}
- // No folding specified. This is a one- or two-element
+ // No folding specified. This is a one- or two-element
// equivalence class containing rune and ToLower(rune)
// and ToUpper(rune) if they are different from rune.
if l := ToLower(r); l != r {
logger.Fatal("unknown category", name)
}
// We generate an UpperCase name to serve as concise documentation and an _UnderScored
- // name to store the data. This stops godoc dumping all the tables but keeps them
+ // name to store the data. This stops godoc dumping all the tables but keeps them
// available to clients.
// Cases deserving special comments
varDecl := ""
c._case = CaseTitle
}
// Some things such as roman numeral U+2161 don't describe themselves
- // as upper case, but have a lower case. Second-guess them.
+ // as upper case, but have a lower case. Second-guess them.
if c._case == CaseNone && ch.lowerCase != 0 {
c._case = CaseUpper
}
-// Copyright 2012 The Go Authors. All rights reserved.
+// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// the Unicode replacement code point U+FFFD.
func DecodeRune(r1, r2 rune) rune {
if surr1 <= r1 && r1 < surr2 && surr2 <= r2 && r2 < surr3 {
- return (r1-surr1)<<10 | (r2 - surr2) + 0x10000
+ return (r1-surr1)<<10 | (r2 - surr2) + surrSelf
}
return replacementChar
}
// If the rune is not a valid Unicode code point or does not need encoding,
// EncodeRune returns U+FFFD, U+FFFD.
func EncodeRune(r rune) (r1, r2 rune) {
- if r < surrSelf || r > maxRune || IsSurrogate(r) {
+ if r < surrSelf || r > maxRune {
return replacementChar, replacementChar
}
r -= surrSelf
n = 0
for _, v := range s {
switch {
- case v < 0, surr1 <= v && v < surr3, v > maxRune:
- v = replacementChar
- fallthrough
- case v < surrSelf:
+ case 0 <= v && v < surr1, surr3 <= v && v < surrSelf:
+ // normal rune
a[n] = uint16(v)
n++
- default:
+ case surrSelf <= v && v <= maxRune:
+ // needs surrogate sequence
r1, r2 := EncodeRune(v)
a[n] = uint16(r1)
a[n+1] = uint16(r2)
n += 2
+ default:
+ a[n] = uint16(replacementChar)
+ n++
}
}
- return a[0:n]
+ return a[:n]
}
// Decode returns the Unicode code point sequence represented
n := 0
for i := 0; i < len(s); i++ {
switch r := s[i]; {
+ case r < surr1, surr3 <= r:
+ // normal rune
+ a[n] = rune(r)
case surr1 <= r && r < surr2 && i+1 < len(s) &&
surr2 <= s[i+1] && s[i+1] < surr3:
// valid surrogate sequence
a[n] = DecodeRune(rune(r), rune(s[i+1]))
i++
- n++
- case surr1 <= r && r < surr3:
+ default:
// invalid surrogate sequence
a[n] = replacementChar
- n++
- default:
- // normal rune
- a[n] = rune(r)
- n++
}
+ n++
}
- return a[0:n]
+ return a[:n]
}
-// Copyright 2010 The Go Authors. All rights reserved.
+// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
}
}
}
+
+func BenchmarkDecodeValidASCII(b *testing.B) {
+ // "hello world"
+ data := []uint16{104, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100}
+ for i := 0; i < b.N; i++ {
+ Decode(data)
+ }
+}
+
+func BenchmarkDecodeValidJapaneseChars(b *testing.B) {
+ // "日本語日本語日本語"
+ data := []uint16{26085, 26412, 35486, 26085, 26412, 35486, 26085, 26412, 35486}
+ for i := 0; i < b.N; i++ {
+ Decode(data)
+ }
+}
+
+func BenchmarkDecodeRune(b *testing.B) {
+ rs := make([]rune, 10)
+ // U+1D4D0 to U+1D4D4: MATHEMATICAL BOLD SCRIPT CAPITAL LETTERS
+ for i, u := range []rune{'𝓐', '𝓑', '𝓒', '𝓓', '𝓔'} {
+ rs[2*i], rs[2*i+1] = EncodeRune(u)
+ }
+
+ b.ResetTimer()
+ for i := 0; i < b.N; i++ {
+ for j := 0; j < 5; j++ {
+ DecodeRune(rs[2*j], rs[2*j+1])
+ }
+ }
+}
+
+func BenchmarkEncodeValidASCII(b *testing.B) {
+ data := []rune{'h', 'e', 'l', 'l', 'o'}
+ for i := 0; i < b.N; i++ {
+ Encode(data)
+ }
+}
+
+func BenchmarkEncodeValidJapaneseChars(b *testing.B) {
+ data := []rune{'日', '本', '語'}
+ for i := 0; i < b.N; i++ {
+ Encode(data)
+ }
+}
+
+func BenchmarkEncodeRune(b *testing.B) {
+ for i := 0; i < b.N; i++ {
+ for _, u := range []rune{'𝓐', '𝓑', '𝓒', '𝓓', '𝓔'} {
+ EncodeRune(u)
+ }
+ }
+}
// EncodeRune writes into p (which must be large enough) the UTF-8 encoding of the rune.
// It returns the number of bytes written.
func EncodeRune(p []byte, r rune) int {
- // Negative values are erroneous. Making it unsigned addresses the problem.
+ // Negative values are erroneous. Making it unsigned addresses the problem.
switch i := uint32(r); {
case i <= rune1Max:
p[0] = byte(r)
}
}
-// RuneCount returns the number of runes in p. Erroneous and short
+// RuneCount returns the number of runes in p. Erroneous and short
// encodings are treated as single runes of width 1 byte.
func RuneCount(p []byte) int {
np := len(p)
}
// RuneStart reports whether the byte could be the first byte of an encoded,
-// possibly invalid rune. Second and subsequent bytes always have the top two
+// possibly invalid rune. Second and subsequent bytes always have the top two
// bits set to 10.
func RuneStart(b byte) bool { return b&0xC0 != 0x80 }
package unsafe
// ArbitraryType is here for the purposes of documentation only and is not actually
-// part of the unsafe package. It represents the type of an arbitrary Go expression.
+// part of the unsafe package. It represents the type of an arbitrary Go expression.
type ArbitraryType int
-// Pointer represents a pointer to an arbitrary type. There are four special operations
+// Pointer represents a pointer to an arbitrary type. There are four special operations
// available for type Pointer that are not available for other types:
// - A pointer value of any type can be converted to a Pointer.
// - A Pointer can be converted to a pointer value of any type.
func Sizeof(x ArbitraryType) uintptr
// Offsetof returns the offset within the struct of the field represented by x,
-// which must be of the form structValue.field. In other words, it returns the
+// which must be of the form structValue.field. In other words, it returns the
// number of bytes between the start of the struct and the start of the field.
func Offsetof(x ArbitraryType) uintptr
// It is the same as the value returned by reflect.TypeOf(x).Align().
// As a special case, if a variable s is of struct type and f is a field
// within that struct, then Alignof(s.f) will return the required alignment
-// of a field of that type within a struct. This case is the same as the
+// of a field of that type within a struct. This case is the same as the
// value returned by reflect.TypeOf(s.f).FieldAlign().
func Alignof(x ArbitraryType) uintptr
T1: *(x.(*T1)), // ERROR "&T2 literal escapes to heap"
}
}
+
+func dotTypeEscape2() { // #13805
+ {
+ i := 0
+ var v int
+ var x interface{} = i // ERROR "i does not escape"
+ *(&v) = x.(int) // ERROR "&v does not escape"
+ }
+ {
+ i := 0
+ var x interface{} = i // ERROR "i does not escape"
+ sink = x.(int) // ERROR "x.\(int\) escapes to heap"
+
+ }
+ {
+ i := 0 // ERROR "moved to heap: i"
+ var x interface{} = &i // ERROR "&i escapes to heap"
+ sink = x.(*int) // ERROR "x.\(\*int\) escapes to heap"
+ }
+}
package main
func f() {
- v := 1 << 1025; // ERROR "overflow|stupid shift"
+ v := 1 << 1025; // ERROR "overflow|shift count too large"
_ = v
}
package a
import"" // ERROR "import path is empty"
-var? // ERROR "invalid declaration"
+var? // ERROR "illegal character U\+003F '\?'"
-var x int // ERROR "unexpected var"
+var x int // ERROR "unexpected var" "cannot declare name"
func main() {
}
--- /dev/null
+// compile
+
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package p
+
+func f_ssa(x int, p *int) {
+ if false {
+ y := x + 5
+ for {
+ *p = y
+ }
+ }
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package a
+
+var S struct {
+ Str string `tag`
+}
+
+func F() string {
+ v := S
+ return v.Str
+}
--- /dev/null
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package b
+
+import "./a"
+
+func G() string {
+ return a.F()
+}
--- /dev/null
+// compiledir
+
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Inline function misses struct tags.
+
+package ignored
--- /dev/null
+// compile
+
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Mention of field with large offset in struct literal causes crash
+package p
+
+type T struct {
+ Slice [1 << 20][]int
+ Ptr *int
+}
+
+func New(p *int) *T {
+ return &T{Ptr: p}
+}
--- /dev/null
+// errorcheck
+
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package f
+
+import /* // ERROR "import path" */ `
+bogus`
+
+func f(x int /* // ERROR "unexpected semicolon"
+
+*/)
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Expects to see error messages on "p" exponents.
+// Expects to see error messages on 'p' exponents.
package main
x1 = 1.1 // float
x2 = 1e10 // float
x3 = 0x1e10 // integer (e is a hex digit)
- x4 = 0x1p10 // ERROR "malformed floating point constant"
- x5 = 1p10 // ERROR "malformed floating point constant"
- x6 = 0p0 // ERROR "malformed floating point constant"
)
+// 'p' exponents are invalid - the 'p' is not considered
+// part of a floating-point number, but introduces a new
+// (unexpected) name.
+//
+// Error recovery is not ideal and we use a new declaration
+// each time for the parser to recover.
+
+const x4 = 0x1p10 // ERROR "unexpected p10"
+const x5 = 1p10 // ERROR "unexpected p10"
+const x6 = 0p0 // ERROR "unexpected p0"
+
func main() {
fmt.Printf("%g %T\n", x1, x1)
fmt.Printf("%g %T\n", x2, x2)
// goto across declaration not okay
func _() {
goto L // ERROR "goto L jumps over declaration of x at LINE+1|goto jumps over declaration"
- x := 1 // GCCGO_ERROR "defined here"
+ x := 1 // GCCGO_ERROR "defined here"
_ = x
L:
}
x := 1
_ = x
}
- x := 1 // GCCGO_ERROR "defined here"
+ x := 1 // GCCGO_ERROR "defined here"
_ = x
L:
}
// error shows first offending variable
func _() {
goto L // ERROR "goto L jumps over declaration of x at LINE+1|goto jumps over declaration"
- x := 1 // GCCGO_ERROR "defined here"
+ x := 1 // GCCGO_ERROR "defined here"
_ = x
y := 1
_ = y
// goto not okay even if code path is dead
func _() {
goto L // ERROR "goto L jumps over declaration of x at LINE+1|goto jumps over declaration"
- x := 1 // GCCGO_ERROR "defined here"
+ x := 1 // GCCGO_ERROR "defined here"
_ = x
y := 1
_ = y
// goto into inner block not okay
func _() {
goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
- { // GCCGO_ERROR "block starts here"
+ { // GCCGO_ERROR "block starts here"
L:
}
}
// goto backward into inner block still not okay
func _() {
- { // GCCGO_ERROR "block starts here"
+ { // GCCGO_ERROR "block starts here"
L:
}
goto L // ERROR "goto L jumps into block starting at LINE-3|goto jumps into block"
goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
{
{
- { // GCCGO_ERROR "block starts here"
+ { // GCCGO_ERROR "block starts here"
L:
}
}
goto L // ERROR "goto L jumps into block starting at LINE+3|goto jumps into block"
x := 1
_ = x
- { // GCCGO_ERROR "block starts here"
+ { // GCCGO_ERROR "block starts here"
L:
}
}
}
func _() {
- goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
- if true { // GCCGO_ERROR "block starts here"
+ goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
+ if true { // GCCGO_ERROR "block starts here"
L:
}
}
func _() {
- goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
- if true { // GCCGO_ERROR "block starts here"
+ goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
+ if true { // GCCGO_ERROR "block starts here"
L:
} else {
}
func _() {
goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
if true {
- } else { // GCCGO_ERROR "block starts here"
+ } else { // GCCGO_ERROR "block starts here"
L:
}
}
func _() {
- if false { // GCCGO_ERROR "block starts here"
+ if false { // GCCGO_ERROR "block starts here"
L:
} else {
goto L // ERROR "goto L jumps into block starting at LINE-3|goto jumps into block"
func _() {
if true {
goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
- } else { // GCCGO_ERROR "block starts here"
+ } else { // GCCGO_ERROR "block starts here"
L:
}
}
func _() {
if true {
goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
- } else if false { // GCCGO_ERROR "block starts here"
+ } else if false { // GCCGO_ERROR "block starts here"
L:
}
}
func _() {
if true {
goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
- } else if false { // GCCGO_ERROR "block starts here"
+ } else if false { // GCCGO_ERROR "block starts here"
L:
} else {
}
if true {
goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
} else if false {
- } else { // GCCGO_ERROR "block starts here"
+ } else { // GCCGO_ERROR "block starts here"
L:
}
}
}
func _() {
- for { // GCCGO_ERROR "block starts here"
+ for { // GCCGO_ERROR "block starts here"
L:
}
goto L // ERROR "goto L jumps into block starting at LINE-3|goto jumps into block"
}
func _() {
- for { // GCCGO_ERROR "block starts here"
+ for { // GCCGO_ERROR "block starts here"
goto L
L1:
}
}
func _() {
- for i < n { // GCCGO_ERROR "block starts here"
+ for i < n { // GCCGO_ERROR "block starts here"
L:
}
goto L // ERROR "goto L jumps into block starting at LINE-3|goto jumps into block"
}
func _() {
- for i = 0; i < n; i++ { // GCCGO_ERROR "block starts here"
+ for i = 0; i < n; i++ { // GCCGO_ERROR "block starts here"
L:
}
goto L // ERROR "goto L jumps into block starting at LINE-3|goto jumps into block"
}
func _() {
- for i = range x { // GCCGO_ERROR "block starts here"
+ for i = range x { // GCCGO_ERROR "block starts here"
L:
}
goto L // ERROR "goto L jumps into block starting at LINE-3|goto jumps into block"
}
func _() {
- for i = range c { // GCCGO_ERROR "block starts here"
+ for i = range c { // GCCGO_ERROR "block starts here"
L:
}
goto L // ERROR "goto L jumps into block starting at LINE-3|goto jumps into block"
}
func _() {
- for i = range m { // GCCGO_ERROR "block starts here"
+ for i = range m { // GCCGO_ERROR "block starts here"
L:
}
goto L // ERROR "goto L jumps into block starting at LINE-3|goto jumps into block"
}
func _() {
- for i = range s { // GCCGO_ERROR "block starts here"
+ for i = range s { // GCCGO_ERROR "block starts here"
L:
}
goto L // ERROR "goto L jumps into block starting at LINE-3|goto jumps into block"
goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
switch i {
case 0:
- L: // GCCGO_ERROR "block starts here"
+ L: // GCCGO_ERROR "block starts here"
}
}
goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
switch i {
case 0:
- L: // GCCGO_ERROR "block starts here"
+ L: // GCCGO_ERROR "block starts here"
;
default:
}
switch i {
case 0:
default:
- L: // GCCGO_ERROR "block starts here"
+ L: // GCCGO_ERROR "block starts here"
}
}
default:
goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
case 0:
- L: // GCCGO_ERROR "block starts here"
+ L: // GCCGO_ERROR "block starts here"
}
}
func _() {
switch i {
case 0:
- L: // GCCGO_ERROR "block starts here"
+ L: // GCCGO_ERROR "block starts here"
;
default:
goto L // ERROR "goto L jumps into block starting at LINE-4|goto jumps into block"
goto L // ERROR "goto L jumps into block starting at LINE+2|goto jumps into block"
select {
case c <- 1:
- L: // GCCGO_ERROR "block starts here"
+ L: // GCCGO_ERROR "block starts here"
}
}
goto L // ERROR "goto L jumps into block starting at LINE+2|goto jumps into block"
select {
case c <- 1:
- L: // GCCGO_ERROR "block starts here"
+ L: // GCCGO_ERROR "block starts here"
;
default:
}
select {
case <-c:
default:
- L: // GCCGO_ERROR "block starts here"
+ L: // GCCGO_ERROR "block starts here"
}
}
default:
goto L // ERROR "goto L jumps into block starting at LINE+1|goto jumps into block"
case <-c:
- L: // GCCGO_ERROR "block starts here"
+ L: // GCCGO_ERROR "block starts here"
}
}
func _() {
select {
case <-c:
- L: // GCCGO_ERROR "block starts here"
+ L: // GCCGO_ERROR "block starts here"
;
default:
goto L // ERROR "goto L jumps into block starting at LINE-4|goto jumps into block"
for {
}
L2: // ERROR "label .*L2.* defined and not used"
- select {
- }
+ select {}
L3: // ERROR "label .*L3.* defined and not used"
switch {
}
default:
break L10
}
+
+ goto L10
+
+ goto go2 // ERROR "label go2 not defined"
}
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-
// Verify that erroneous labels are caught by the compiler.
// This set is caught by pass 2. That's why this file is label1.go.
// Does not compile.
break L2
}
if x == 1 {
- continue L2 // ERROR "invalid continue label .*L2"
+ continue L2 // ERROR "invalid continue label .*L2|continue is not in a loop"
}
goto L2
}
+ for {
+ if x == 1 {
+ continue L2 // ERROR "invalid continue label .*L2"
+ }
+ }
+
L3:
switch {
case x > 10:
break L3
}
if x == 12 {
- continue L3 // ERROR "invalid continue label .*L3"
+ continue L3 // ERROR "invalid continue label .*L3|continue is not in a loop"
}
goto L3
}
break L4 // ERROR "invalid break label .*L4"
}
if x == 14 {
- continue L4 // ERROR "invalid continue label .*L4"
+ continue L4 // ERROR "invalid continue label .*L4|continue is not in a loop"
}
if x == 15 {
goto L4
break L5 // ERROR "invalid break label .*L5"
}
if x == 17 {
- continue L5 // ERROR "invalid continue label .*L5"
+ continue L5 // ERROR "invalid continue label .*L5|continue is not in a loop"
}
if x == 18 {
goto L5
goto L1
}
}
+
+ continue // ERROR "continue is not in a loop"
+ for {
+ continue on // ERROR "continue label not defined: on"
+ }
+
+ break // ERROR "break is not in a loop"
+ for {
+ break dance // ERROR "break label not defined: dance"
+ }
+
+ for {
+ switch x {
+ case 1:
+ continue
+ }
+ }
}
)
func main() {
- test(" ") // old deprecated syntax
+ // test(" ") // old deprecated & removed syntax
test("=") // new syntax
}
+// +build !amd64
// errorcheck -0 -l -live -wb=0
// Copyright 2014 The Go Authors. All rights reserved.
+// +build !amd64
// errorcheck -0 -live -wb=0
// Copyright 2014 The Go Authors. All rights reserved.
type BigStruct struct {
X int
Y float64
- A [1<<20]int
+ A [1 << 20]int
Z string
}
}
var (
- intp *int
- arrayp *[10]int
- array0p *[0]int
- bigarrayp *[1<<26]int
- structp *Struct
+ intp *int
+ arrayp *[10]int
+ array0p *[0]int
+ bigarrayp *[1 << 26]int
+ structp *Struct
bigstructp *BigStruct
- emptyp *Empty
- empty1p *Empty1
+ emptyp *Empty
+ empty1p *Empty1
)
func f1() {
- _ = *intp // ERROR "nil check"
- _ = *arrayp // ERROR "nil check"
+ _ = *intp // ERROR "nil check"
+ _ = *arrayp // ERROR "nil check"
_ = *array0p // ERROR "nil check"
_ = *array0p // ERROR "nil check"
- _ = *intp // ERROR "nil check"
- _ = *arrayp // ERROR "nil check"
+ _ = *intp // ERROR "nil check"
+ _ = *arrayp // ERROR "nil check"
_ = *structp // ERROR "nil check"
- _ = *emptyp // ERROR "nil check"
- _ = *arrayp // ERROR "nil check"
+ _ = *emptyp // ERROR "nil check"
+ _ = *arrayp // ERROR "nil check"
}
func f2() {
var (
- intp *int
- arrayp *[10]int
- array0p *[0]int
- bigarrayp *[1<<20]int
- structp *Struct
+ intp *int
+ arrayp *[10]int
+ array0p *[0]int
+ bigarrayp *[1 << 20]int
+ structp *Struct
bigstructp *BigStruct
- emptyp *Empty
- empty1p *Empty1
+ emptyp *Empty
+ empty1p *Empty1
)
- _ = *intp // ERROR "nil check"
- _ = *arrayp // ERROR "nil check"
- _ = *array0p // ERROR "nil check"
- _ = *array0p // ERROR "nil check"
- _ = *intp // ERROR "nil check"
- _ = *arrayp // ERROR "nil check"
- _ = *structp // ERROR "nil check"
- _ = *emptyp // ERROR "nil check"
- _ = *arrayp // ERROR "nil check"
- _ = *bigarrayp // ERROR "nil check"
+ _ = *intp // ERROR "nil check"
+ _ = *arrayp // ERROR "nil check"
+ _ = *array0p // ERROR "nil check"
+ _ = *array0p // ERROR "nil check"
+ _ = *intp // ERROR "nil check"
+ _ = *arrayp // ERROR "nil check"
+ _ = *structp // ERROR "nil check"
+ _ = *emptyp // ERROR "nil check"
+ _ = *arrayp // ERROR "nil check"
+ _ = *bigarrayp // ERROR "nil check"
_ = *bigstructp // ERROR "nil check"
- _ = *empty1p // ERROR "nil check"
+ _ = *empty1p // ERROR "nil check"
}
func fx10k() *[10000]int
-var b bool
+var b bool
func f3(x *[10000]int) {
// Using a huge type and huge offsets so the compiler
// does not expect the memory hardware to fault.
_ = x[9999] // ERROR "nil check"
-
+
for {
if x[9999] != 0 { // ERROR "nil check"
break
}
}
-
- x = fx10k()
+
+ x = fx10k()
_ = x[9999] // ERROR "nil check"
if b {
_ = x[9999] // ERROR "nil check"
} else {
_ = x[9999] // ERROR "nil check"
- }
+ }
_ = x[9999] // ERROR "nil check"
- x = fx10k()
+ x = fx10k()
if b {
_ = x[9999] // ERROR "nil check"
} else {
_ = x[9999] // ERROR "nil check"
- }
+ }
_ = x[9999] // ERROR "nil check"
-
+
fx10k()
// This one is a bit redundant, if we figured out that
// x wasn't going to change across the function call.
_ = &x[9] // ERROR "nil check"
}
-func fx10() *[10]int
+func fx10() *[10]int
func f4(x *[10]int) {
// Most of these have no checks because a real memory reference follows,
// in the first unmapped page of memory.
_ = x[9] // ERROR "nil check"
-
+
for {
if x[9] != 0 { // ERROR "nil check"
break
}
}
-
- x = fx10()
+
+ x = fx10()
_ = x[9] // ERROR "nil check"
if b {
_ = x[9] // ERROR "nil check"
} else {
_ = x[9] // ERROR "nil check"
- }
+ }
_ = x[9] // ERROR "nil check"
- x = fx10()
+ x = fx10()
if b {
_ = x[9] // ERROR "nil check"
} else {
_ = &x[9] // ERROR "nil check"
- }
+ }
_ = x[9] // ERROR "nil check"
-
+
fx10()
_ = x[9] // ERROR "nil check"
-
+
x = fx10()
y := fx10()
_ = &x[9] // ERROR "nil check"
// Fails on ppc64x because of incomplete optimization.
// See issue 9058.
// Same reason for mips64x.
-// +build !ppc64,!ppc64le,!mips64,!mips64le
+// +build !ppc64,!ppc64le,!mips64,!mips64le,!amd64
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
--- /dev/null
+// errorcheck -0 -d=nil
+// Fails on ppc64x because of incomplete optimization.
+// See issue 9058.
+// +build !ppc64,!ppc64le,amd64
+
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Test that nil checks are removed.
+// Optimization is enabled.
+
+package p
+
+type Struct struct {
+ X int
+ Y float64
+}
+
+type BigStruct struct {
+ X int
+ Y float64
+ A [1 << 20]int
+ Z string
+}
+
+type Empty struct {
+}
+
+type Empty1 struct {
+ Empty
+}
+
+var (
+ intp *int
+ arrayp *[10]int
+ array0p *[0]int
+ bigarrayp *[1 << 26]int
+ structp *Struct
+ bigstructp *BigStruct
+ emptyp *Empty
+ empty1p *Empty1
+)
+
+func f1() {
+ _ = *intp // ERROR "generated nil check"
+
+ // This one should be removed but the block copy needs
+ // to be turned into its own pseudo-op in order to see
+ // the indirect.
+ _ = *arrayp // ERROR "generated nil check"
+
+ // 0-byte indirect doesn't suffice.
+	// We don't registerize globals, so there are no removed.* nil checks.
+ _ = *array0p // ERROR "generated nil check"
+ _ = *array0p // ERROR "removed nil check"
+
+ _ = *intp // ERROR "removed nil check"
+ _ = *arrayp // ERROR "removed nil check"
+ _ = *structp // ERROR "generated nil check"
+ _ = *emptyp // ERROR "generated nil check"
+ _ = *arrayp // ERROR "removed nil check"
+}
+
+func f2() {
+ var (
+ intp *int
+ arrayp *[10]int
+ array0p *[0]int
+ bigarrayp *[1 << 20]int
+ structp *Struct
+ bigstructp *BigStruct
+ emptyp *Empty
+ empty1p *Empty1
+ )
+
+ _ = *intp // ERROR "generated nil check"
+ _ = *arrayp // ERROR "generated nil check"
+ _ = *array0p // ERROR "generated nil check"
+ _ = *array0p // ERROR "removed.* nil check"
+ _ = *intp // ERROR "removed.* nil check"
+ _ = *arrayp // ERROR "removed.* nil check"
+ _ = *structp // ERROR "generated nil check"
+ _ = *emptyp // ERROR "generated nil check"
+ _ = *arrayp // ERROR "removed.* nil check"
+ _ = *bigarrayp // ERROR "generated nil check" ARM removed nil check before indirect!!
+ _ = *bigstructp // ERROR "generated nil check"
+ _ = *empty1p // ERROR "generated nil check"
+}
+
+func fx10k() *[10000]int
+
+var b bool
+
+func f3(x *[10000]int) {
+ // Using a huge type and huge offsets so the compiler
+ // does not expect the memory hardware to fault.
+ _ = x[9999] // ERROR "generated nil check"
+
+ for {
+ if x[9999] != 0 { // ERROR "removed nil check"
+ break
+ }
+ }
+
+ x = fx10k()
+ _ = x[9999] // ERROR "generated nil check"
+ if b {
+ _ = x[9999] // ERROR "removed.* nil check"
+ } else {
+ _ = x[9999] // ERROR "removed.* nil check"
+ }
+ _ = x[9999] // ERROR "removed nil check"
+
+ x = fx10k()
+ if b {
+ _ = x[9999] // ERROR "generated nil check"
+ } else {
+ _ = x[9999] // ERROR "generated nil check"
+ }
+ _ = x[9999] // ERROR "generated nil check"
+
+ fx10k()
+ // This one is a bit redundant, if we figured out that
+ // x wasn't going to change across the function call.
+ // But it's a little complex to do and in practice doesn't
+ // matter enough.
+ _ = x[9999] // ERROR "removed nil check"
+}
+
+func f3a() {
+ x := fx10k()
+ y := fx10k()
+ z := fx10k()
+ _ = &x[9] // ERROR "generated nil check"
+ y = z
+ _ = &x[9] // ERROR "removed.* nil check"
+ x = y
+ _ = &x[9] // ERROR "generated nil check"
+}
+
+func f3b() {
+ x := fx10k()
+ y := fx10k()
+ _ = &x[9] // ERROR "generated nil check"
+ y = x
+ _ = &x[9] // ERROR "removed.* nil check"
+ x = y
+ _ = &x[9] // ERROR "removed.* nil check"
+}
+
+func fx10() *[10]int
+
+func f4(x *[10]int) {
+ // Most of these have no checks because a real memory reference follows,
+ // and the offset is small enough that if x is nil, the address will still be
+ // in the first unmapped page of memory.
+
+ _ = x[9] // ERROR "removed nil check"
+
+ for {
+ if x[9] != 0 { // ERROR "removed nil check"
+ break
+ }
+ }
+
+ x = fx10()
+ _ = x[9] // ERROR "generated nil check" // bug would like to remove before indirect
+ if b {
+ _ = x[9] // ERROR "removed nil check"
+ } else {
+ _ = x[9] // ERROR "removed nil check"
+ }
+ _ = x[9] // ERROR "removed nil check"
+
+ x = fx10()
+ if b {
+ _ = x[9] // ERROR "generated nil check" // bug would like to remove before indirect
+ } else {
+ _ = &x[9] // ERROR "generated nil check"
+ }
+ _ = x[9] // ERROR "generated nil check" // bug would like to remove before indirect
+
+ fx10()
+ _ = x[9] // ERROR "removed nil check"
+
+ x = fx10()
+ y := fx10()
+ _ = &x[9] // ERROR "generated nil check"
+ y = x
+ _ = &x[9] // ERROR "removed[a-z ]* nil check"
+ x = y
+ _ = &x[9] // ERROR "removed[a-z ]* nil check"
+}
+
+func f5(p *float32, q *float64, r *float32, s *float64) float64 {
+ x := float64(*p) // ERROR "removed nil check"
+ y := *q // ERROR "removed nil check"
+ *r = 7 // ERROR "removed nil check"
+ *s = 9 // ERROR "removed nil check"
+ return x + y
+}
+
+type T [29]byte
+
+func f6(p, q *T) {
+ x := *p // ERROR "removed nil check"
+ *q = x // ERROR "removed nil check"
+}
// Instead of rewriting the test cases above, adjust
// the first stack frame to use up the extra bytes.
if i == 0 {
- size += 592 - 128
+ size += (720 - 128) - 128
// Noopt builds have a larger stackguard.
- // See ../cmd/dist/buildruntime.go:stackGuardMultiplier
+ // See ../src/cmd/dist/buildruntime.go:stackGuardMultiplier
+ // This increase is included in obj.StackGuard
for _, s := range strings.Split(os.Getenv("GO_GCFLAGS"), " ") {
if s == "-N" {
size += 720
--- /dev/null
+// +build amd64
+// errorcheck -0 -d=ssa/likelyadjust/debug=1
+
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Test that branches have some prediction properties.
+package foo
+
+func f(x, y, z int) int {
+ a := 0
+ for i := 0; i < x; i++ { // ERROR "Branch prediction rule stay in loop"
+ for j := 0; j < y; j++ { // ERROR "Branch prediction rule stay in loop"
+ a += j
+ }
+ for k := 0; k < z; k++ { // ERROR "Branch prediction rule stay in loop"
+ a -= x + y + z
+ }
+ }
+ return a
+}
+
+func g(x, y, z int) int {
+ a := 0
+ if y == 0 { // ERROR "Branch prediction rule default < call"
+ y = g(y, z, x)
+ } else {
+ y++
+ }
+ if y == x { // ERROR "Branch prediction rule default < call"
+ y = g(y, z, x)
+ } else {
+ }
+ if y == 2 { // ERROR "Branch prediction rule default < call"
+ z++
+ } else {
+ y = g(z, x, y)
+ }
+ if y+z == 3 { // ERROR "Branch prediction rule call < exit"
+ println("ha ha")
+ } else {
+ panic("help help help")
+ }
+ if x != 0 { // ERROR "Branch prediction rule default < ret"
+ for i := 0; i < x; i++ { // ERROR "Branch prediction rule stay in loop"
+ if x == 4 { // ERROR "Branch prediction rule stay in loop"
+ return a
+ }
+ for j := 0; j < y; j++ { // ERROR "Branch prediction rule stay in loop"
+ for k := 0; k < z; k++ { // ERROR "Branch prediction rule stay in loop"
+ a -= j * i
+ }
+ a += j
+ }
+ }
+ }
+ return a
+}
+
+func h(x, y, z int) int {
+ a := 0
+ for i := 0; i < x; i++ { // ERROR "Branch prediction rule stay in loop"
+ for j := 0; j < y; j++ { // ERROR "Branch prediction rule stay in loop"
+ a += j
+ if i == j { // ERROR "Branch prediction rule stay in loop"
+ break
+ }
+ a *= j
+ }
+ for k := 0; k < z; k++ { // ERROR "Branch prediction rule stay in loop"
+ a -= k
+ if i == k {
+ continue
+ }
+ a *= k
+ }
+ }
+ if a > 0 { // ERROR "Branch prediction rule default < call"
+ a = g(x, y, z)
+ } else {
+ a = -a
+ }
+ return a
+}
--- /dev/null
+// +build amd64
+// errorcheck -0 -d=ssa/phiopt/debug=3
+
+package main
+
+func f0(a bool) bool {
+ x := false
+ if a {
+ x = true
+ } else {
+ x = false
+ }
+ return x // ERROR "converted OpPhi to OpCopy$"
+}
+
+func f1(a bool) bool {
+ x := false
+ if a {
+ x = false
+ } else {
+ x = true
+ }
+ return x // ERROR "converted OpPhi to OpNot$"
+}
+
+func f2(a, b int) bool {
+ x := true
+ if a == b {
+ x = false
+ }
+ return x // ERROR "converted OpPhi to OpNot$"
+}
+
+func f3(a, b int) bool {
+ x := false
+ if a == b {
+ x = true
+ }
+ return x // ERROR "converted OpPhi to OpCopy$"
+}
+
+func main() {
+}
--- /dev/null
+// +build amd64
+// errorcheck -0 -d=ssa/prove/debug=3
+
+package main
+
+func f0(a []int) int {
+ a[0] = 1
+ a[0] = 1 // ERROR "Proved IsInBounds$"
+ a[6] = 1
+ a[6] = 1 // ERROR "Proved IsInBounds$"
+ a[5] = 1
+ a[5] = 1 // ERROR "Proved IsInBounds$"
+ return 13
+}
+
+func f1(a []int) int {
+ if len(a) <= 5 {
+ return 18
+ }
+ a[0] = 1
+ a[0] = 1 // ERROR "Proved IsInBounds$"
+ a[6] = 1
+ a[6] = 1 // ERROR "Proved IsInBounds$"
+ a[5] = 1 // ERROR "Proved constant IsInBounds$"
+ a[5] = 1 // ERROR "Proved IsInBounds$"
+ return 26
+}
+
+func f2(a []int) int {
+ for i := range a {
+ a[i] = i
+ a[i] = i // ERROR "Proved IsInBounds$"
+ }
+ return 34
+}
+
+func f3(a []uint) int {
+ for i := uint(0); i < uint(len(a)); i++ {
+ a[i] = i // ERROR "Proved IsInBounds$"
+ }
+ return 41
+}
+
+func f4a(a, b, c int) int {
+ if a < b {
+ if a == b { // ERROR "Disproved Eq64$"
+ return 47
+ }
+ if a > b { // ERROR "Disproved Greater64$"
+ return 50
+ }
+ if a < b { // ERROR "Proved Less64$"
+ return 53
+ }
+ if a == b { // ERROR "Disproved Eq64$"
+ return 56
+ }
+ if a > b {
+ return 59
+ }
+ return 61
+ }
+ return 63
+}
+
+func f4b(a, b, c int) int {
+ if a <= b {
+ if a >= b {
+ if a == b { // ERROR "Proved Eq64$"
+ return 70
+ }
+ return 75
+ }
+ return 77
+ }
+ return 79
+}
+
+func f4c(a, b, c int) int {
+ if a <= b {
+ if a >= b {
+ if a != b { // ERROR "Disproved Neq64$"
+ return 73
+ }
+ return 75
+ }
+ return 77
+ }
+ return 79
+}
+
+func f4d(a, b, c int) int {
+ if a < b {
+ if a < c {
+ if a < b { // ERROR "Proved Less64$"
+ if a < c { // ERROR "Proved Less64$"
+ return 87
+ }
+ return 89
+ }
+ return 91
+ }
+ return 93
+ }
+ return 95
+}
+
+func f4e(a, b, c int) int {
+ if a < b {
+ if b > a { // ERROR "Proved Greater64$"
+ return 101
+ }
+ return 103
+ }
+ return 105
+}
+
+func f4f(a, b, c int) int {
+ if a <= b {
+ if b > a {
+ if b == a { // ERROR "Disproved Eq64$"
+ return 112
+ }
+ return 114
+ }
+ if b >= a { // ERROR "Proved Geq64$"
+ if b == a { // ERROR "Proved Eq64$"
+ return 118
+ }
+ return 120
+ }
+ return 122
+ }
+ return 124
+}
+
+func f5(a, b uint) int {
+ if a == b {
+ if a <= b { // ERROR "Proved Leq64U$"
+ return 130
+ }
+ return 132
+ }
+ return 134
+}
+
+// These comparisons are compile-time constants.
+func f6a(a uint8) int {
+ if a < a { // ERROR "Disproved Less8U$"
+ return 140
+ }
+ return 151
+}
+
+func f6b(a uint8) int {
+ if a < a { // ERROR "Disproved Less8U$"
+ return 140
+ }
+ return 151
+}
+
+func f6x(a uint8) int {
+ if a > a { // ERROR "Disproved Greater8U$"
+ return 143
+ }
+ return 151
+}
+
+func f6d(a uint8) int {
+ if a <= a { // ERROR "Proved Leq8U$"
+ return 146
+ }
+ return 151
+}
+
+func f6e(a uint8) int {
+ if a >= a { // ERROR "Proved Geq8U$"
+ return 149
+ }
+ return 151
+}
+
+func f7(a []int, b int) int {
+ if b < len(a) {
+ a[b] = 3
+ if b < len(a) { // ERROR "Proved Less64$"
+ a[b] = 5 // ERROR "Proved IsInBounds$"
+ }
+ }
+ return 161
+}
+
+func f8(a, b uint) int {
+ if a == b {
+ return 166
+ }
+ if a > b {
+ return 169
+ }
+ if a < b { // ERROR "Proved Less64U$"
+ return 172
+ }
+ return 174
+}
+
+func main() {
+}
numParallel = flag.Int("n", runtime.NumCPU(), "number of parallel tests to run")
summary = flag.Bool("summary", false, "show summary of results")
showSkips = flag.Bool("show_skips", false, "show skipped tests")
+ runSkips = flag.Bool("run_skips", false, "run skipped tests (ignore skip and build tags)")
linkshared = flag.Bool("linkshared", false, "")
updateErrors = flag.Bool("update_errors", false, "update error messages in test file based on compiler output")
runoutputLimit = flag.Int("l", defaultRunOutputLimit(), "number of parallel runoutput tests to run")
// shouldTest looks for build tags in a source file and returns
// whether the file should be used according to the tags.
func shouldTest(src string, goos, goarch string) (ok bool, whyNot string) {
+ if *runSkips {
+ return true, ""
+ }
for _, line := range strings.Split(src, "\n") {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "//") {
args = args[1:]
}
case "skip":
+ if *runSkips {
+ break
+ }
t.action = "skip"
return
default:
}
useTmp := true
+ ssaMain := false
runcmd := func(args ...string) ([]byte, error) {
cmd := exec.Command(args[0], args[1:]...)
var buf bytes.Buffer
if useTmp {
cmd.Dir = t.tempDir
cmd.Env = envForDir(cmd.Dir)
+ } else {
+ cmd.Env = os.Environ()
+ }
+ if ssaMain && os.Getenv("GOARCH") == "amd64" {
+ cmd.Env = append(cmd.Env, "GOSSAPKG=main")
}
err := cmd.Run()
if err != nil {
case "run":
useTmp = false
+ ssaMain = true
cmd := []string{"go", "run"}
if *linkshared {
cmd = append(cmd, "-linkshared")
t.err = fmt.Errorf("write tempfile:%s", err)
return
}
+ ssaMain = true
cmd = []string{"go", "run"}
if *linkshared {
cmd = append(cmd, "-linkshared")
+// +build !amd64
// errorcheck -0 -d=append,slice
// Copyright 2015 The Go Authors. All rights reserved.
func f16(x []T8, y T8) []T8 {
return append(x, y) // ERROR "write barrier"
}
+
+func t1(i interface{}) **int {
+ // From issue 14306, make sure we have write barriers in a type switch
+ // where the assigned variable escapes.
+ switch x := i.(type) { // ERROR "write barrier"
+ case *int:
+ return &x
+ }
+ switch y := i.(type) { // no write barrier here
+ case **int:
+ return y
+ }
+ return nil
+}