Rename StoreConst to ValAndOff so we can use it for other ops.
Make ValAndOff print nicely.
Add some notes & checks related to my aborted attempt to
implement combined CMP+load ops.
Change-Id: I2f901d12d42bc5a82879af0334806aa184a97e27
Reviewed-on: https://go-review.googlesource.com/18947
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: David Chase <drchase@google.com>
Add new constant-flags opcodes. These can be generated from
comparisons that we know the result of, like x&31 < 32.
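For example (illustrative source, not from this CL), the bounds check below
compares x&31 against the array length 32; since x&31 is always in [0,31],
the comparison's result is known at compile time:

func lookup(t *[32]byte, x int) byte {
	return t[x&31] // provably in bounds; the CMP becomes a constant-flags op and folds away
}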
Constant-fold the constant-flags opcodes into all flag users.
Reorder some CMPxconst args so they read in the comparison direction.
Reorganize deadcode removal a bit: it needs to remove the OpCopy ops
it generates when strength-reducing Phi ops, so it must splice out all
the dead blocks and do a copy elimination before it computes live
values.
Change-Id: Ie922602033592ad8212efe4345394973d3b94d9f
Reviewed-on: https://go-review.googlesource.com/18267
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
With the separate flagalloc pass, it should be fine to
allow CSE of control values. The worst that can happen
is that the comparison gets un-CSEd by flagalloc.
Fix bug in flagalloc where flag restores were getting
clobbered by rematerialization during register allocation.
Change-Id: If476cf98b69973e8f1a8eb29441136dd12fab8ad
Reviewed-on: https://go-review.googlesource.com/17760
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
Make sure that when a pointer value is live across a function
call, we save it as a pointer. (And similarly a uintptr
live across a function call should not be saved as a pointer.)
Add a nasty test case.
This is probably what is preventing the merge from master
to dev.ssa. Signs point to something like this bug happening
in mallocgc.
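A hedged sketch of the pattern the test exercises (names and values are
illustrative, not the actual test): p must be recorded as a pointer in the
stack map while it is live across the call, and u, being a uintptr, must not be.

package main

import (
	"runtime"
	"unsafe"
)

func f(p *[4]int) int {
	u := uintptr(unsafe.Pointer(p)) // a plain integer from here on, not a GC reference
	runtime.GC()                    // p is live across this call; it must be spilled as a pointer
	_ = u
	return p[0]
}

func main() { println(f(&[4]int{1, 2, 3, 4})) }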
Change-Id: Ib23fa1251b8d1c50d82c6a448cb4a4fc28219029
Reviewed-on: https://go-review.googlesource.com/16830
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Some optimizations for patterns I've seen while looking at generated code.
(x+y)-x == y
x-0 == x
The ptr portion of the constant string "" can be nil.
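In source terms, for example (illustrative):

func f(x, y int) int { return (x + y) - x } // now folds to just y
func g(x int) int    { return x - 0 }       // now folds to just x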
Also update TODO with recent changes.
Change-Id: I02c41ca2f9e9e178bf889058d3e083b446672dbe
Reviewed-on: https://go-review.googlesource.com/16771
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Be more consistent about this. There's no reason to do the
pointer arithmetic on a different type, as sizeof(int) >=
sizeof(ptr) on all of our platforms. It simplifies our
rewrite rules also, except for a few that need duplication.
Add some more constant folding to get constant indexing and
slicing to fold down to nothing.
Change-Id: I3e56cdb14b3dc1a6a0514f0333e883f92c19e3c7
Reviewed-on: https://go-review.googlesource.com/16586
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
The single value rewrite function is too big. Some compilers
fail on it (out of memory, branch offset too large). Break it
up into a rewrite function per op.
Change-Id: Iede697c8a1a3a22b485cd0dc85d3e233160c89c2
Reviewed-on: https://go-review.googlesource.com/16347
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Todd Neal <todd@tneal.org>
Introduce opcodes that store a constant value.
AuxInt now needs to hold both the value to be stored and the
constant offset at which to store it. Introduce a StoreConst
type to help encode/decode these parts to/from an AuxInt.
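A minimal sketch of the encoding idea (the exact field layout here is an
assumption, not necessarily what the compiler uses): pack the value into the
high 32 bits of the AuxInt and the offset into the low 32 bits.

type StoreConst int64

func makeStoreConst(val, off int32) StoreConst {
	return StoreConst(int64(val)<<32 | int64(uint32(off)))
}
func (sc StoreConst) Val() int64 { return int64(sc) >> 32 }  // recover the stored value
func (sc StoreConst) Off() int64 { return int64(int32(sc)) } // recover the (sign-extended) offset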
Change-Id: I1631883abe035cff4b16368683e1eb3d2ccb674d
Reviewed-on: https://go-review.googlesource.com/16170
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Replace REP MOVSB with all the copying techniques used by the
old compiler. Copy in chunks, DUFFCOPY, etc.
Introduce MOVO opcodes and an Int128 type to move around
16 bytes at a time.
Change-Id: I1e73e68ca1d8b3dd58bb4af2f4c9e5d9bf13a502
Reviewed-on: https://go-review.googlesource.com/16174
Reviewed-by: Todd Neal <todd@tneal.org>
Run-TryBot: Keith Randall <khr@golang.org>
Use faulting loads instead of test/jeq to do nil checks.
Fold nil checks into a following load/store if possible.
Makes binaries about 2% smaller.
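Sketch of the idea: rather than emitting an explicit TEST+JEQ before a
dereference, the dereference itself acts as the check. A load or store
through a nil pointer faults, and the runtime turns the fault into the
usual nil-pointer panic.

type T struct{ x, y int }

func getY(p *T) int {
	return p.y // this load both fetches p.y and serves as the nil check for p
}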
Change-Id: I54af0f0a93c853f37e34e0ce7e3f01dd2ac87f64
Reviewed-on: https://go-review.googlesource.com/16287
Reviewed-by: David Chase <drchase@google.com>
Modified GOSSA{HASH,PKG} environment variable filters to
make it easier to make/run with all SSA for testing.
Disable attempts at SSA for architectures that are not
amd64 (to avoid spurious errors/unimplemented failures).
Removed easy out for unimplemented features.
Add a convert op for proper liveness in the presence of uintptr
to/from unsafe.Pointer conversions.
Tweaked stack sizes to get a pass on Windows;
1024 instead of 768 was observed to pass at least once.
Change-Id: Ida3800afcda67d529e3b1cf48ca4a3f0fa48b2c5
Reviewed-on: https://go-review.googlesource.com/16201
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
getg reads from memory, so it should really have a
memory arg. In functions that call setg, it is critical
that getg gets ordered correctly with respect to setg.
Change-Id: Ief4875421f741fc49c07b0e1f065ce2535232341
Reviewed-on: https://go-review.googlesource.com/16100
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
Some rewrite rules need to make sure the rewrite target ends up
in a specific block. For example:
(MOVBQSX (MOVBload [off] {sym} ptr mem)) ->
@v.Args[0].Block (MOVBQSXload <v.Type> [off] {sym} ptr mem)
The MOVBQSXload op needs to be in the same block as the MOVBload
(to ensure exactly one memory is live at basic block boundaries).
Change-Id: Ibe49a4183ca91f6c859cba8135927f01d176e064
Reviewed-on: https://go-review.googlesource.com/15804
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Cleaned up first-block-in-function code.
Added cases for |PHEAP for PPARAM and PAUTO.
Made PPARAMOUT act more like PAUTO for purposes
of address generation and vardef placement.
Added cases for OCLOSUREVAR and Ops for getting closure
pointer. Closure ops are scheduled at top of entry block
to capture DX.
Wrote a test that seems to show proper behavior for addressed
parameters, locals, and returns.
Change-Id: Iee93ebf9e3d9f74cfb4d1c1da8038eb278d8a857
Reviewed-on: https://go-review.googlesource.com/14650
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
There's no need for special ops for panicindex and panicslice.
Just use regular runtime calls.
Change-Id: I71b9b73f4f1ebce1220fdc1e7b7f65cfcf4b7bae
Reviewed-on: https://go-review.googlesource.com/14726
Reviewed-by: David Chase <drchase@google.com>
Handle OCALLINTER, as well as ODEFER/OPROC with OCALLMETH/OCALLINTER.
Move all the call logic into its own routine; a lot of the
code is shared.
Change-Id: Ieac59596165e434cc6d1d7b5e46b78957e9c5ed3
Reviewed-on: https://go-review.googlesource.com/14464
Reviewed-by: Todd Neal <todd@tneal.org>
Reviewed-by: David Chase <drchase@google.com>
Load-and-sign-extend opcodes were being generated in the
wrong block, leading to having more than one memory variable
live at once. Fix the rules + add a test.
Change-Id: Iadf80e55ea901549c15c628ae295c2d0f1f64525
Reviewed-on: https://go-review.googlesource.com/14591
Reviewed-by: Todd Neal <todd@tneal.org>
Run-TryBot: Todd Neal <todd@tneal.org>
TODO: for now, just function calls. Do method and interface calls.
Change-Id: Ib262dfa31cb753996cde899beaad4dc2e66705ac
Reviewed-on: https://go-review.googlesource.com/14035
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Store floats in AuxInt to reduce allocations.
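A minimal sketch of the idea (helper names are illustrative): the float
constant's bits are reinterpreted as an int64 and kept in the existing
AuxInt field, so no separate aux allocation is needed.

package main

import (
	"fmt"
	"math"
)

func auxIntOf(f float64) int64      { return int64(math.Float64bits(f)) }
func floatOfAuxInt(a int64) float64 { return math.Float64frombits(uint64(a)) }

func main() {
	a := auxIntOf(3.75)
	fmt.Println(a, floatOfAuxInt(a)) // round-trips exactly
}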
Change-Id: I101e6322530b4a0b2ea3591593ad022c992e8df8
Reviewed-on: https://go-review.googlesource.com/14320
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Store bools in AuxInt to reduce allocations.
Change-Id: Ibd26db67fca5e1e2803f53d7ef094897968b704b
Reviewed-on: https://go-review.googlesource.com/14276
Reviewed-by: Keith Randall <khr@golang.org>
Modified to use the truncating conversion.
Fixes reflect.
Change-Id: I47bf3200abc2d2c662939a2a2351e2ff84168f4a
Reviewed-on: https://go-review.googlesource.com/14167
Reviewed-by: David Chase <drchase@google.com>
Still to do:
details, and more testing of corner cases (e.g. negative zero).
Includes small cleanups for previous CL.
Note: complex division is currently done in the runtime,
so the division code here is apparently not yet necessary
and also not tested. It seems likely better to open-code
division and expose the widening/narrowing to optimization.
Complex64 multiplication and division are done in wide
format to avoid cancellation errors; for division, this
also happens to be compatible with pre-SSA practice
(which uses a single complex128 division function).
It would be nice to widen the intermediates for complex128
multiplication as well, but that is trickier to implement
without a handy wider-precision format.
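A hedged sketch of the wide-format idea for complex64 multiplication: do
the arithmetic with float64 intermediates and narrow only the final result.

func mulComplex64Wide(a, b complex64) complex64 {
	ar, ai := float64(real(a)), float64(imag(a))
	br, bi := float64(real(b)), float64(imag(b))
	return complex64(complex(ar*br-ai*bi, ar*bi+ai*br))
}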
Change-Id: I595a4300f68868fb7641852a54674c6b2b78855e
Reviewed-on: https://go-review.googlesource.com/14028
Reviewed-by: Keith Randall <khr@golang.org>
Specifying types in rewrites for all subexpressions gets verbose
quickly. Allow opcodes to specify a default type which is used when
none is supplied explicitly.
Provide default types for a few easy opcodes. There are probably more
we can do, but this is a good start.
Change-Id: Iedc2a1a423cc3e2d4472640433982f9aa76a9f18
Reviewed-on: https://go-review.googlesource.com/14128
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Add a new function and generic operation to handle
bounds checking for slices. Unlike index bounds checking,
the index can be equal to the upper bound.
Do gc-friendly slicing that generates proper code for
0-length result slices.
This is a takeover of Alexandru's original change
(https://go-review.googlesource.com/#/c/12764/),
submittable now that the decompose phase is in.
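For example, unlike an index expression, a slice expression may use an
index equal to the length, and the result may be a 0-length slice:

func tail(s []int, i int) []int {
	return s[i:] // bounds check is 0 <= i <= len(s), not i < len(s); tail(s, len(s)) is legal
}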
Change-Id: I17d164cf42ed7839f84ca949c6ad3289269c9160
Reviewed-on: https://go-review.googlesource.com/13903
Reviewed-by: David Chase <drchase@google.com>
Basic ops; no particular optimization in the pattern
matching yet (e.g. x != x for NaN detection, x compared
with a constant, etc.).
Change-Id: I0043564081d6dc0eede876c4a9eb3c33cbd1521c
Reviewed-on: https://go-review.googlesource.com/13704
Reviewed-by: Keith Randall <khr@golang.org>
MOVXload and MOVXstore opcodes have both an auxint offset
and an aux offset (a symbol name, like a local or arg or global).
Make sure we keep those values during rewrites.
Change-Id: Ic9fd61bf295b5d1457784c281079a4fb38f7ad3b
Reviewed-on: https://go-review.googlesource.com/13849
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Implement index check panics (and slice check panics, for when
we need those).
Clean up the nil check. Now that the new regalloc is in, we can use
the register we just tested as the address 0 destination.
Remove jumps after panic calls; they are unreachable.
Change-Id: Ifee6e510cdea49cc7c7056887e4f06c67488d491
Reviewed-on: https://go-review.googlesource.com/13687
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Adds support for high multiply, which is used by the frontend when
rewriting constant division. The frontend currently only does this for 8,
16, and 32 bit integer arithmetic.
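A hedged sketch of what the rewrite amounts to in one case (the magic
constant here is worked out by hand, not taken from the compiler): for all
uint32 x, x/3 == (x * 0xAAAAAAAB) >> 33, where 0xAAAAAAAB = ceil(2^33/3);
the ">> 32" part of that shift is exactly a high-multiply op.

func divBy3(x uint32) uint32 {
	const m = 0xAAAAAAAB                // ceil(2^33 / 3)
	hi := uint32((uint64(x) * m) >> 32) // high 32 bits of the widened product
	return hi >> 1                      // remaining shift finishes the divide
}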
Change-Id: I9b6c6018f3be827a50ee6c185454ebc79b3094c8
Reviewed-on: https://go-review.googlesource.com/13696
Reviewed-by: Keith Randall <khr@golang.org>
Added F32 and F64 load, store, and addition.
Added F32 and F64 multiply.
Added F32 and F64 subtraction and division.
Added X15 to "clobber" for FP sub/div
Added FP constants
Added separate FP test in gc/testdata
Change-Id: Ifa60dbad948a40011b478d9605862c4b0cc9134c
Reviewed-on: https://go-review.googlesource.com/13612
Reviewed-by: Keith Randall <khr@golang.org>
Using the type of the store argument is not safe, it may change
during rewriting, giving us the wrong store width.
(Store ptr (Trunc32to16 val) mem)
This should be a 2-byte store. But we have the rule:
(Trunc32to16 x) -> x
So if the Trunc rewrite happens before the Store -> MOVW rewrite,
then the Store thinks that the value it is storing is 4 bytes
in size and uses a MOVL. Bad things ensue.
Fix this by encoding the store width explicitly in the auxint field.
In general, we can't rely on the type of arguments, as they may
change during rewrites. The type of the op itself (as used by
the Load rules) is still ok to use.
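Source-level shape of the problem (illustrative): the store below must be
2 bytes wide even though the value being truncated started out 4 bytes wide.

var g int16

func set(x int32) {
	g = int16(x) // with the width recorded in auxint, this stays a 2-byte store
}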
Change-Id: I9e2359e4f657bb0ea0e40038969628bf0f84e584
Reviewed-on: https://go-review.googlesource.com/13636
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Mul8 is lowered to MULW, but the rules for constant
folding do not handle the fact that the operands
are int8.
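Why the folding must respect the 8-bit width (illustrative): int8
multiplication wraps, so folding as if the operands were 16-bit would
produce the wrong constant.

func mul8(a, b int8) int8 {
	return a * b // mul8(100, 3) is 44 (300 wrapped to int8), not 300
}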
Change-Id: I2c336686d86249393a8079a471c6ff74e6228f3d
Reviewed-on: https://go-review.googlesource.com/13642
Reviewed-by: Keith Randall <khr@golang.org>
Generating logging code every time causes large
diffs for small changes.
Since the intent is to use this for debugging only,
generate logging code only when requested.
Committed generated code will be logging free.
Change-Id: I9ef9e29c88b76c2557bad4c6b424b9db1255ec8b
Reviewed-on: https://go-review.googlesource.com/13623
Reviewed-by: Keith Randall <khr@golang.org>
This makes it easier to investigate and
understand rewrite behavior.
Change-Id: I790e8964922caf98362ce8a6d6972f52d83eefa8
Reviewed-on: https://go-review.googlesource.com/13588
Reviewed-by: Keith Randall <khr@golang.org>
Introduce pseudo-ops PanicMem and LoweredPanicMem.
PanicMem could be rewritten directly into MOVL
during lowering, but then we couldn't log nil checks.
With this change, runnable nil check tests pass:
GOSSAPKG=main go run run.go -- nil*.go
Compiler output nil check tests fail:
GOSSAPKG=p go run run.go -- nil*.go
This is due to several factors:
* SSA has improved elimination of unnecessary nil checks.
* SSA is missing elimination of implicit nil checks.
* SSA is missing extra logging about why nil checks were removed.
I'm not sure how best to resolve these failures,
particularly in a world in which the two backends
will live side by side for some time.
For now, punt on the problem.
Change-Id: Ib2ca6824551671f92e0e1800b036f5ca0905e2a3
Reviewed-on: https://go-review.googlesource.com/13474
Reviewed-by: Keith Randall <khr@golang.org>
Hardcoded the limit on the constants allowed.
Change-Id: Idb9b07b4871db7a752a79e492671e9b41207b956
Reviewed-on: https://go-review.googlesource.com/13257
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Rather than require an explicit Copy on the RHS of rewrite rules,
use rulegen magic to add it.
The advantages to handling this in rulegen are:
* simpler rules
* harder to accidentally miss a Copy
Change-Id: I46853bade83bdf517eee9495bf5a553175277b53
Reviewed-on: https://go-review.googlesource.com/13242
Reviewed-by: Keith Randall <khr@golang.org>
The lowering rules were missing the non-64-bit case.
SBBLcarrymask can be folded to an int32 constant whose
type has a smaller bit size. Without the new AND rules
the following would be generated:
v19 = MOVLconst <uint8> [-1] : SI
v20 = ANDB <uint8> v18 v19 : DI
which is obviously a NOP.
Fixes #12022
Change-Id: I5f4209f78edc0f118e5b9b2908739f09cefebca4
Reviewed-on: https://go-review.googlesource.com/13301
Reviewed-by: Keith Randall <khr@golang.org>