Added F32 and F64 load, store, and addition.
Added F32 and F64 multiply.
Added F32 and F64 subtraction and division.
Added X15 to the "clobber" set for FP sub/div.
Added FP constants.
Added a separate FP test in gc/testdata.
Change-Id: Ifa60dbad948a40011b478d9605862c4b0cc9134c
Reviewed-on: https://go-review.googlesource.com/13612
Reviewed-by: Keith Randall <khr@golang.org>
Using the type of the store argument is not safe: it may change
during rewriting, giving us the wrong store width.
(Store ptr (Trunc32to16 val) mem)
This should be a 2-byte store. But we have the rule:
(Trunc32to16 x) -> x
So if the Trunc rewrite happens before the Store -> MOVW rewrite,
then the Store thinks that the value it is storing is 4 bytes
in size and uses a MOVL. Bad things ensue.
Fix this by encoding the store width explicitly in the auxint field.
In general, we can't rely on the type of arguments, as they may
change during rewrites. The type of the op itself (as used by
the Load rules) is still ok to use.
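As a rough illustration (not code from this CL), the source pattern that
produces the shape above is something like:

func store16(p *uint16, x uint32) {
    *p = uint16(x) // roughly (Store ptr (Trunc32to16 x) mem); must stay a 2-byte store
}

With the width carried on the Store op itself, the Trunc rewrite can no
longer change it.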
Change-Id: I9e2359e4f657bb0ea0e40038969628bf0f84e584
Reviewed-on: https://go-review.googlesource.com/13636
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Mul8 is lowered to MULW, but the rules for constant
folding do not handle the fact that the operands
are int8.
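A hedged illustration (not from this CL) of why the operand width matters
when folding:

func fold() (int8, int16) {
    a, b := int8(-128), int8(-1)
    // The 8-bit product wraps to -128; naive 16-bit folding would keep 128.
    return a * b, int16(a) * int16(b)
}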
Change-Id: I2c336686d86249393a8079a471c6ff74e6228f3d
Reviewed-on: https://go-review.googlesource.com/13642
Reviewed-by: Keith Randall <khr@golang.org>
This is an initial implementation.
There are many rough edges and TODOs,
which will hopefully be polished out
with use.
Fixes #12071.
Change-Id: I1d6fd5a343063b5200623bceef2c2cfcc885794e
Reviewed-on: https://go-review.googlesource.com/13472
Reviewed-by: Keith Randall <khr@golang.org>
We need to move the memory variable update back to before endBlock
so that all successors use the right memory value.
See https://go-review.googlesource.com/13560
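A minimal sketch of the corrected ordering (names such as s.vars, memVar,
and endBlock are assumed from gc/ssa.go, not copied from this CL):

s.vars[&memVar] = call // record the new memory state first...
b := s.endBlock()      // ...so every successor of b sees the updated value
b.AddEdgeTo(next)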
Change-Id: Id72e5526c56e5e070b933d3b28dc503a5a2978dc
Reviewed-on: https://go-review.googlesource.com/13586
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Introduce pseudo-ops PanicMem and LoweredPanicMem.
PanicMem could be rewritten directly into MOVL
during lowering, but then we couldn't log nil checks.
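For reference, a hedged example (not from this CL) of the kind of implicit
nil check that PanicMem stands in for:

func load(p *int64) int64 {
    return *p // implicit nil check: faults, and panics, if p is nil
}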
With this change, runnable nil check tests pass:
GOSSAPKG=main go run run.go -- nil*.go
Compiler output nil check tests fail:
GOSSAPKG=p go run run.go -- nil*.go
This is due to several factors:
* SSA has improved elimination of unnecessary nil checks.
* SSA is missing elimination of implicit nil checks.
* SSA is missing extra logging about why nil checks were removed.
I'm not sure how best to resolve these failures,
particularly in a world in which the two backends
will live side by side for some time.
For now, punt on the problem.
Change-Id: Ib2ca6824551671f92e0e1800b036f5ca0905e2a3
Reviewed-on: https://go-review.googlesource.com/13474
Reviewed-by: Keith Randall <khr@golang.org>
We were not recording function calls as
changing the state of memory.
As a result, the scheduler was not aware that
storing values to the stack in order to make a
function call must happen *after* retrieving
results from the stack from a just-completed
function call.
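A hedged illustration (not from this CL) of the ordering the scheduler
must respect with the stack-based calling convention:

func caller() int {
    a, b := pair()   // results come back in the outgoing args area...
    return sum(a, b) // ...and must be read before sum's arguments overwrite it
}

func pair() (int, int) { return 1, 2 }
func sum(x, y int) int { return x + y }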
This fixes the container/ring tests.
This was my first experience debugging an issue
using the HTML output. I'm feeling quite
pleased with it.
Change-Id: I9e8276846be9fd7a60422911b11816c5175e3d0a
Reviewed-on: https://go-review.googlesource.com/13560
Reviewed-by: Keith Randall <khr@golang.org>
Hardcoded the limit so that only constants are allowed.
Change-Id: Idb9b07b4871db7a752a79e492671e9b41207b956
Reviewed-on: https://go-review.googlesource.com/13257
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Implement ITAB, selecting the itable field of an interface.
Soften the lowering check to allow lowerings that leave
generic but dead ops behind. (The ITAB lowering does this.)
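A hedged example (not from this CL) of code that exercises the new op:
comparing a non-empty interface against nil reads its itable word.

func isNil(err error) bool {
    return err == nil // compares the itab word of err against nil
}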
Change-Id: Icc84961dd4060d143602f001311aa1d8be0d7fc0
Reviewed-on: https://go-review.googlesource.com/13144
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
I find myself always adding this in temporarily.
Make it permanent.
Change-Id: I1646b3930a07d0ea01840736ccd449b7fd24f06e
Reviewed-on: https://go-review.googlesource.com/13141
Reviewed-by: Keith Randall <khr@golang.org>
This is helpful when debugging generated code.
Change-Id: I268efa3593a03bb2c4e9f07d9034c004cd40df41
Reviewed-on: https://go-review.googlesource.com/13099
Reviewed-by: Keith Randall <khr@golang.org>
This fixes the crypto/subtle tests.
Change-Id: Ie6e721eec3481f67f13de1bfbd7988e227793148
Reviewed-on: https://go-review.googlesource.com/13000
Reviewed-by: Keith Randall <khr@golang.org>
The only types that remain in the ssa package
are special compiler-only types.
Change-Id: If957abf128ec0778910d67666c297f97f183b7ee
Reviewed-on: https://go-review.googlesource.com/12933
Reviewed-by: Keith Randall <khr@golang.org>
From compiling Go, there were 260 functions where XOR was needed.
Many of the changes required to implement XOR were already
done in CL 12813.
Change-Id: I5a68aa028f5ed597bc1d62cedbef3620753dfe82
Reviewed-on: https://go-review.googlesource.com/12901
Reviewed-by: Keith Randall <khr@golang.org>
The existing backend simply elides OCONVNOP.
There's no reason for us to do anything differently.
Rather than insert ConvNops and then rewrite them
away, stop creating them in the first place.
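A hedged example (not from this CL) of a conversion that is an OCONVNOP
and needs no code:

type Celsius float64

func toCelsius(f float64) Celsius {
    return Celsius(f) // same representation, so no code is generated
}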
Change-Id: I4bcbe2229fcebd189ae18df24f2c612feb6e215e
Reviewed-on: https://go-review.googlesource.com/12810
Reviewed-by: Keith Randall <khr@golang.org>
Convert shift ops to also encode the size of the shift amount.
Change signed right shift from using CMOV to using bit twiddles.
It is a little bit better (5 instructions instead of 4, but fewer
bytes and slightly faster code). It's also a bit faster than
the 4-instruction branch version, even with a very predictable
branch. As tested on my machine, YMMV.
Implement OCOM while we are here.
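For reference, a hedged sketch of the semantics the bit twiddles implement
branchlessly (this is the language rule, not the actual lowering): shift
amounts of 64 or more yield only sign bits, so the amount can be clamped.

func sar64(x int64, s uint64) int64 {
    if s > 63 {
        s = 63 // every result bit is a copy of the sign bit
    }
    return x >> s
}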
Change-Id: I8ca12dd62fae5d626dc0e6da5d4bbd34fd9640d2
Reviewed-on: https://go-review.googlesource.com/12867
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Lots and lots of ops!
Also XOR for good measure.
Add a pass to the compiler generator to check that all of the
architecture-specific opcodes are handled by genValue. We will
catch any missing ones if we come across them during compilation,
but it is probably better to catch them statically.
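A rough sketch of the kind of check (identifiers here are assumptions,
not the generator's actual names):

func checkGenValue(archOps []string, handled map[string]bool) {
    for _, name := range archOps {
        if !handled[name] {
            panic("genValue not implemented: " + name)
        }
    }
}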
Change-Id: Ic4adfbec55c8257f88117bc732fa664486262868
Reviewed-on: https://go-review.googlesource.com/12813
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
From compiling Go, there were 761 functions where OR was needed.
Change-Id: Ied8bf59cec50a3175273387bc7416bd042def6d8
Reviewed-on: https://go-review.googlesource.com/12766
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
With this, all non-float, non-complex
binary ops found in the standard library
are implemented.
Change-Id: I6087f115229888c0dce10ab35db3fd36a0e0a8b1
Reviewed-on: https://go-review.googlesource.com/12799
Reviewed-by: Keith Randall <khr@golang.org>
Together with teaching SSA to generate static data,
this fixes the encoding/pem and hash/adler32 tests.
Change-Id: I75f81f6c995dcb9c6d99bd3acda94a4feea8b87b
Reviewed-on: https://go-review.googlesource.com/12791
Reviewed-by: Keith Randall <khr@golang.org>
The existing backend recognizes special
assignment statements as being implementable
with static data rather than code.
Unfortunately, it assumes that it is in the middle
of codegen; it emits data and modifies the AST.
This does not play well with SSA's two-phase
bootstrapping approach, in which we attempt to
compile code but fall back to the existing backend
if something goes wrong.
To work around this:
* Add the ability to inquire about static data
without side-effects.
* Save the static data required for a function.
* Emit that static data during SSA codegen.
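A hedged example (not from this CL) of an assignment the backend treats
as static data rather than code:

// Initialized entirely in the data section; no init code is generated.
var primes = [4]int{2, 3, 5, 7}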
Change-Id: I2e8a506c866ea3e27dffb597095833c87f62d87e
Reviewed-on: https://go-review.googlesource.com/12790
Reviewed-by: Keith Randall <khr@golang.org>
For integer types less than a machine register, we have to decide
what the invariants are for the high bits of the register. We used
to set the high bits to the correct extension (sign or zero, as
determined by the type) of the low bits.
This CL makes the compiler ignore the high bits of the register
altogether (they are junk).
On the plus side, this means ops that generate subword results don't
have to worry about correctly extending them. On the minus side,
ops that consume subword arguments have to deal with the input
registers not being correctly extended.
For x86, this tradeoff is probably worth it. Almost all opcodes
have versions that use only the correct subword piece of their
inputs. (The one big exception is array indexing.) Not many opcodes
can correctly sign extend on output.
For other architectures, the tradeoff is probably not so clear, as
they don't have many subword-safe opcodes (e.g. 16-bit compare,
ignoring the high 16/48 bits). Fortunately we can decide whether
we do this per-architecture.
For the machine-independent opcodes, we pretend that the "register"
size is equal to the type width, so sign extension is immaterial.
Opcodes that care about the signedness of the input (e.g. compare,
right shift) have two different variants.
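A hedged example (not from this CL) of a consumer that must now extend
its subword input explicitly:

func pick(a []int64, i int8) int64 {
    // The high 56 bits of i's register are junk, so the index must be
    // sign-extended (e.g. MOVBQSX on amd64) before the bounds check and load.
    return a[i]
}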
Change-Id: I465484c5734545ee697afe83bc8bf4b53bd9df8d
Reviewed-on: https://go-review.googlesource.com/12600
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
The only slice/interface comparisons that reach
the backend are comparisons to nil.
Funcs, maps, and channels are reference types,
so pointer equality is enough.
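Hedged examples (not from this CL) of the comparisons involved:

func checks(s []byte, m map[string]int, f func()) bool {
    // Slices (and interfaces) only reach the backend compared against nil;
    // maps, funcs, and channels compare by pointer equality.
    return s == nil && m == nil && f == nil
}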
Change-Id: I60a71da46a36202e9bd62ed370ab7d7f2e2800e7
Reviewed-on: https://go-review.googlesource.com/12715
Reviewed-by: Keith Randall <khr@golang.org>
Before this patch there was only partial support for ANDQconst,
which was not lowered. This patch adds support for AND operations
for all bit sizes and signs.
Change-Id: I3a6b2cddfac5361b27e85fcd97f7f3537ebfbcb6
Reviewed-on: https://go-review.googlesource.com/12761
Reviewed-by: Keith Randall <khr@golang.org>
This mimics the way the old backend
compiles OCALLMETH.
Change-Id: I635c8e7a48c8b5619bd837f78fa6eeba83a57b2f
Reviewed-on: https://go-review.googlesource.com/12549
Reviewed-by: Keith Randall <khr@golang.org>
Some of these were right; others weren't.
Fixes 'GOGC=off GOSSAPKG=mime go test -a mime'.
The right long term fix is probably to teach the
register allocator about in-place instructions.
In the meantime, all the tests that we can run
now pass.
Change-Id: I8e37b00a5f5e14f241b427d45d5f5cc1064883a2
Reviewed-on: https://go-review.googlesource.com/12664
Reviewed-by: Keith Randall <khr@golang.org>
Prior to this, we were smashing our own stack,
which caused the crypto/sha256 tests to fail.
Change-Id: I7dd94cf466d175b3be0cd65f9c4fe8b1223081fe
Reviewed-on: https://go-review.googlesource.com/12660
Reviewed-by: Daniel Morsing <daniel.morsing@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
This prevents panics while attempting to generate code
for the runtime package. Now:
<unknown line number>: internal compiler error: localOffset of non-LocalSlot value: v10 = ADDQconst <*m> [256] v22
Change-Id: I20ed6ec6aae2c91183b8c826b8ebcc98e8ceebff
Reviewed-on: https://go-review.googlesource.com/12655
Reviewed-by: Keith Randall <khr@golang.org>
This generates more efficient code.
Before:
0x003a 00058 (rr.go:7) LEAQ go.string.hdr."="(SB), BX
0x0041 00065 (rr.go:7) LEAQ 16(BX), BP
0x0045 00069 (rr.go:7) MOVQ BP, 16(SP)
After:
0x003a 00058 (rr.go:7) LEAQ go.string."="(SB), BX
0x0041 00065 (rr.go:7) MOVQ BX, 16(SP)
It also matches the existing backend
and is more robust to other changes,
such as CL 11698, which I believe broke
the current code.
This CL fixes the encoding/base64 tests, as run with:
GOGC=off GOSSAPKG=base64 go test -a encoding/base64
Change-Id: I3c475bed1dd3335cc14e13309e11d23f0ed32c17
Reviewed-on: https://go-review.googlesource.com/12654
Reviewed-by: Keith Randall <khr@golang.org>
These temporary environment variables make it
possible to enable using SSA-generated code
for a particular function or package without
having to rebuild the compiler.
This makes it possible to start bulk testing
SSA generated code.
First, bump up the default stack size
(_StackMin in runtime/stack2.go) to something
large like 32768, because without stackmaps
we can't grow stacks.
Then run something like:
for pkg in `go list std`
do
GOGC=off GOSSAPKG=`basename $pkg` go test -a $pkg
done
When a test fails, you can re-run those tests,
selectively enabling one function after another,
until you find the one that is causing trouble.
Doing this right now yields some interesting results:
* There are several packages for which we generate
some code and whose tests pass. Yay!
* We can generate code for encoding/base64, but
tests there fail, so there's a bug to fix.
* Attempting to build the runtime yields a panic during codegen:
panic: interface conversion: ssa.Location is nil, not *ssa.LocalSlot
* The top unimplemented codegen items are (simplified):
59 genValue not implemented: REPMOVSB
18 genValue not implemented: REPSTOSQ
14 genValue not implemented: SUBQ
9 branch not implemented: If v -> b b. Control: XORQconst <bool> [1]
8 genValue not implemented: MOVQstoreidx8
4 branch not implemented: If v -> b b. Control: SETG <bool>
3 branch not implemented: If v -> b b. Control: SETLE <bool>
2 load flags not implemented: LoadReg8 <flags>
2 genValue not implemented: InvertFlags <flags>
1 store flags not implemented: StoreReg8 <flags>
1 branch not implemented: If v -> b b. Control: SETGE <bool>
Change-Id: Ib64809ac0c917e25bcae27829ae634c70d290c7f
Reviewed-on: https://go-review.googlesource.com/12547
Reviewed-by: Keith Randall <khr@golang.org>
Use width-and-signed-specific multiply opcodes.
Implement OMUL.
A few other cleanups.
Fixes #11467.
Change-Id: Ib0fe80a1a9b7208dbb8a2b6b652a478847f5d244
Reviewed-on: https://go-review.googlesource.com/12540
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Add label and goto checks and improve test coverage.
Implement OSWITCH and OSELECT.
Implement OBREAK and OCONTINUE.
Allow generation of code in dead blocks.
Change-Id: Ibebb7c98b4b2344f46d38db7c9dce058c56beaac
Reviewed-on: https://go-review.googlesource.com/12445
Reviewed-by: Keith Randall <khr@golang.org>