Includes:
- hmul (all widths)
- compare for boolean result, and simplifications
- shift operations, plus the changes/additions needed to implement
them (ORN, ADDME, ADDC)
Also fixed a backwards-operand CMP.
Change-Id: Id723c4e25125c38e0d9ab9ec9448176b75f4cdb4
Reviewed-on: https://go-review.googlesource.com/25410
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Support the following:
- Shifts. ARM64 machine instructions use only the lowest 6 bits of the
shift amount (i.e. shift mod 64). Use a conditional selection
instruction to ensure Go semantics (see the sketch below).
- Zero/Move. Alignment is ensured.
- Hmul, Avg64u, Sqrt.
- Reserve R18 (the platform register in the ARM64 ABI) and R29 (the
frame pointer in the ARM64 ABI).
Everything compiles, all.bash passed (with non-SSA test disabled).
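For a concrete picture of the shift fixup, here is a minimal sketch in
plain Go of the semantics the conditional select must produce
(illustrative code, not the backend's actual lowering):

package main

import "fmt"

// goRsh models the semantics the backend must preserve: Go defines x >> s
// to be 0 for any s >= 64, while a bare ARM64 LSR uses only the low 6 bits
// of s (i.e. s % 64). A conditional select picks between the two outcomes.
func goRsh(x uint64, s uint) uint64 {
	hw := x >> (s & 63) // what the machine shift computes on its own
	if s >= 64 {        // what the CSEL-style fixup must select instead
		return 0
	}
	return hw
}

func main() {
	fmt.Println(goRsh(1<<40, 64)) // 0; mod-64 hardware alone would give 1<<40
}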
Change-Id: Ia8ed58dae5cbc001946f0b889357b258655078b1
Reviewed-on: https://go-review.googlesource.com/25290
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Make tuple types and their SelectX ops fully generic.
These ops no longer need to be lowered.
Regalloc understands them and their tuple-generating arguments.
We can now have opcodes returning arbitrary pairs of results.
(And it would be easy to move to >2 results if needed.)
Update arm implementation to the new standard.
Implement just enough in 386 port to do 64-bit add.
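As a rough picture of why paired results matter: a 64-bit add on 386 is
an ADDL producing a (sum, carry) pair, followed by an ADCL consuming the
carry. A hedged sketch in Go (math/bits postdates this CL and is used
here only to make the tuple visible):

package sketch

import "math/bits"

// add64via32 builds a 64-bit add out of 32-bit halves: the low-word add
// yields a (sum, carry) tuple -- exactly the kind of two-result op the
// generic tuple support expresses -- and the high-word add consumes the
// carry via Select0/Select1-style projections.
func add64via32(xlo, xhi, ylo, yhi uint32) (lo, hi uint32) {
	lo, carry := bits.Add32(xlo, ylo, 0) // ADDL: Select0 = sum, Select1 = carry
	hi, _ = bits.Add32(xhi, yhi, carry)  // ADCL: uses the carry half
	return lo, hi
}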
Change-Id: I370ed5aacce219c82e1954c61d1f63af76c16f79
Reviewed-on: https://go-review.googlesource.com/24976
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
NaCl code runs in a sandbox, and there are restrictions on the
instructions it may use
(https://developer.chrome.com/native-client/reference/sandbox_internals/arm-32-bit-sandbox).
Like the legacy backend, on NaCl,
- don't use R9, which is used as NaCl's "thread pointer".
- don't use Duff's device.
- don't use indexed load/stores.
- the assembler rewrites DIV/MOD to runtime calls, which on NaCl
clobbers R12, so R12 is marked as clobbered for DIV/MOD.
- other restrictions are satisfied by the assembler.
Enable SSA specific tests on nacl/arm, and disable non-SSA ones.
Updates #15365.
Change-Id: I9262693ec6756b89ca29d3ae4e52a96fe5403b02
Reviewed-on: https://go-review.googlesource.com/24859
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
If a spill is used to satisfy a merge edge (in shuffle), don't sink
it out of the loop.
This was found in the following code (on ARM), where there is a stack
Phi (v268) inside a loop (b36 -> ... -> b47 -> b38 -> b36).
(before shuffle)
b36: <- b34 b38
	...
	v268 = Phi <int> v410 v360 : autotmp_198[int]
	...
	... -> b47
b47: <- b44
	...
	v360 = ... : R6
	v230 = StoreReg <int> v360 : autotmp_198[int]
	v261 = CMPconst <flags> [0] v360
	EQ v261 -> b49 b38 (unlikely)
b38: <- b47
	...
	Plain -> b36
During shuffle, v230 (the spill of v360) is found to satisfy v268, but
its use is not recorded in shuffle, and v230 is sunk out of the loop
(to b49), which leads to a bad value in v268.
This never seemed to happen on AMD64 (in make.bash) until 4 registers
were removed.
Change-Id: I01dfc28ae461e853b36977c58bcfc0669e556660
Reviewed-on: https://go-review.googlesource.com/24858
Reviewed-by: David Chase <drchase@google.com>
Provide better diagnostic messages.
Use an int for numRegs comparisons,
to avoid asking whether a uint8 is > 255.
Change-Id: I33ae193ce292b24b369865abda3902c3207d7d3f
Reviewed-on: https://go-review.googlesource.com/24135
Reviewed-by: Keith Randall <khr@golang.org>
Use the hardware g register (R10) for GetG, and allow g to appear on
the LHS of some ops.
Progress on SSA backend for ARM. Now everything compiles and runs.
Updates #15365.
Change-Id: Icdf93585579faa86cc29b1e17ab7c90f0119fc4e
Reviewed-on: https://go-review.googlesource.com/23952
Reviewed-by: David Chase <drchase@google.com>
- 64-bit signed right shift was wrong for shift amounts larger than
0x80000000.
- for Lsh-followed-by-Rsh, the intermediate value should be full int
width, so when it is spilled MOVW should be used.
- use RET for RetJmp, so the assembler can take care of restoring LR
in the non-leaf case.
- reserve R9 in dynlink mode. R9 is used for the GOT by the assembler.
Progress on SSA backend for ARM. Still not complete.
Updates #15365.
Change-Id: I3caca256b92ff7cf96469da2feaf4868a592efc5
Reviewed-on: https://go-review.googlesource.com/23793
Reviewed-by: David Chase <drchase@google.com>
Auto-generate register masks and load them through Config.
Passed toolstash -cmp on AMD64.
Tests phi_ssa.go and regalloc_ssa.go in cmd/compile/internal/gc/testdata
passed on ARM.
Updates #15365.
Change-Id: I393924d68067f2dbb13dab82e569fb452c986593
Reviewed-on: https://go-review.googlesource.com/23292
Reviewed-by: David Chase <drchase@google.com>
This has a minor performance cost, but far less than is being gained by SSA.
As an experiment, enable it during the Go 1.7 beta.
Having frame pointers on by default makes Linux's perf, Intel VTune,
and other profilers much more useful, because it lets them gather a
stack trace efficiently on profiling events.
(It doesn't help us that much, since when we walk the stack we usually
need to look up PC-specific information as well.)
Fixes #15840.
Change-Id: I4efd38412a0de4a9c87b1b6e5d11c301e63f1a2a
Reviewed-on: https://go-review.googlesource.com/23451
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Fix hardcoded flag register mask in ssa/flagalloc.go by auto-generating
the mask.
Also fix a mistake about conditional branches in the previous CL.
Progress on SSA backend for ARM. Still not complete. Now the
"container/ring" package compiles and its tests pass.
Updates #15365.
Change-Id: Id7c8805c30dbb8107baedb485ed0f71f59ed6ea8
Reviewed-on: https://go-review.googlesource.com/23093
Reviewed-by: Keith Randall <khr@golang.org>
Run live vars test only on ssa builds.
We can't just drop KeepAlive ops during regalloc. We need
to replace them with copies.
Change-Id: Ib4b3b1381415db88fdc2165fc0a9541b73ad9759
Reviewed-on: https://go-review.googlesource.com/23225
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Introduce a KeepAlive op which makes sure that its argument is kept
live until the KeepAlive. Use KeepAlive to mark pointer input
arguments as live after each function call and at each return.
We do this change only for pointer arguments. Those are the
critical ones to handle because they might have finalizers.
Doing compound arguments (slices, structs, ...) is more complicated
because we would need to track field liveness individually (we do
that for auto variables now, but inputs require extra trickery).
Turn off the automatic marking of args as live. That way, when args
are explicitly nulled, plive will know that the original argument is
dead.
The KeepAlive op will be the eventual implementation of
runtime.KeepAlive.
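For context, the user-level shape of the hazard (illustrative code;
rawRead is a hypothetical stand-in for a raw descriptor read):

package sketch

import (
	"os"
	"runtime"
)

// rawRead stands in for a direct read on a file descriptor.
func rawRead(fd uintptr, p []byte) {}

// readByte shows the hazard the op addresses: once f.Fd() has been
// evaluated, f itself can look dead to liveness analysis, so a finalizer
// that closes the descriptor could run before rawRead uses it.
// runtime.KeepAlive marks f live through the call.
func readByte(f *os.File) byte {
	fd := f.Fd()
	var b [1]byte
	rawRead(fd, b[:])
	runtime.KeepAlive(f) // f stays live until at least this point
	return b[0]
}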
Fixes #15277
Change-Id: I5f223e65d99c9f8342c03fbb1512c4d363e903e5
Reviewed-on: https://go-review.googlesource.com/22365
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
This adds a sparse method for locating nearest ancestors
in a dominator tree, and checks blocks with more than one
predecessor for differences, inserting phi functions where
there are any.
Uses reverse postorder to cut the number of passes, running
from first def to last use ("last use" for a paramout and
mem is end-of-program; last use for a phi input from a
backedge is the source of the back edge).
Includes a cutover from the old algorithm to the new to avoid
paying a large constant factor for small programs. This keeps
normal builds running at about the same speed, while not running
over-long on large machine-generated inputs.
Add "phase" flags for ssa/build -- ssa/build/stats prints
number of blocks, values (before and after linking references
and inserting phis, so expansion can be measured), and their
product; the product governs the cutover, where a good value
seems to be somewhere between 1 and 5 million.
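The cutover test itself is tiny; a sketch (the constant and names are
illustrative -- only the blocks*values product and the 1-5 million range
come from the description above):

package sketch

// A product of blocks and values above this threshold switches phi
// placement from the old algorithm to the sparse-ancestor one.
const cutoverProduct = 2500000 // somewhere between 1 and 5 million

func useSparsePhiPlacement(numBlocks, numValues int) bool {
	return numBlocks*numValues >= cutoverProduct
}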
Among the files compiled by make.bash, this is the shape of
the tail of the distribution for #blocks, #vars, and their
product:
        #blocks    #vars      product
max        6171    28180  173,898,780
99.9%      1641     6548   10,401,878
99%         463     1909      873,721
95%         152      639       95,235
90%          84      359       30,021
The old algorithm is indeed usually fastest, for 99%ile
values of usually.
The fix to LookupVarOutgoing
(https://go-review.googlesource.com/#/c/22790/)
deals with some of the same problems addressed by this CL,
but on at least one bug (#15537) this change is still
a significant help.
With this CL:
/tmp/gopath$ rm -rf pkg bin
/tmp/gopath$ time go get -v -gcflags -memprofile=y.mprof \
github.com/gogo/protobuf/test/theproto3/combos/...
...
real 4m35.200s
user 13m16.644s
sys 0m36.712s
and pprof reports 3.4GB allocated in one of the larger profiles.
With tip:
/tmp/gopath$ rm -rf pkg bin
/tmp/gopath$ time go get -v -gcflags -memprofile=y.mprof \
github.com/gogo/protobuf/test/theproto3/combos/...
...
real 10m36.569s
user 25m52.286s
sys 4m3.696s
and pprof reports 8.3GB allocated in the same larger profile.
With this CL, most of the compilation time on the benchmarked
input is spent in register/stack allocation (cumulative 53%)
and in the sparse lookup algorithm itself (cumulative 20%).
Fixes #15537.
Change-Id: Ia0299dda6a291534d8b08e5f9883216ded677a00
Reviewed-on: https://go-review.googlesource.com/22342
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In regalloc, a sparse map is preallocated for later use by
spill-in-loop sinking. However, variables (spills) are added
during register allocation before spill sinking, and a map
query involving any of these new variables will index out of
bounds in the map.
To fix:
1) fix the queries to use s.orig[v.ID].ID instead, to ensure
proper indexing (see the sketch after this list). Note that s.orig
will be nil for values that are not eligible for spilling (like
memory and flags).
2) add a test.
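In sketch form, fix (1) looks like this (types simplified; only
s.orig[v.ID].ID is from the CL):

package sketch

type ID int32

type Value struct{ ID ID }

type regAllocState struct {
	// orig maps a value ID to the original value it was derived from;
	// nil for values that cannot be spilled (memory, flags).
	orig []*Value
}

// sparseKey returns the ID to use when querying the preallocated sparse
// map: spills created during allocation have IDs past the map's bounds,
// so queries key on the original value's ID instead.
func (s *regAllocState) sparseKey(v *Value) (ID, bool) {
	o := s.orig[v.ID]
	if o == nil {
		return 0, false // not eligible for spilling
	}
	return o.ID, true
}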
Fixes #15585.
Change-Id: I8f2caa93b132a0f2a9161d2178320d5550583075
Reviewed-on: https://go-review.googlesource.com/22911
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
:= is the wrong thing here. The new variable masks the old
variable, so we allocate the slice afresh each time around the loop.
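The pitfall in miniature (illustrative code, not the actual compiler
source):

package main

import "fmt"

func main() {
	var scratch []int // meant to be reused (and grown) across iterations
	for i := 0; i < 3; i++ {
		// BUG: with ':=' the next line would declare a NEW 'scratch'
		// scoped to the loop body, masking the outer slice, so the
		// outer one stays nil and we allocate afresh every iteration:
		//   scratch := append(scratch[:0], i)
		scratch = append(scratch[:0], i) // '=' reuses the outer backing array
		fmt.Println(scratch)
	}
}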
Change-Id: I759c30e1bfa88f40decca6dd7d1e051e14ca0844
Reviewed-on: https://go-review.googlesource.com/22679
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
The cached copy's ID is sometimes outside the bounds of the orig array.
There's no reason to start at the cached copy and work backwards
to the original value. We already have the original value ID at
all the callsites.
Fixes the noopt build.
Change-Id: I313508a1917e838a87e8cc83b2ef3c2e4a8db304
Reviewed-on: https://go-review.googlesource.com/22355
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Improve forward-looking desired register calculations.
It is now inter-block and handles a bunch more cases.
Fixes #14504
Fixes #14828
Fixes #15254
Change-Id: Ic240fa0ec6a779d80f577f55c8a6c4ac8c1a940a
Reviewed-on: https://go-review.googlesource.com/22160
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
This is a fix for the ssacheck builder
(http://build.golang.org/log/baa00f70c34e41186051cfe90568de3d91f115d7)
after CL 21037 for sinking spills down loop exits
(https://go-review.googlesource.com/#/c/21037/).
The fix is to reuse (move) the original spill, thus preserving
the definition of the variable and its use count. Original and
copy both use the same stack slot, but ssacheck needs to see
a definition for the variable itself.
Fixes #15279.
Change-Id: I286285490193dc211b312d64dbc5a54867730bd6
Reviewed-on: https://go-review.googlesource.com/21995
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
For call-free inner loops.
Revised statistics:
85 inner loop spills sunk
341 inner loop spills remaining
1162 inner loop spills that were candidates for sinking
ended up completely register allocated
119 inner loop spills that could have been sunk were used in
"shuffling" at the bottom of the loop.
1 inner loop spill was not sunk because the register assigned
changed between def and exit.
Understanding how to make an inner loop definition not be
a candidate for from-memory shuffling (to force the shuffle
code to choose some other value) should pick up some of the
119 other spills disqualified for this reason.
Modified the stats printing based on feedback from Austin.
Change-Id: If3fb9b5d5a028f42ccc36c4e3d9e0da39db5ca60
Reviewed-on: https://go-review.googlesource.com/21037
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Instead of being a hint, resultInArg0 is now enforced by regalloc.
This allows us to delete all the code from amd64/ssa.go which
deals with converting from a semantically three-address instruction
into some copies plus a two-address instruction.
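Roughly what enforcement means inside regalloc (illustrative names, not
the compiler's actual data structures):

package sketch

// pickResultReg sketches the allocator's job for a resultInArg0 op such
// as ADDQ: the result must land in arg0's register. If arg0 dies at this
// use, its register is reused outright; otherwise a copy preserves the
// still-live arg0 -- the work previously done by hand in amd64/ssa.go.
func pickResultReg(arg0Reg int, arg0Dies bool, alloc func() int, emitCopy func(src, dst int)) int {
	if arg0Dies {
		return arg0Reg // e.g. ADDQ y, x: x's register becomes the result
	}
	dst := alloc()         // arg0 still live: take a fresh register
	emitCopy(arg0Reg, dst) // MOVQ x, dst
	return dst             // ADDQ y, dst
}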
Change-Id: Id4f39a80be4b678718bfd42a229f9094ab6ecd7c
Reviewed-on: https://go-review.googlesource.com/21816
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Be more careful about inserting instrumentation in racewalk.
If the node being instrumented is an OAS and it has a non-empty
Ninit, then append instrumentation to the Ninit list rather than
letting it be inserted before the OAS (and the compilation of its
init list). This handles the case where the Ninit list defines a
variable used in the RHS of the OAS.
Fixes #15091.
Change-Id: Iac91696d9104d07f0bf1bd3499bbf56b2e1ef073
Reviewed-on: https://go-review.googlesource.com/21771
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: David Chase <drchase@google.com>
Some minor scoping cleanups found by a very old version of grind.
Change-Id: I1d373817586445fc87e38305929097b652696fdd
Reviewed-on: https://go-review.googlesource.com/21064
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Start working on arm port. Gets close to correct
code for fibonacci:
func fib(n int) int {
	if n < 2 {
		return n
	}
	return fib(n-1) + fib(n-2)
}
Still a lot to do, but this is a good starting point.
Cleaned up some arch-specific dependencies in regalloc.
Change-Id: I4301c6c31a8402168e50dcfee8bcf7aee73ea9d5
Reviewed-on: https://go-review.googlesource.com/21000
Reviewed-by: David Chase <drchase@google.com>
This seems to help the problem reported in #14606; this
change produces about a 4% improvement (mostly
for the 128-8192 shards).
Fixes #14789.
Change-Id: I1bd52c82d4ca81d9d5e9ab371fdfc860d7e8af50
Reviewed-on: https://go-review.googlesource.com/20660
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Keep track of how many uses each Value has. Each appearance in
Value.Args and in Block.Control counts once.
The number of uses of a value is generically useful to
constrain rewrite rules. For instance, we might want to
prevent merging index operations into loads if the same
index expression is used lots of times.
But I have one use in particular for which the use count is required.
We must make sure we don't combine ops with loads if the load has
more than one use. Otherwise, we may split a single load
into multiple loads and that breaks perceived behavior in
the presence of races. In particular, the load of m.state
in sync/mutex.go:Lock can't be done twice. (I have a separate
CL which triggers the mutex failure. This CL has a test which
demonstrates a similar failure.)
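The shape of the failure in ordinary Go (illustrative; the real trigger
is the m.state load in sync/mutex.go):

package main

import "sync/atomic"

type mutexLike struct{ state int32 }

// tryLock performs ONE load of m.state in the source. If rewrite rules
// folded that load separately into each of its two uses, the generated
// code would read m.state twice, and a concurrent writer could be
// observed at two different values -- visibly wrong under racing access.
func tryLock(m *mutexLike) bool {
	s := m.state
	return s&1 == 0 && atomic.CompareAndSwapInt32(&m.state, s, s|1)
}

func main() { _ = tryLock(&mutexLike{}) }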
Change-Id: Icaafa479239f48632a069d0c3f624e6ebc6b1f0e
Reviewed-on: https://go-review.googlesource.com/20790
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Todd Neal <todd@tneal.org>
x86 has a lot of instructions that require the output to be in the same
register as one of the inputs. When allocating the output register,
allocate the same register as the input if it is available.
Improves the performance of golang.org/x/crypto/sha3 by
10% (from 6% slower than 1.6 to 4% faster).
Fixes #14745
Change-Id: I4d81785240c9368e4dc75107b45c959d200df8e6
Reviewed-on: https://go-review.googlesource.com/20488
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Change the existing flags from compile time consts to be configurable
from the command line.
Change-Id: I4aab4bf3dfcbdd8e2b5a2ff51af95c2543967769
Reviewed-on: https://go-review.googlesource.com/20560
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Todd Neal <todd@tneal.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Make sure we do any just-before-return cleanup on all paths out of a
function, including when recovering. Each exit path should include
deferreturn (if there are any defers) and then the exit
code (e.g. copying heap-escaping return values back to the stack).
Introduce a Defer SSA block type which has two outgoing edges - one the
fallthrough edge (the defer was queued successfully) and one which
immediately returns (the defer had a successful recover() call and
normal execution should resume at the return point).
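The kind of function that exercises both edges (illustrative):

package main

import "fmt"

// f's named result escapes to the heap (the deferred closure writes it).
// On the recover path, execution resumes at f's return point, which must
// still run deferreturn and copy the heap copy of 'result' back to the
// stack slot the caller reads -- the immediate-return edge of the new
// Defer block.
func f() (result int) {
	defer func() {
		if recover() != nil {
			result = -1
		}
	}()
	panic("boom")
}

func main() { fmt.Println(f()) } // prints -1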
Fixes #14725
Change-Id: Iad035c9fd25ef8b7a74dafbd7461cf04833d981f
Reviewed-on: https://go-review.googlesource.com/20486
Reviewed-by: David Chase <drchase@google.com>
The tree's pretty inconsistent about single space vs double space
after a period in documentation. Make it consistently a single space,
per earlier decisions. This means contributors won't be confused by
misleading precedent.
This CL doesn't use go/doc to parse. It only addresses // comments.
It was generated with:
$ perl -i -npe 's,^(\s*// .+[a-z]\.) +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.) +([A-Z])')
$ go test go/doc -update
Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
Reviewed-on: https://go-review.googlesource.com/20022
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In regalloc, make LoadReg instructions use the line number
of their *use*, not their *source*. This reduces the
tendency of debugger stepping to "jump around" the program.
Change-Id: I59e2eeac4dca9168d8af3a93effbc5bdacac2881
Reviewed-on: https://go-review.googlesource.com/19836
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Spotted a minor source of excess allocation in the register
allocator. Rearranged the dominator tree code to pull its
scratch memory from a reused buffer attached to Config.
Change-Id: I6da6e7b112f7d3eb1fd00c58faa8214cdea44e38
Reviewed-on: https://go-review.googlesource.com/19450
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Add the aux type to opcodes.
Add rematerializeable as a flag.
Change-Id: I906e19281498f3ee51bb136299bf26e13a54b2ec
Reviewed-on: https://go-review.googlesource.com/19088
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Todd Neal <todd@tneal.org>
Converted working slices of pointers into slices of
indices. Half the size (on a 64-bit machine) and no pointers
to trace if GC occurs while they're live.
TODO - could expose slice mapping ID->*Block; some dom
clients also construct these.
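Sketched with simplified types, the change looks roughly like:

package sketch

type ID int32

type Block struct{ ID ID }

// domScratch is dominator-tree working storage: IDs instead of pointers,
// half the size on a 64-bit machine and pointer-free, so the GC has
// nothing to trace in it. The shared ID -> *Block slice is the mapping
// the TODO above suggests exposing.
type domScratch struct {
	blocks []*Block // ID -> *Block, built once
	work   []ID     // pointer-free working slice
}

func (s *domScratch) block(id ID) *Block { return s.blocks[id] }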
Minor optimization in regalloc that cuts allocation count.
Minor optimization in compile.go that cuts calls to Sprintf.
Change-Id: I28f0bfed422b7344af333dc52ea272441e28e463
Reviewed-on: https://go-review.googlesource.com/19104
Run-TryBot: Todd Neal <todd@tneal.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Todd Neal <todd@tneal.org>
For each value that needs to be in a fixed register at the end of the
block, try to pick that fixed register when the instruction
generating that value is scheduled (or restored from a spill).
Just used for end-of-block register requirements for now.
Fixed-register instruction requirements (e.g. shift in ecx) can be
added later. Also two-instruction constraints (input reg == output
reg) might be recorded in a similar manner.
Change-Id: I59916e2e7f73657bb4fc3e3b65389749d7a23fa8
Reviewed-on: https://go-review.googlesource.com/18774
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
The OpSB hack didn't quite work. We need to really
CSE these ops to make regalloc happy.
Change-Id: I9f4d7bfb0929407c84ee60c9e25ff0c0fbea84af
Reviewed-on: https://go-review.googlesource.com/19083
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
CSE is one of the slowest compiler phases right now, and we
run it twice.
Instead of using a map to make the initial partition, use a sort.
It is much less memory intensive.
Do a few optimizations to avoid work for size-1 equivalence classes.
Implement -N.
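A hedged sketch of the sort-based initial partition (the key is
simplified; the real one covers op, type, aux, and argument count, and
sort.Slice here is just for brevity):

package sketch

import "sort"

type Value struct{ Op, Type string }

// initialPartition groups values into candidate equivalence classes
// without a map: sort by an equality key, slice out equal runs, and drop
// size-1 runs, since a singleton can never be CSEd with anything.
func initialPartition(vals []*Value) [][]*Value {
	key := func(v *Value) string { return v.Op + "\x00" + v.Type }
	sort.Slice(vals, func(i, j int) bool { return key(vals[i]) < key(vals[j]) })
	var classes [][]*Value
	for i := 0; i < len(vals); {
		j := i + 1
		for j < len(vals) && key(vals[j]) == key(vals[i]) {
			j++
		}
		if j-i > 1 {
			classes = append(classes, vals[i:j])
		}
		i = j
	}
	return classes
}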
Change-Id: I1d2d85d3771abc918db4dd7cc30b0b2d854b15e1
Reviewed-on: https://go-review.googlesource.com/19024
Reviewed-by: David Chase <drchase@google.com>
The x86 backend automatically rewrites MOV $0, AX to
XOR AX, AX. That rewrite isn't ok when the flags register
is live across the MOV. Keep track of which moves care
about preserving flags, then disable this rewrite for them.
On x86, Prog.Mark was being used to hold the length of the
instruction. We already store that in Prog.Isize, so no
need to store it in Prog.Mark also. This frees up Prog.Mark
to hold a bitmask on x86 just like all the other architectures.
Update #12405
Change-Id: Ibad8a8f41fc6222bec1e4904221887d3cc3ca029
Reviewed-on: https://go-review.googlesource.com/18861
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
In code that does:
	var x, z int32
	var y int64
	z = phi(x, int32(y))
We silently drop the int32 cast because truncation is a no-op.
The phi operation needs to make sure it uses the size of the
phi, not the size of its arguments, when generating spills.
Change-Id: I1f7baf44f019256977a46fdd3dad1972be209042
Reviewed-on: https://go-review.googlesource.com/18390
Reviewed-by: David Chase <drchase@google.com>
Forgot to reset these masks before each merge edge is processed.
Change-Id: I2f593189b63f50a1cd12b2dd4645ca7b9614f1f3
Reviewed-on: https://go-review.googlesource.com/18223
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
Reorder how register & stack allocation is done. We used to allocate
registers, then fix up merge edges, then allocate stack slots. This
led to lots of unnecessary copies on merge edges:
	v2 = LoadReg v1
	v3 = StoreReg v2
If v1 and v3 are allocated to the same stack slot, then this code is
unnecessary. But at regalloc time we didn't know the homes of v1 and
v3.
To fix this problem, allocate all the stack slots before fixing up the
merge edges. That way, we know which stack slots values use, so we
know which copies are required.
Use a good technique for shuffling values around on merge edges.
Improves performance of the go1 TimeParse benchmark by ~12%.
Change-Id: I731f43e4ff1a7e0dc4cd4aa428fcdb97812b86fa
Reviewed-on: https://go-review.googlesource.com/17915
Reviewed-by: David Chase <drchase@google.com>
Spilling/restoring flag values is a pain to do during regalloc.
Instead, allocate the flag register in a separate pass. Regalloc then
operates normally on any flag recomputation instructions.
Change-Id: Ia1c3d9e6eff678861193093c0b48a00f90e4156b
Reviewed-on: https://go-review.googlesource.com/17694
Reviewed-by: David Chase <drchase@google.com>
Use a more precise computation of next use. It properly
detects lifetime holes and deallocates values during those holes.
It also uses a more precise version of distance to next use which
affects which values get spilled.
Change-Id: I49eb3ebe2d2cb64842ecdaa7fb4f3792f8afb90b
Reviewed-on: https://go-review.googlesource.com/16760
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>