45 Commits

Author SHA1 Message Date
Matthew Dempsky
118b3fe7bb cmd/compile/internal/gc: refactor ACALL Prog creation
This abstracts creation of ACALL Progs into package gc. The main
benefit of this today is we can refactor away a lot of common
boilerplate code.

Later, once liveness analysis happens on the SSA graph, this will also
provide an easy insertion point for emitting the PCDATA Progs
immediately before call instructions.

Passes toolstash-check -all.

Change-Id: Ia15108ace97201cd84314f1ca916dfeb4f09d61c
Reviewed-on: https://go-review.googlesource.com/38081
Reviewed-by: Keith Randall <khr@golang.org>
2017-03-13 21:04:16 +00:00
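
A minimal sketch of the refactoring idea in the commit above, assuming one shared helper in package gc; the names staticCall and newProg are illustrative, not the CL's actual API:

package gc

import "cmd/internal/obj"

// staticCall builds the ACALL Prog that each SSA backend previously
// assembled by hand. newProg stands in for however the backend allocates
// a new Prog for the current function.
func staticCall(newProg func(as obj.As) *obj.Prog, target *obj.LSym) *obj.Prog {
    p := newProg(obj.ACALL)
    p.To.Type = obj.TYPE_MEM
    p.To.Name = obj.NAME_EXTERN
    p.To.Sym = target
    // Once liveness analysis runs on the SSA graph, PCDATA Progs could be
    // emitted here, immediately before the call instruction.
    return p
}
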
Matthew Dempsky
08d8d5c986 cmd/compile/internal/ssa: replace {Defer,Go}Call with StaticCall
Passes toolstash-check -all.

Change-Id: Icf8b75364e4761a5e56567f503b2c1cb17382ed2
Reviewed-on: https://go-review.googlesource.com/38080
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-03-13 19:44:36 +00:00
Matthew Dempsky
5e4a958351 cmd/compile: refactor portable SSA Op handling
Several SSA ops will always behave identically regardless of target
architecture, so handle those within gc/ssa.go instead.

Passes toolstash-check -all.

Change-Id: I54d514e80ab86723e44332a5a38e3054cbca8c5d
Reviewed-on: https://go-review.googlesource.com/37931
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2017-03-07 21:17:03 +00:00
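
The dispatch ends up shaped roughly like the sketch below. This is a sketch only: genPortableValue is a hypothetical name, and the exact set of ops handled in gc/ssa.go is whatever the CL chose; the ops listed are merely examples of ops that lower identically on every target.

switch v.Op {
case ssa.OpPhi, ssa.OpVarDef, ssa.OpVarKill, ssa.OpKeepAlive:
    // Target-independent bookkeeping ops: handled once in package gc.
    genPortableValue(s, v)
default:
    // Everything else still goes through the architecture backend.
    thearch.SSAGenValue(s, v)
}
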
Matthew Dempsky
02e36f8c87 cmd/compile/internal/ssa: remove Hmul{8,16}{,u} ops
Change-Id: I90865921584ae4bdfb6c220d439b14593d72b6f9
Reviewed-on: https://go-review.googlesource.com/37752
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2017-03-03 20:47:36 +00:00
Josh Bleecher Snyder
2183135554 cmd/compile: recognize bit test patterns on amd64
Updates #18943

Change-Id: If3080d6133bb6d2710b57294da24c90251ab4e08
Reviewed-on: https://go-review.googlesource.com/36329
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-03-01 00:36:04 +00:00
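
The kind of source pattern this targets (an assumption based on the CL subject; the instructions actually generated depend on the exact rules added): a mask-and-compare against a single bit, which on amd64 can lower to a BT instruction instead of a shift/AND sequence.

package main

import "fmt"

// hasBit reports whether bit k of x is set. The x&(1<<k) != 0 form is the
// shape the amd64 rules can recognize as a bit test.
func hasBit(x uint64, k uint) bool {
    return x&(1<<k) != 0
}

func main() {
    fmt.Println(hasBit(0xA, 1), hasBit(0xA, 2)) // true false
}
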
Ilya Tocar
21c71d7788 cmd/compile/internal/ssa: combine load + op on AMD64
On AMD64, most operations can have one operand in memory.
Combine a load and the operation that depends on it into one new
operation where possible. I've seen no significant performance changes
on go1, but this allows removing ~1.8kb of code from the go tool. In
the math package I see, e.g.:

Remainder-6            70.0ns ± 0%   64.6ns ± 0%   -7.76%  (p=0.000 n=9+1
Change-Id: I88b8602b1d55da8ba548a34eb7da4b25d59a297e
Reviewed-on: https://go-review.googlesource.com/36793
Run-TryBot: Ilya Tocar <ilya.tocar@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-02-17 22:21:49 +00:00
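
An illustrative example of the fusion (which instruction is chosen is up to the rules): the load of *p can fold into the arithmetic as a memory operand, e.g. a single ADDQ with a memory source instead of a MOVQ followed by a register ADDQ.

package main

import "fmt"

// addFrom returns x plus the value p points to. On amd64 the load of *p is a
// candidate for merging into the add as a memory operand.
func addFrom(p *int64, x int64) int64 {
    return x + *p
}

func main() {
    v := int64(40)
    fmt.Println(addFrom(&v, 2)) // 42
}
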
Matthew Dempsky
6a29440dcc cmd/compile/internal/gc: remove more backend Sym uses
Removes all external uses of Linksym and Pkglookup, which are the only
two exported functions that return Syms.

Also add Duffcopy and Duffzero since they're used often enough across
SSA backends.

Passes toolstash -cmp.

Change-Id: I8d3fd048ad5cd676fc46378f09a917569ffc9b2c
Reviewed-on: https://go-review.googlesource.com/36418
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2017-02-06 23:25:44 +00:00
Matthew Dempsky
5f374ea8fb cmd/compile/internal/gc: stop exporting *gc.Sym-typed globals
The arch-specific SSA backends now no longer use gc.Sym either.

Passes toolstash -cmp.

Change-Id: Ic13b934b92a1b89b4b79c6c4796ab0a137608163
Reviewed-on: https://go-review.googlesource.com/36416
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2017-02-06 22:45:49 +00:00
Matthew Dempsky
87c475c227 cmd/compile/internal/ssa: use obj.LSym instead of gc.Sym
Gc's Sym type represents a package-qualified identifier, which is a
frontend concept and doesn't belong in SSA. Bonus: we can replace some
interface{} types with *obj.LSym.

Passes toolstash -cmp.

Change-Id: I456eb9957207d80f99f6eb9b8eab4a1f3263e9ed
Reviewed-on: https://go-review.googlesource.com/36415
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2017-02-06 22:45:34 +00:00
Robert Griesemer
cfd17f51c8 [dev.inline] cmd/compile/internal/ssa: rename various fields from Line to Pos
This is a mostly mechanical rename followed by manual fixes where necessary.

Change-Id: Ie5c670b133db978f15dc03e50dc2da0c80fc8842
Reviewed-on: https://go-review.googlesource.com/34137
Reviewed-by: David Lazar <lazard@golang.org>
2016-12-08 21:36:52 +00:00
Robert Griesemer
eab3707d6d [dev.inline] cmd/compile: rename various fields from Lineno to Pos
Various minor adjustments.

Change-Id: Iedfb97989f7bedaa3e9e8993b167e05f162434a7
Reviewed-on: https://go-review.googlesource.com/34136
Reviewed-by: David Lazar <lazard@golang.org>
2016-12-08 21:35:18 +00:00
Robert Griesemer
82d0caea2c [dev.inline] cmd/internal/src: make Pos implementation abstract
Adjust cmd/compile accordingly.

This will make it easier to replace the underlying implementation.

Change-Id: I33645850bb18c839b24785b6222a9e028617addb
Reviewed-on: https://go-review.googlesource.com/34133
Reviewed-by: David Lazar <lazard@golang.org>
2016-12-08 21:31:28 +00:00
Ilya Tocar
8589049d5c cmd/compile/internal/amd64: break dependency for CVTS[LQ]2S[DS]
CVTSL2SS, CVTSQ2SS, CVTSL2SD and CVTSQ2SD preserve the upper part of the xmm
register, introducing a false dependency on its previous value.
Break it by XORing the destination with itself.
This increases the size of the go executable by 320 bytes, but shows a nice
improvement on go1. Also fixes a performance regression introduced in 1.7.

name                     old time/op    new time/op    delta
BinaryTree17-4              2.20s ± 1%     2.19s ± 0%  -0.36%        (p=0.000 n=18+16)
Fannkuch11-4                2.44s ± 1%     2.45s ± 2%  +0.47%        (p=0.030 n=20+20)
FmtFprintfEmpty-4          40.9ns ± 7%    40.5ns ± 1%    ~           (p=0.531 n=20+16)
FmtFprintfString-4          111ns ± 2%     111ns ± 1%    ~           (p=0.510 n=18+19)
FmtFprintfInt-4            98.3ns ± 3%    99.3ns ± 1%  +1.01%        (p=0.003 n=20+18)
FmtFprintfIntInt-4          148ns ± 3%     147ns ± 1%    ~           (p=0.919 n=20+17)
FmtFprintfPrefixedInt-4     149ns ± 1%     152ns ± 0%  +1.73%        (p=0.000 n=19+17)
FmtFprintfFloat-4           231ns ± 0%     231ns ± 1%    ~           (p=0.678 n=18+19)
FmtManyArgs-4               667ns ± 1%     672ns ± 1%  +0.73%        (p=0.005 n=20+20)
GobDecode-4                5.60ms ± 0%    5.61ms ± 0%  +0.24%        (p=0.000 n=20+20)
GobEncode-4                4.74ms ± 0%    4.73ms ± 1%  -0.20%        (p=0.002 n=20+20)
Gzip-4                      199ms ± 0%     199ms ± 1%  +0.35%        (p=0.000 n=19+20)
Gunzip-4                   31.8ms ± 1%    31.5ms ± 1%  -0.89%        (p=0.000 n=20+20)
HTTPClientServer-4         38.1µs ± 1%    38.0µs ± 1%    ~           (p=0.117 n=19+18)
JSONEncode-4               14.2ms ± 1%    13.4ms ± 0%  -5.73%        (p=0.000 n=20+20)
JSONDecode-4               42.7ms ± 0%    42.7ms ± 1%  +0.18%        (p=0.019 n=18+19)
Mandelbrot200-4            3.26ms ± 0%    2.99ms ± 0%  -8.38%        (p=0.000 n=19+19)
GoParse-4                  2.76ms ± 1%    2.76ms ± 1%    ~           (p=0.583 n=20+20)
RegexpMatchEasy0_32-4      69.5ns ± 0%    69.6ns ± 0%  +0.10%        (p=0.017 n=16+17)
RegexpMatchEasy0_1K-4       703ns ± 0%     708ns ± 3%  +0.65%        (p=0.000 n=17+18)
RegexpMatchEasy1_32-4      68.2ns ± 1%    68.2ns ± 2%    ~           (p=0.094 n=18+20)
RegexpMatchEasy1_1K-4       288ns ± 1%     288ns ± 0%    ~           (p=0.403 n=17+18)
RegexpMatchMedium_32-4      104ns ± 2%     103ns ± 1%    ~           (p=0.110 n=20+16)
RegexpMatchMedium_1K-4     31.7µs ± 3%    31.7µs ± 3%    ~           (p=0.091 n=19+20)
RegexpMatchHard_32-4       1.59µs ± 2%    1.58µs ± 2%    ~           (p=0.083 n=20+20)
RegexpMatchHard_1K-4       48.1µs ± 3%    47.9µs ± 2%    ~           (p=0.461 n=20+19)
Revcomp-4                   344ms ± 0%     345ms ± 0%  +0.08%        (p=0.009 n=18+17)
Template-4                 44.8ms ± 1%    44.7ms ± 1%    ~           (p=0.277 n=20+20)
TimeParse-4                 258ns ± 0%     258ns ± 0%    ~     (all samples are equal)
TimeFormat-4                275ns ± 0%     273ns ± 0%  -0.64%        (p=0.000 n=20+18)

name                     old speed      new speed      delta
GobDecode-4               137MB/s ± 0%   137MB/s ± 0%  -0.24%        (p=0.000 n=20+20)
GobEncode-4               162MB/s ± 0%   162MB/s ± 0%  +0.20%        (p=0.002 n=20+20)
Gzip-4                   97.6MB/s ± 0%  97.3MB/s ± 1%  -0.35%        (p=0.000 n=19+20)
Gunzip-4                  610MB/s ± 1%   615MB/s ± 1%  +0.89%        (p=0.000 n=20+20)
JSONEncode-4              136MB/s ± 1%   145MB/s ± 0%  +6.08%        (p=0.000 n=20+20)
JSONDecode-4             45.5MB/s ± 0%  45.4MB/s ± 1%  -0.17%        (p=0.017 n=18+19)
GoParse-4                21.0MB/s ± 1%  21.0MB/s ± 1%    ~           (p=0.578 n=20+20)
RegexpMatchEasy0_32-4     460MB/s ± 0%   460MB/s ± 0%  -0.09%        (p=0.031 n=16+17)
RegexpMatchEasy0_1K-4    1.46GB/s ± 0%  1.45GB/s ± 3%  -0.64%        (p=0.000 n=17+18)
RegexpMatchEasy1_32-4     469MB/s ± 0%   469MB/s ± 2%  +0.06%        (p=0.043 n=18+20)
RegexpMatchEasy1_1K-4    3.55GB/s ± 1%  3.55GB/s ± 0%    ~           (p=0.057 n=17+18)
RegexpMatchMedium_32-4   9.61MB/s ± 2%  9.64MB/s ± 2%    ~           (p=0.856 n=20+20)
RegexpMatchMedium_1K-4   32.3MB/s ± 3%  32.3MB/s ± 3%    ~           (p=0.085 n=19+20)
RegexpMatchHard_32-4     20.1MB/s ± 2%  20.2MB/s ± 2%    ~           (p=0.086 n=20+20)
RegexpMatchHard_1K-4     21.3MB/s ± 3%  21.4MB/s ± 2%    ~           (p=0.578 n=20+20)
Revcomp-4                 738MB/s ± 0%   737MB/s ± 0%  -0.08%        (p=0.009 n=18+17)
Template-4               43.3MB/s ± 1%  43.4MB/s ± 1%    ~           (p=0.274 n=20+20)

Fixes #16982

Change-Id: If574d66f39f4183a9b1d5ffff0339909cc73f59d
Reviewed-on: https://go-review.googlesource.com/31490
Run-TryBot: Ilya Tocar <ilya.tocar@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-10-20 14:07:23 +00:00
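
A plain Go example of the affected pattern (illustrative; which CVT instruction is emitted depends on the operand types): an int-to-float conversion uses CVTSQ2SD, whose destination keeps its upper bits, so after this CL the compiler XORs the destination register first to break the false dependency.

package main

import "fmt"

// sumFloats converts each element before accumulating; the conversion is
// where the CVTSQ2SD (and, after this CL, the dependency-breaking XOR) shows up.
func sumFloats(xs []int64) float64 {
    var sum float64
    for _, x := range xs {
        sum += float64(x)
    }
    return sum
}

func main() {
    fmt.Println(sumFloats([]int64{1, 2, 3})) // 6
}
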
Cherry Zhang
2756d56c89 cmd/compile: intrinsify math/big.mulWW, divWW on AMD64
Change-Id: I59f7afa7a5803d19f8b21fe70fc85ef997bb3a85
Reviewed-on: https://go-review.googlesource.com/30542
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-10-11 16:07:46 +00:00
Matthew Dempsky
8c24bff52b cmd/compile: layout stack frame during SSA
Identify live stack variables during SSA and compute the stack frame
layout earlier so that we can emit instructions with the correct
offsets upfront.

Passes toolstash/buildall.

Change-Id: I191100dba274f1e364a15bdcfdc1d1466cdd1db5
Reviewed-on: https://go-review.googlesource.com/30216
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-10-04 17:07:36 +00:00
Keith Randall
833ed7c431 cmd/compile: reorganize SSA register numbering
Teach SSA about the cmd/internal/obj/$ARCH register numbering.
It can then return that numbering when requested. Each architecture
no longer needs to know anything about the internal SSA numbering
of registers.

Change-Id: I34472a2736227c15482e60994eebcdd2723fa52d
Reviewed-on: https://go-review.googlesource.com/29249
Reviewed-by: David Chase <drchase@google.com>
2016-09-16 19:01:55 +00:00
Keith Randall
3134ab3c2d cmd/compile: redo nil checks
Get rid of BlockCheck. Josh goaded me into it, and I went
down a rabbithole making it happen.

NilCheck now panics if the pointer is nil and returns void, as before.
BlockCheck is gone, and NilCheck is no longer a Control value for
any block. It just exists (and deadcode knows not to throw it away).

I rewrote the nilcheckelim pass to handle this case.  In particular,
there can now be multiple NilCheck ops per block.

I moved all of the arch-dependent nil check elimination done as
part of ssaGenValue into its own proper pass, so we don't have to
duplicate that code for every architecture.

Making the arch-dependent nil check its own pass means I needed
to add a bunch of flags to the opcode table so I could write
the code without arch-dependent ops everywhere.

Change-Id: I419f891ac9b0de313033ff09115c374163416a9f
Reviewed-on: https://go-review.googlesource.com/29120
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-09-15 02:42:13 +00:00
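
A small example of what the rewritten nilcheckelim pass is for (illustrative): both loads below require p to be non-nil, and with NilCheck ops now allowed to appear several times per block, the pass removes the second, redundant check.

package main

import "fmt"

type T struct{ a, b int }

// sum needs p checked once; the check guarding p.b is redundant after the
// one guarding p.a.
func sum(p *T) int {
    return p.a + p.b
}

func main() {
    fmt.Println(sum(&T{a: 1, b: 2})) // 3
}
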
Keith Randall
167e381f40 cmd/compile: make ssa compilation unconditional
Rip out the code that allows SSA to be used conditionally.

No longer exists:
 ssa=0 flag
 GOSSAHASH
 GOSSAPKG
 SSATEST

GOSSAFUNC now only controls the printing of the IR/html.

Still need to rip out all of the old backend.  It should no longer be
callable after this CL.

Update #16357

Change-Id: Ib30cc18fba6ca52232c41689ba610b0a94aa74f5
Reviewed-on: https://go-review.googlesource.com/29155
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-09-14 17:38:04 +00:00
Keith Randall
c345a3913f cmd/compile: get rid of BlockCall
No need for it, we can treat calls as (mostly) normal values
that take a memory and return a memory.

Lowers the number of basic blocks needed to represent a function.
"go test -c net/http" uses 27% fewer basic blocks.
Probably doesn't affect generated code much, but should help
various passes whose running time and/or space depends on
the number of basic blocks.

Fixes #15631

Change-Id: I0bf21e123f835e2cfa382753955a4f8bce03dfa6
Reviewed-on: https://go-review.googlesource.com/28950
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-09-12 23:27:02 +00:00
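
A sketch of the effect (exact block counts are version-dependent): because a call is now an ordinary value that takes a memory and returns a memory, straight-line code with several calls no longer needs a block boundary at each call.

package main

import "fmt"

// greet contains two calls; with calls modeled as values, this body can stay
// a single basic block instead of being split at each call.
func greet(name string) string {
    s := fmt.Sprintf("hello, %s", name)
    return fmt.Sprintf("%s!", s)
}

func main() {
    fmt.Println(greet("ssa"))
}
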
Michael Pratt
41e1c42028 cmd/compile: refactor out KeepAlive
Reduce the duplication in every arch by moving the code into package gc.

Change-Id: Ia111add8316492571825431ecd4f0154c8792ae1
Reviewed-on: https://go-review.googlesource.com/28481
Run-TryBot: Michael Pratt <mpratt@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-09-04 04:02:12 +00:00
Keith Randall
0c6c3d1de7 cmd/compile: fix noopt build
Atomic add rules were depending on CSE to combine duplicate atomic ops.
With -N, CSE doesn't run.

Redo the rules for atomic add so there's only one atomic op.
Introduce add-to-first-part-of-tuple pseudo-ops to make the atomic add result correct.

Change-Id: Ib132247051abe5f80fefad6c197db8df8ee06427
Reviewed-on: https://go-review.googlesource.com/27991
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2016-08-28 18:54:52 +00:00
Keith Randall
84aac622a4 cmd/compile: intrinsify the rest of runtime/internal/atomic for amd64
Atomic swap, add/and/or, compare and swap.

Also works on amd64p32.

Change-Id: Idf2d8f3e1255f71deba759e6e75e293afe4ab2ba
Reviewed-on: https://go-review.googlesource.com/27813
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2016-08-28 16:31:08 +00:00
Keith Randall
320ddcf834 cmd/compile: inline atomics from runtime/internal/atomic on amd64
Inline atomic reads and writes on amd64.  There's no reason
to pay the overhead of a call for these.

To keep atomic loads from being reordered, we make them
return a <value,memory> tuple.

Change the meaning of resultInArg0 for tuple-generating ops
to mean the first part of the result tuple, not the second.
This means we can always put the store part of the tuple last,
matching how arguments are laid out.  This requires reordering
the outputs of add32carry and sub32carry and their descendants
in various architectures.

benchmark                    old ns/op     new ns/op     delta
BenchmarkAtomicLoad64-8      2.09          0.26          -87.56%
BenchmarkAtomicStore64-8     7.54          5.72          -24.14%

TBD (in a different CL): Cas, Or8, ...

Change-Id: I713ea88e7da3026c44ea5bdb56ed094b20bc5207
Reviewed-on: https://go-review.googlesource.com/27641
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2016-08-25 20:09:04 +00:00
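
For illustration only: the CL targets runtime/internal/atomic, which cannot be imported outside the runtime, so the sync/atomic calls below merely show the shape of the operations involved. The point is that an atomic load or store becomes an inline MOV ordered through the <value,memory> tuple rather than a function call.

package main

import (
    "fmt"
    "sync/atomic"
)

func main() {
    var x int64
    atomic.StoreInt64(&x, 42)
    fmt.Println(atomic.LoadInt64(&x)) // 42
}
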
Keith Randall
3e270ab80b cmd/compile: clean up ctz ops
Now that we have ops that can return 2 results, have BSF return a result
and flags.  We can then get rid of the redundant comparison and use CMOV
instead of CMOVconst ops.

Get rid of a bunch of the ops we don't use.  Ctz{8,16}, plus all the Clzs,
and CMOVNEs.  I don't think we'll ever use them, and they would be easy
to add back if needed.

Change-Id: I8858a1d017903474ea7e4002fc76a6a86e7bd487
Reviewed-on: https://go-review.googlesource.com/27630
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-08-23 23:45:12 +00:00
Keith Randall
69a755b602 [dev.ssa] cmd/compile: port SSA backend to amd64p32
It's not a new backend, just a PtrSize==4 modification
of the existing AMD64 backend.

Change-Id: Icc63521a5cf4ebb379f7430ef3f070894c09afda
Reviewed-on: https://go-review.googlesource.com/25586
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-08-09 15:48:26 +00:00
Keith Randall
d2286ea284 [dev.ssa] Merge remote-tracking branch 'origin/master' into mergebranch
Semi-regular merge from tip into dev.ssa.

Change-Id: Iadb60e594ef65a99c0e1404b14205fa67c32a9e9
2016-08-04 10:08:20 -07:00
Cherry Zhang
111d590f86 cmd/compile: fix possible spill of invalid pointer with DUFFZERO on AMD64
The SSA compiler on AMD64 may spill a Duff-adjusted address as a scalar. If
the object is on the stack and the stack moves, the spilled address becomes
invalid.

Making the spill pointer-typed does not work: the Duff-adjusted address
points to the memory before the area to be zeroed and may be invalid.
This may cause the stack scanning code to panic.

Fix it by doing Duff-adjustment in genValue, so the intermediate value
is not seen by the reg allocator, and will not be spilled.

Add a test to cover both cases. As it depends on allocation, it may
not always be triggered.

Fixes #16515.

Change-Id: Ia81d60204782de7405b7046165ad063384ede0db
Reviewed-on: https://go-review.googlesource.com/25309
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-07-29 01:09:55 +00:00
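
A hypothetical reproducer shape (not the CL's actual test): a function that zeroes a sizable stack object with DUFFZERO while also growing the stack. If the Duff-adjusted address were spilled as a scalar and the stack moved, the reload would be stale; doing the adjustment in genValue keeps that intermediate value away from the register allocator.

package main

import "fmt"

// grow recurses to force stack growth; each frame zeroes a buffer large
// enough to be cleared with DUFFZERO on amd64.
func grow(n int) int {
    if n == 0 {
        return 0
    }
    var buf [256]byte
    buf[0] = byte(n)
    return int(buf[0]) + grow(n-1)
}

func main() {
    fmt.Println(grow(100)) // 5050
}
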
Keith Randall
cf92e3845f [dev.ssa] cmd/compile: use 2-result divide op
We now allow Values to have 2 outputs.  Use that ability for amd64.
This allows x,y := a/b,a%b to use just a single divide instruction.

Update #6815

Change-Id: Id70bcd20188a2dd8445e631a11d11f60991921e4
Reviewed-on: https://go-review.googlesource.com/25004
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
2016-07-18 19:41:05 +00:00
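
The benefiting source pattern, as described above:

package main

import "fmt"

// divmod computes quotient and remainder of the same operands; with the
// 2-result divide op, amd64 emits a single divide instruction for both.
func divmod(a, b int64) (int64, int64) {
    return a / b, a % b
}

func main() {
    fmt.Println(divmod(17, 5)) // 3 2
}
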
Josh Bleecher Snyder
41a7dca272 [dev.ssa] cmd/compile: unify and check LoweredGetClosurePtr
The comments were mostly duplicated; unify them.
Add a check that the required invariant holds.

Change-Id: I42fe09dcd1fac76d3c4e191f7a58c591c5ce429b
Reviewed-on: https://go-review.googlesource.com/24719
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
2016-07-04 01:29:28 +00:00
Josh Bleecher Snyder
22d1318e7b [dev.ssa] cmd/compile: refactor out CheckLoweredPhi
This will be used verbatim in other architectures.

Change-Id: I307891ae597d797fd45f296b6a38ffe9fac6b975
Reviewed-on: https://go-review.googlesource.com/24155
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-06-16 14:34:28 +00:00
Keith Randall
3572c6418b cmd/compile: keep pointer input arguments live throughout function
Introduce a KeepAlive op which makes sure that its argument is kept
live until the KeepAlive.  Use KeepAlive to mark pointer input
arguments as live after each function call and at each return.

We do this change only for pointer arguments.  Those are the
critical ones to handle because they might have finalizers.
Doing compound arguments (slices, structs, ...) is more complicated
because we would need to track field liveness individually (we do
that for auto variables now, but inputs require extra trickery).

Turn off the automatic marking of args as live.  That way, when args
are explicitly nulled, plive will know that the original argument is
dead.

The KeepAlive op will be the eventual implementation of
runtime.KeepAlive.

Fixes #15277

Change-Id: I5f223e65d99c9f8342c03fbb1512c4d363e903e5
Reviewed-on: https://go-review.googlesource.com/22365
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
2016-05-18 19:25:27 +00:00
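
An illustration of why keeping pointer arguments live matters (hypothetical example; runtime.KeepAlive is the user-level form the commit says this op will eventually implement): without the argument being kept live, the finalizer could run while the resource is still in use inside read.

package main

import (
    "fmt"
    "runtime"
)

type File struct{ fd int }

func read(f *File) {
    fmt.Println("reading fd", f.fd)
    // ... long operation during which f would otherwise be dead ...
    runtime.KeepAlive(f) // explicit form of what the KeepAlive op expresses
}

func main() {
    f := &File{fd: 3}
    runtime.SetFinalizer(f, func(f *File) { fmt.Println("closing fd", f.fd) })
    read(f)
}
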
Keith Randall
4fa050024f cmd/compile: enable constant-time CFG editing
Provide indexes along with block pointers for Preds
and Succs arrays.  This allows us to splice edges in
and out of those arrays in constant time.

Fixes worst-case O(n^2) behavior in deadcode and fuse.

benchmark                     old ns/op      new ns/op     delta
BenchmarkFuse1-8              2065           2057          -0.39%
BenchmarkFuse10-8             9408           9073          -3.56%
BenchmarkFuse100-8            105238         76277         -27.52%
BenchmarkFuse1000-8           3982562        1026750       -74.22%
BenchmarkFuse10000-8          301220329      12824005      -95.74%
BenchmarkDeadCode1-8          1588           1566          -1.39%
BenchmarkDeadCode10-8         4333           4250          -1.92%
BenchmarkDeadCode100-8        32031          32574         +1.70%
BenchmarkDeadCode1000-8       590407         468275        -20.69%
BenchmarkDeadCode10000-8      17822890       5000818       -71.94%
BenchmarkDeadCode100000-8     1388706640     78021127      -94.38%
BenchmarkDeadCode200000-8     5372518479     168598762     -96.86%

Change-Id: Iccabdbb9343fd1c921ba07bbf673330a1c36ee17
Reviewed-on: https://go-review.googlesource.com/22589
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-05-05 15:58:59 +00:00
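
A minimal sketch of the constant-time splice, assuming an Edge that pairs a block pointer with the index of its twin edge on the other side (field and function names here are illustrative, not the exact ssa package API):

package main

import "fmt"

type Block struct {
    ID    int
    Preds []Edge
    Succs []Edge
}

// Edge points at the other block and records where the matching reverse
// edge sits in that block's slice.
type Edge struct {
    b *Block
    i int
}

// removeSucc deletes b.Succs[k] in O(1) by swapping with the last entry on
// both sides and patching the stored indexes.
func removeSucc(b *Block, k int) {
    e := b.Succs[k]

    // Detach the matching pred entry on the other side.
    p := e.b.Preds
    last := len(p) - 1
    if e.i != last {
        p[e.i] = p[last]
        moved := p[e.i]
        moved.b.Succs[moved.i].i = e.i // twin of the moved edge now points at slot e.i
    }
    e.b.Preds = p[:last]

    // Remove the succ entry itself the same way.
    s := b.Succs
    last = len(s) - 1
    if k != last {
        s[k] = s[last]
        moved := s[k]
        moved.b.Preds[moved.i].i = k
    }
    b.Succs = s[:last]
}

func main() {
    a, b, c := &Block{ID: 1}, &Block{ID: 2}, &Block{ID: 3}
    a.Succs = []Edge{{b, 0}, {c, 0}}
    b.Preds = []Edge{{a, 0}}
    c.Preds = []Edge{{a, 1}}
    removeSucc(a, 0) // splice out the edge a->b
    fmt.Println(len(a.Succs), a.Succs[0].b.ID, len(b.Preds)) // 1 3 0
}
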
Keith Randall
cd956576ae cmd/compile: make vet happy with ssa code
Fixes #15488

Change-Id: I054eb1e1c859de315e3cdbdef5428682bce693fd
Reviewed-on: https://go-review.googlesource.com/22609
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-04-29 18:49:23 +00:00
Keith Randall
9e3c68f1e0 cmd/compile: get rid of most byte and word insns for amd64
Now that we're using 32-bit ops for 8/16-bit logical operations
(to avoid partial register stalls), there's really no need to
keep track of the 8/16-bit ops at all.  Convert everything we
can to 32-bit ops.

This CL is the obvious stuff.  I might think a bit more about
whether we can get rid of weirder stuff like HMULWU.

The only downside to this CL is that we lose some information
about constants.  If we had source like:
  var a byte = ...
  a += 128
  a += 128
We will convert that to a += 256, when we could get rid of the
add altogether.  This seems like a fairly unusual scenario and
I'm happy with forgoing that optimization.

Change-Id: Ia7c1e5203d0d110807da69ed646535194a3efba1
Reviewed-on: https://go-review.googlesource.com/22382
Reviewed-by: Todd Neal <todd@tneal.org>
2016-04-23 16:30:27 +00:00
Keith Randall
f8fc3710fd cmd/compile: handle mem copies in amd64 backend
Fixes noopt builder.

Change-Id: If13373b2597f0fcc9b1b2f9c860f2bd043e43c6c
Reviewed-on: https://go-review.googlesource.com/22338
Reviewed-by: Keith Randall <khr@golang.org>
2016-04-21 17:04:01 +00:00
Keith Randall
0004f34cef cmd/compile: regalloc enforces 2-address instructions
Instead of being a hint, resultInArg0 is now enforced by regalloc.
This allows us to delete all the code from amd64/ssa.go which
deals with converting from a semantically three-address instruction
into some copies plus a two-address instruction.

Change-Id: Id4f39a80be4b678718bfd42a229f9094ab6ecd7c
Reviewed-on: https://go-review.googlesource.com/21816
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-04-10 23:20:38 +00:00
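
A sketch of the guarantee regalloc now provides (illustrative pseudo-helper, not the actual regalloc code): for an op marked resultInArg0, the output register must equal the first input's register, with a copy inserted beforehand when it does not, so the backend can emit the two-address instruction directly.

// ensureResultInArg0 returns the register the instruction should use as both
// source and destination, emitting a register-to-register move first if the
// allocator did not already place arg0 in the output register.
func ensureResultInArg0(outReg, arg0Reg int16, emitMove func(dst, src int16)) int16 {
    if outReg != arg0Reg {
        emitMove(outReg, arg0Reg) // e.g. MOVQ arg0, out
    }
    return outReg // ADDQ/SUBQ/etc. can now read and write outReg
}
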
Ilya Tocar
036d09d5bf cmd/compile/internal/amd64: Use 32-bit operands for byte operations
We already generate ADDL for byte operations; reflect this in the code.
This also allows INC/DEC for ±1 operations, which are one byte shorter,
and enables LEA for 3-operand addition/subtraction.

Change-Id: Ibfdfee50667ca4cd3c28f72e3dece0c6d114d3ae
Reviewed-on: https://go-review.googlesource.com/21251
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Ilya Tocar <ilya.tocar@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-04-05 15:15:19 +00:00
Keith Randall
9f66636c93 cmd/compile: don't put SP in index slot
For idx1 ops, SP can appear in the index slot.
Swap SP into the base register slot so we can encode
the instruction.

Fixes #15053

Change-Id: I19000cc9d6c86c7611743481e6e2cb78b1ef04eb
Reviewed-on: https://go-review.googlesource.com/21384
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Alexandru Moșoi <alexandru@mosoi.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-03-31 22:10:47 +00:00
Keith Randall
af517da2f9 cmd/compile: Add more idx1 load/store instructions
Helpful for indexed loads and stores when the stride is not equal to
the size being loaded/stored.

Update #7927

Change-Id: I8714dd4c7b18a96a611bf5647ee21f753d723945
Reviewed-on: https://go-review.googlesource.com/21346
Run-TryBot: Todd Neal <todd@tneal.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Todd Neal <todd@tneal.org>
2016-03-31 17:30:40 +00:00
Alexandru Moșoi
dc5a7682f0 cmd/compile: use inc/dec for bytes, too
Change-Id: Ib2890ab1983cbef7c1c1ee5a10204ba3ace19b53
Reviewed-on: https://go-review.googlesource.com/21312
Run-TryBot: Alexandru Moșoi <alexandru@mosoi.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-03-30 20:18:16 +00:00
Keith Randall
7fc5621991 cmd/compile: define high bits of AuxInt
Previously if we were only using the low bits of AuxInt,
the high bits were ignored and could be junk.  This CL
changes that behavior to define the high bits to be the
sign-extended version of the low bits for all cases.

There are 2 main benefits:
- Deterministic representation.  This helps with CSE.
  (Const8 [0x1]) and (Const8 [0x101]) used to be the same "value"
  but CSE couldn't see them as such.
- Testability.  We can check that all ops leave AuxInt in a state
  consistent with the new rule.  In the old scheme, it was hard
  to check whether a rule correctly used only the low-order bits.
Side benefits:
- ==0 and !=0 tests are easier.

Drawbacks:
- This differs from the runtime representation in registers,
  where it is important that we allow upper bits to be undefined
  (so we're not sign/zero-extending all the time).
- Ops that treat AuxInt as unsigned (shifts, mostly) need to be
  a bit more careful.

Change-Id: I9a685ff27e36dc03287c9ab1cecd6c0b4045c819
Reviewed-on: https://go-review.googlesource.com/21256
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-03-30 04:48:28 +00:00
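
A sketch of the canonical form for small constants (helper name is illustrative): the low bits are sign-extended to 64 bits, so equal constants get bit-identical AuxInts (which is what lets CSE merge them) and ==0/!=0 checks become plain comparisons.

package main

import "fmt"

// canonical8 returns the canonical AuxInt for an 8-bit constant:
// the low 8 bits sign-extended to int64.
func canonical8(x int64) int64 {
    return int64(int8(x))
}

func main() {
    fmt.Println(canonical8(0x1), canonical8(0x101)) // 1 1: CSE can now merge these
    fmt.Println(canonical8(0x80))                   // -128 (sign-extended)
}
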
Matthew Dempsky
da19a0cff4 cmd/compile: fix plan9-amd64 build
The previous rules to combine indexed loads produced addresses like:

    From: obj.Addr{
        Type:   TYPE_MEM,
        Reg:    REG_CX,
        Name:   NAME_AUTO,
        Offset: 121,
        ...
    }

which are erroneous because NAME_AUTO implies a base register of
REG_SP, and cmd/internal/obj/x86 makes many assumptions to this
effect.  Note that previously we were also producing an extra "ADDQ
SP, CX" instruction, so indexing off of SP was already handled.

The approach taken by this CL to address the problem is to instead
produce addresses like:

    From: obj.Addr{
        Type:   TYPE_MEM,
        Reg:    REG_SP,
        Name:   NAME_AUTO,
        Offset: 121,
        Index:  REG_CX,
        Scale:  1,
    }

and to omit the "ADDQ SP, CX" instruction.

Downside to this approach is it requires adding a lot of new
MOV[WLQ]loadidx1 instructions that nearly duplicate functionality of
the existing MOV[WLQ]loadidx[248] instructions, but with a different
Scale.

Fixes #15001.

Change-Id: Iad9a1a41e5e2552f8d22e3ba975e4ea0862dffd2
Reviewed-on: https://go-review.googlesource.com/21245
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-03-29 03:22:06 +00:00
David Chase
8eec2bbfbc cmd/compile: added some intrinsics to SSA back end
One intrinsic was needed to help get the very best
performance out of a future GC; as long as that one was
being added, I also added Bswap since that is sometimes
a handy thing to have.  I had intended to fill out the
bit-scan intrinsic family, but the mismatch between the
"scan forward" instruction and "count leading zeroes"
was large enough to cause me to leave it out -- it poses
a dilemma that I'd rather dodge right now.

These intrinsics are not exposed for general use.
That's a separate issue requiring an API proposal change
( https://github.com/golang/proposal )

All intrinsics are tested, both that they are substituted
on the appropriate architecture, and that they produce the
expected result.

Change-Id: I5848037cfd97de4f75bdc33bdd89bba00af4a8ee
Reviewed-on: https://go-review.googlesource.com/20564
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-03-28 16:29:59 +00:00
Keith Randall
68e86e6dfa cmd/compile: MOVBload and MOVBQZXload are the same op
No need to have both ops when they do the same thing.
Just declare MOVBload to zero extend and we can get rid
of MOVBQZXload.  Same for W and L.

Kind of a followon cleanup for https://go-review.googlesource.com/c/19506/
Should enable an easier fix for #14920

Change-Id: I7cfac909a8ba387f433a6ae75c050740ebb34d42
Reviewed-on: https://go-review.googlesource.com/21004
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-03-23 00:28:01 +00:00
Michael Pratt
a4e31d42ee cmd/compile: remove amd64 code from package gc and the core gen tool
Parts of the SSA compiler in package gc contain amd64-specific code,
most notably Prog generation. Move this code into package amd64, so that
other architectures can be added more easily.

In package gc, this change is just moving code. There are no functional
changes or even any larger structural changes beyond changing function
names (mostly for export).

In the cmd/compile/internal/ssa/gen tool, more information is included
in arch to remove the AMD64-specific behavior in the main portion of the
tool. The generated opGen.go is identical.

Change-Id: I8eb37c6e6df6de1b65fa7dab6f3bc32c29daf643
Reviewed-on: https://go-review.googlesource.com/20609
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-03-14 16:59:03 +00:00