Since switching to SSA, the only remaining use for the Ullman field
was in tracking whether or not an expression contained a function
call. Give it a new name and encode it in our fancy new bitset field.
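A minimal sketch of the encoding (the names below are illustrative, not
the compiler's actual fields): one bit in a small bitset replaces the
dedicated per-node field.

    type bitset8 uint8

    const flagHasCall bitset8 = 1 << 0

    func (b *bitset8) setHasCall(v bool) {
        if v {
            *b |= flagHasCall
        } else {
            *b &^= flagHasCall
        }
    }

    func (b bitset8) hasCall() bool { return b&flagHasCall != 0 }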
Passes toolstash-check.
Change-Id: I95b7f9cb053856320c0d66efe14996667e6011c2
Reviewed-on: https://go-review.googlesource.com/37721
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
In order to generate accurate tracebacks, the runtime needs to know the
inlined call stack for a given PC. This creates two tables per function
for this purpose. The first table is the inlining tree (stored in the
function's funcdata), which has a node containing the file, line, and
function name for every inlined call. The second table is a PC-value
table that maps each PC to a node in the inlining tree (or -1 if the PC
is not the result of inlining).
To give the appearance that inlining hasn't happened, the runtime also
needs the original source position information of inlined AST nodes.
Previously the compiler plastered over the line numbers of inlined AST
nodes with the line number of the call. This meant that the PC-line
table mapped each PC to the line number of the outermost call in its inlined
call stack, with no way to access the innermost line number.
Now the compiler retains line numbers of inlined AST nodes and writes
the innermost source position information to the PC-line and PC-file
tables. Some tools and tests expect to see outermost line numbers, so we
provide the OutermostLine function for displaying line info.
To keep track of the inlined call stack for an AST node, we extend the
src.PosBase type with an index into a global inlining tree. Every time
the compiler inlines a call, it creates a node in the global inlining
tree for the call, and writes its index to the PosBase of every inlined
AST node. The parent of this node is the inlining tree index of the
call. -1 signifies no parent.
For each function, the compiler creates a local inlining tree and a
PC-value table mapping each PC to an index in the local tree. These are
written to an object file, which is read by the linker. The linker
re-encodes these tables compactly by deduplicating function names and
file names.
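A rough sketch of the per-call tree node described above (field names
are illustrative, not the compiler's actual representation):

    type inlinedCall struct {
        parent int    // index of the parent call in the inlining tree, or -1
        file   string // source position of the call site
        line   int
        fn     string // name of the function inlined at this site
    }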
This change increases the size of binaries by 4-5%. For example, this is
how the go1 benchmark binary is impacted by this change:
section       old bytes    new bytes    delta
.text         3.49M ± 0%   3.49M ± 0%    +0.06%
.rodata       1.12M ± 0%   1.21M ± 0%    +8.21%
.gopclntab    1.50M ± 0%   1.68M ± 0%   +11.89%
.debug_line    338k ± 0%    435k ± 0%   +28.78%
Total         9.21M ± 0%   9.58M ± 0%    +4.01%
Updates #19348.
Change-Id: Ic4f180c3b516018138236b0c35e0218270d957d3
Reviewed-on: https://go-review.googlesource.com/37231
Run-TryBot: David Lazar <lazard@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Add Set3 function to complement existing Set1 and Set2 functions.
Consistently use Set1, Set2 and Set3 for []*Node instead of Set where applicable.
Add SetFirst and SetSecond for setting elements of []*Node to mirror
First and Second for accessing elements in []*Node.
Replace uses of Index with First and Second, and uses of SetIndex with
SetFirst and SetSecond, where applicable.
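A simplified sketch of the accessor style (the real compiler type wraps
a *[]*Node and the Node fields are omitted here):

    type Node struct{ /* ... */ }

    type Nodes struct{ slice []*Node }

    func (n Nodes) First() *Node      { return n.slice[0] }
    func (n Nodes) Second() *Node     { return n.slice[1] }
    func (n Nodes) SetFirst(x *Node)  { n.slice[0] = x }
    func (n Nodes) SetSecond(x *Node) { n.slice[1] = x }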
Passes toolstash -cmp.
Change-Id: I8255aae768cf245c8f93eec2e9efa05b8112b4e5
Reviewed-on: https://go-review.googlesource.com/37430
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Currently the conversion from constant divides to multiplies is mostly
done during the walk pass. This is suboptimal because SSA can
determine that the value being divided by is constant more often
(e.g. after inlining).
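A hypothetical example of the kind of case this helps: the divisor
reaches the walk pass as an inlined temporary rather than a literal, so
only SSA's constant propagation can tell that it is constant.

    func mod(x, y uint) uint { return x % y }

    func bucket(h uint) uint {
        // Once mod is inlined, SSA can prove the divisor is the
        // constant 7 and strength-reduce the divide to a multiply.
        return mod(h, 7)
    }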
Change-Id: If1a9b993edd71be37396b9167f77da271966f85f
Reviewed-on: https://go-review.googlesource.com/37015
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Currently, whether we need a write barrier is simply a property of the
pointer slot being written to.
The only optimization we currently apply using the value being written
is that pointers to stack variables can omit write barriers because
they're only written to stack slots... but we already omit write
barriers for all writes to the stack anyway.
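An illustrative (made-up) example of the distinction: the decision is
made per slot being written, not per value being written.

    type T struct{ p *int }

    var sink *T

    func set(x *int) {
        t := new(T) // escapes via sink below, so it lives on the heap
        t.p = x     // write to a heap slot: write barrier needed
        y := x      // write to a stack slot: no write barrier
        _ = y
        sink = t    // write to a global: write barrier needed
    }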
Passes toolstash -cmp.
Change-Id: I7f16b71ff473899ed96706232d371d5b2b7ae789
Reviewed-on: https://go-review.googlesource.com/37109
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Instead we can just call needwritebarrier when constructing the SSA
representation.
Change-Id: I6fefaad49daada9cdb3050f112889e49dca0047b
Reviewed-on: https://go-review.googlesource.com/34566
Reviewed-by: Cherry Zhang <cherryyz@google.com>
We shouldn't use CONVNOP for conversions between two different
nonempty interface types, because we want to update the itab
in those situations.
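A hypothetical illustration (the interface types are made up):
converting between two different nonempty interface types must produce
a new itab, so it cannot be compiled as a no-op CONVNOP.

    type Closer interface{ Close() error }

    type ReadCloser interface {
        Read(p []byte) (int, error)
        Close() error
    }

    func demote(rc ReadCloser) Closer {
        // Needs the itab for (Closer, dynamic type of rc); reusing
        // ReadCloser's itab here would be wrong.
        return rc
    }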
Fixes #18595
After this CL, we are guaranteed that itabs are unique; that is, there
is only one itab per compile-time-type/concrete-type pair.
See also the tests in CL 35115 and 35116 which make sure this
invariant holds even for shared libraries and plugins.
Unique itabs are required for CL 34810 (faster type switch code).
R=go1.9
Change-Id: Id27d2e01ded706680965e4cb69d7c7a24ac2161b
Reviewed-on: https://go-review.googlesource.com/35119
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This reduces compilation time for the program
in #18602 from 7 hours to 30 min.
Updates #14781
Updates #18602
Change-Id: I3c4af878a08920e6373d3b3b0c4453ee002e32eb
Reviewed-on: https://go-review.googlesource.com/35113
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
XPos is a compact (8 instead of 16 bytes on a 64-bit machine) source
position representation. There is a 1:1 correspondence between each
XPos and each regular Pos, translated via a global table.
In some sense this brings back the LineHist, though positions can
track line and column information; there is an O(1) translation
between the representations (no binary search), and the translation
is factored out.
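A rough sketch of the shape of the representation (field names and
layout are assumptions for illustration, not the actual src package
types):

    type PosBase struct{ filename string }

    type XPos struct {
        index int32  // index of the position base in a global table
        lico  uint32 // packed line and column
    }

    type PosTable struct {
        bases []*PosBase // indexed by XPos.index; translation back is O(1)
    }

    func (t *PosTable) base(x XPos) *PosBase { return t.bases[x.index] }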
The size increase with the prior change is brought down again and
the compiler speed is in line with the master repo (measured on
the same "quiet" machine as for prior change):
name       old time/op     new time/op     delta
Template      256ms ± 1%      262ms ± 2%      ~     (p=0.063 n=5+4)
Unicode       132ms ± 1%      135ms ± 2%      ~     (p=0.063 n=5+4)
GoTypes       891ms ± 1%      871ms ± 1%    -2.28%  (p=0.016 n=5+4)
Compiler      3.84s ± 2%      3.89s ± 2%      ~     (p=0.413 n=5+4)
MakeBash      47.1s ± 1%      46.2s ± 2%      ~     (p=0.095 n=5+5)

name       old user-ns/op  new user-ns/op  delta
Template       309M ± 1%       314M ± 2%      ~     (p=0.111 n=5+4)
Unicode        165M ± 1%       172M ± 9%      ~     (p=0.151 n=5+5)
GoTypes       1.14G ± 2%      1.12G ± 1%      ~     (p=0.063 n=5+4)
Compiler      5.00G ± 1%      4.96G ± 1%      ~     (p=0.286 n=5+4)
Change-Id: Icc570cc60ab014d8d9af6976f1f961ab8828cc47
Reviewed-on: https://go-review.googlesource.com/34506
Run-TryBot: Robert Griesemer <gri@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This replaces the src.Pos LineHist-based position tracking with
the syntax.Pos implementation and updates all uses.
The LineHist table is not used anymore - the respective code is still
there but should be removed eventually. CL forthcoming.
Passes toolstash -cmp when comparing to the master repo (with the
exception of a couple of swapped assembly instructions, likely due
to different instruction scheduling because the line-based sorting
has changed; though this won't affect correctness).
The sizes of various important compiler data structures have increased
significantly (see the various sizes_test.go files); this is probably
the reason for an increase in compilation times (to be addressed). Here
are the results of compilebench -count 5, run on a "quiet" machine (no
apps running besides a terminal):
name       old time/op     new time/op     delta
Template      256ms ± 1%      280ms ±15%    +9.54%  (p=0.008 n=5+5)
Unicode       132ms ± 1%      132ms ± 1%      ~     (p=0.690 n=5+5)
GoTypes       891ms ± 1%      917ms ± 2%    +2.88%  (p=0.008 n=5+5)
Compiler      3.84s ± 2%      3.99s ± 2%    +3.95%  (p=0.016 n=5+5)
MakeBash      47.1s ± 1%      47.2s ± 2%      ~     (p=0.841 n=5+5)

name       old user-ns/op  new user-ns/op  delta
Template       309M ± 1%       326M ± 2%    +5.18%  (p=0.008 n=5+5)
Unicode        165M ± 1%       168M ± 4%      ~     (p=0.421 n=5+5)
GoTypes       1.14G ± 2%      1.18G ± 1%    +3.47%  (p=0.008 n=5+5)
Compiler      5.00G ± 1%      5.16G ± 1%    +3.12%  (p=0.008 n=5+5)
Change-Id: I241c4246cdff627d7ecb95cac23060b38f9775ec
Reviewed-on: https://go-review.googlesource.com/34273
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Various minor adjustments.
Change-Id: Iedfb97989f7bedaa3e9e8993b167e05f162434a7
Reviewed-on: https://go-review.googlesource.com/34136
Reviewed-by: David Lazar <lazard@golang.org>
Adjust cmd/compile accordingly.
This will make it easier to replace the underlying implementation.
Change-Id: I33645850bb18c839b24785b6222a9e028617addb
Reviewed-on: https://go-review.googlesource.com/34133
Reviewed-by: David Lazar <lazard@golang.org>
This is a step toward choosing a different position representation.
By introducing an explicit type, it will be easier to make the
transition step-wise while ensuring everything keeps running.
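A minimal sketch of the pattern (the actual declaration may differ):

    // Pos represents a source position. Making it a named type lets the
    // underlying representation change later without touching call
    // sites that only pass positions around.
    type Pos int32 // placeholder representation for illustration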
This has been reviewed via https://go-review.googlesource.com/#/c/34025/.
Change-Id: Ibceddcd62d8f346321ac3250e3940e9c436ed684
Reviewed-on: https://go-review.googlesource.com/34132
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Lazar <lazard@golang.org>
    func f() {
        g()
    }
We mistakenly don't add a frame pointer for f. This means f
isn't seen when walking the frame pointer linked list. That
matters for kernel-gathered profiles, and is an impediment for
issues like #16638.
To fix, allocate a stack frame even for otherwise frameless functions
like f. It is a bit tricky because we need to avoid some runtime
internals that really, really don't want one.
No test at the moment, as only kernel CPU profiles would catch it.
Tests will come with the implementation of #16638.
Fixes #18103
Change-Id: I411206cc9de4c8fdd265bee2e4fa61d161ad1847
Reviewed-on: https://go-review.googlesource.com/33754
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
When compiling f(a, b, c), we do something like:

    *(SP+0)  = eval(a)
    *(SP+8)  = eval(b)
    *(SP+16) = eval(c)
    call f
If one of those evaluations is later determined to unconditionally panic
(say eval(b) in this example), then the call is deadcode eliminated. But
any previous argument write (*(SP+0)=... here) is still around. Because
we only compute the size of the outarg area for calls which are still
around at the end of optimization, the space needed for *(SP+0)=v is not
accounted for and thus the outarg area may be too small.
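A hypothetical source-level case where one argument's evaluation
unconditionally panics:

    func f(x, y, z int) {}

    func caller(a, c int) {
        var p *int // nil here
        // *p always panics, so the call to f is dead code, yet the
        // stores of the other arguments were already generated.
        f(a, *p, c)
    }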
The fix is to make sure that we evaluate any potentially panicking
operation before we write any of the args to the stack. It turns out
that fix is pretty easy, as we already have such a mechanism available
for function args. We just need to extend it to possibly panicking args
as well.
The resulting code (if b and c can panic, but a can't) is:

    tmpb = eval(b)
    *(SP+16) = eval(c)
    *(SP+0)  = eval(a)
    *(SP+8)  = tmpb
    call f
This change tickled a bug in how we find the arguments for intrinsic
calls, so that latent bug is fixed up as well.
Update #16760.
Change-Id: I0bf5edf370220f82bc036cf2085ecc24f356d166
Reviewed-on: https://go-review.googlesource.com/32551
Reviewed-by: Cherry Zhang <cherryyz@google.com>
This adds a //go:notinheap pragma for declarations of types that must
not be heap allocated. We enforce these rules by disallowing new(T),
make([]T), append([]T), or implicit allocation of T, by disallowing
conversions to notinheap types, and by propagating notinheap to any
struct or array that contains notinheap elements.
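An illustrative declaration in the style the pragma enables (the type
is made up; in practice it is used on runtime-internal structures):

    //go:notinheap
    type offHeapNode struct {
        next *offHeapNode // pointer to a notinheap type: no write barrier
        addr uintptr
    }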
The utility of this pragma is that we can eliminate write barriers for
writes to pointers to go:notinheap types, since the write barrier is
guaranteed to be a no-op. This will let us mark several scheduler and
memory allocator structures as go:notinheap, which will let us
disallow write barriers in the scheduler and memory allocator much
more thoroughly and also eliminate some problematic hybrid write
barriers.
This also makes go:nowritebarrierrec and go:yeswritebarrierrec much
more powerful. Currently we use go:nowritebarrier all over the place,
but it's almost never what you actually want: when write barriers are
illegal, they're typically illegal for a whole dynamic scope. Partly
this is because go:nowritebarrier has been around longer, but it's
also because go:nowritebarrierrec couldn't be used in situations that
had no-op write barriers or where some nested scope did allow write
barriers. go:notinheap eliminates many no-op write barriers and
go:yeswritebarrierrec makes it possible to opt back in to write
barriers, so these two changes will let us use go:nowritebarrierrec
far more liberally.
This updates #13386, which is about controlling pointers from non-GC'd
memory to GC'd memory. That would require some additional pragma (or
pragmas), but could build on this pragma.
Change-Id: I6314f8f4181535dd166887c9ec239977b54940bd
Reviewed-on: https://go-review.googlesource.com/30939
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
aindex is overkill when it's only ever used with known integer
constants, so just use typArray directly instead.
Change-Id: I43fc14e604172df859b3ad9d848d219bbe48e434
Reviewed-on: https://go-review.googlesource.com/30979
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
    var x *X = ...
    defer x.foo()
As part of the defer, we need to calculate &(*X).foo·f. This expression
is the address of the static closure that will call (*X).foo when a
pointer to that closure is used in a call/defer/go. This pointer is not
currently properly typed in SSA. It is a pointer type, but the base
type is nil, not a proper type.
This turns out not to be a problem currently because we never use the
type of these SSA values. But I'm trying to change that (to be able to
spill them) in CL 28391. To fix, use uint8 as the fake type of the
closure.
Change-Id: Ieee388089c9af398ed772ee8c815122c347cb633
Reviewed-on: https://go-review.googlesource.com/29444
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Follow-up to CL 29134. Generated with gofmt -r 'Nod -> nod', plus
three manual adjustments to the comments in syntax/parser.go
Change-Id: I02920f7ab10c70b6e850457b42d5fe35f1f3821a
Reviewed-on: https://go-review.googlesource.com/29136
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
After the removal of the old backend many types are no longer referenced
outside internal/gc. Make these functions private so that tools like
honnef.co/go/unused can spot when they become dead code. In doing so
this CL identified several previously public helpers which are no longer
used, so removes them.
This should be the last of the public functions.
Change-Id: I7e9c4e72f86f391b428b9dddb6f0d516529706c3
Reviewed-on: https://go-review.googlesource.com/29134
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
After the removal of the old backend many types are no longer referenced
outside internal/gc. Make these functions private so that tools like
honnef.co/go/unused can spot when they become dead code. In doing so
this CL identified several previously public helpers which are no longer
used, so removes them.
Change-Id: Idc2d485f493206de9d661bd3cb0ecb4684177b32
Reviewed-on: https://go-review.googlesource.com/29133
Run-TryBot: Dave Cheney <dave@cheney.net>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
- also consistently use %v instead of %s when we have a (gc) Formatter
- rewrite done automatically using Formats test in -u (update) mode
- manual update of format strings that were not single string constants
- updated fmt.go, fmt_test.go accordingly
- fmt_test: permit "%T" always
Change-Id: I8f0704286aba5704600ad0c4a4484005b79b905d
Reviewed-on: https://go-review.googlesource.com/28954
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
As cmd/internal/obj is coordinating the definition of GOOS, GOARCH,
etc across the compiler and linker, turn its functions into globals
and use them everywhere.
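A rough sketch of the shape of the change (simplified; the real names
and defaults live in cmd/internal/obj): a function that consulted the
environment becomes a package-level variable computed once.

    import (
        "os"
        "runtime"
    )

    // GOOS is the target operating system, from $GOOS or the host
    // default (runtime.GOOS stands in for the real default here).
    var GOOS = envOr("GOOS", runtime.GOOS)

    func envOr(key, def string) string {
        if v := os.Getenv(key); v != "" {
            return v
        }
        return def
    }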
Change-Id: I5db5addda3c6b6435c37fd5581c7c3d9a561f492
Reviewed-on: https://go-review.googlesource.com/28854
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Does not pass toolstash -cmp due to changed export data,
but the cmd/go binary (which doesn't contain export data)
is bit-for-bit identical.
Change-Id: I6b12f9de18cf7da528e9207dccbf8f08c969f142
Reviewed-on: https://go-review.googlesource.com/26753
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>