88 Commits

Author SHA1 Message Date
Austin Clements
7de15e362b runtime: atomically set span state and use as publication barrier
When everything is working correctly, any pointer the garbage
collector encounters can only point into a fully initialized heap
span, since the span must have been initialized before that pointer
could escape the heap allocator and become visible to the GC.

However, in various cases, we try to be defensive against bad
pointers. In findObject, this is just a sanity check: we never expect
to find a bad pointer, but programming errors can lead to them. In
spanOfHeap, we don't necessarily trust the pointer and we're trying to
check if it really does point to the heap, though it should always
point to something. Conservative scanning takes this to a new level,
since it can only guess that a word may be a pointer and must then verify this.

In all of these cases, we have a problem that the span lookup and
check can race with span initialization, since the span becomes
visible to lookups before it's fully initialized.

Furthermore, we're about to start initializing the span without the
heap lock held, which is going to introduce races where accesses were
previously protected by the heap lock.

To address this, this CL makes accesses to mspan.state atomic, and
ensures that the span is fully initialized before setting the state to
mSpanInUse. All loads are now atomic, and in any case where we don't
trust the pointer, it first atomically loads the span state and checks
that it's mSpanInUse, after which it will have synchronized with span
initialization and can safely check the other span fields.
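
A minimal sketch of the resulting pattern, with invented names and a
stand-in span type rather than the runtime's mspan:

package main

import (
	"fmt"
	"sync/atomic"
)

// span stands in for the runtime's mspan. Its state is published
// atomically only after every other field is initialized.
type span struct {
	startAddr uintptr
	npages    uintptr
	state     atomic.Uint32 // 0 = dead; 1 = in use (cf. mSpanInUse)
}

const spanInUse = 1

// publish initializes the span, then atomically sets its state. The
// atomic store is the publication barrier: any reader that observes
// spanInUse also sees the fields assigned above it.
func publish(s *span, start, npages uintptr) {
	s.startAddr = start
	s.npages = npages
	s.state.Store(spanInUse)
}

// lookup mirrors the defensive checks in findObject/spanOfHeap:
// atomically load the state first, and only trust the other fields
// after seeing spanInUse.
func lookup(s *span) bool {
	if s.state.Load() != spanInUse {
		return false // not (or not yet) a valid heap span
	}
	return s.npages > 0 // safe to read the other fields now
}

func main() {
	s := new(span)
	publish(s, 0x1000, 4)
	fmt.Println(lookup(s)) // true
}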

For #10958, #24543, but a good fix in general.

Change-Id: I518b7c63555b02064b98aa5f802c92b758fef853
Reviewed-on: https://go-review.googlesource.com/c/go/+/203286
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
2019-10-31 17:09:50 +00:00
Clément Chigot
6453337494 runtime: initialize netpoll in TestNetpollBreak
Netpoll must always be initialized when TestNetpollBreak is launched.
However, when the test is run standalone, that won't be the case, so
initialization must be forced.

Updates: #27707

Change-Id: I28147f3834f3d6aca982c6a298feadc09b55f66e
Reviewed-on: https://go-review.googlesource.com/c/go/+/204058
Run-TryBot: Clément Chigot <clement.chigot%atos.net@gtempaccount.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2019-10-29 17:38:23 +00:00
Ian Lance Taylor
4ec51894ee runtime: force testing calls of netpoll to run on system stack
Fixes #35053

Change-Id: I31853d434610880044c169e0c1e9732f97ff1bdb
Reviewed-on: https://go-review.googlesource.com/c/go/+/202444
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David du Colombier <0intro@gmail.com>
2019-10-22 08:46:40 +00:00
Ian Lance Taylor
50f4896b72 runtime: add netpollBreak
The new netpollBreak function can be used to interrupt a blocking netpoll.
This function is not currently used; it will be used by later CLs.

Updates #27707

Change-Id: I5cb936609ba13c3c127ea1368a49194fc58c9f4d
Reviewed-on: https://go-review.googlesource.com/c/go/+/171824
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
2019-10-21 16:37:45 +00:00
Austin Clements
f18109d7e3 runtime: grow the heap incrementally
Currently, we map and grow the heap a whole arena (64MB) at a time.
Unfortunately, in order to fix #32828, we need to switch from
scavenging inline with allocation back to scavenging on heap growth,
but heap-growth scavenging happens in large jumps because we grow the
heap in large jumps.

In order to prepare for better heap-growth scavenging, this CL
separates mapping more space for the heap from actually "growing" it
(tracking the new space with spans). Instead, growing the heap keeps
track of the "current arena" it's growing into. It tracks that space
with new spans as needed, and only maps more arena space when the
current arena is inadequate. The effect on the user is the same, but
this will let
us scavenge on much smaller increments of heap growth.

There are two slight subtleties to this change:

1. If an allocation requires mapping a new arena and that new arena
   isn't contiguous with the current arena, we don't want to lose the
   unused space in the current arena, so we have to immediately track
   that with a span.

2. The mapped space must be accounted as released and idle, even
   though it isn't actually tracked in a span.
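
A toy sketch of the "current arena" bookkeeping (curBase, sysMap, and
the rest are invented for illustration; the runtime's grow is far more
involved):

package main

import "fmt"

const arenaSize = 64 << 20 // the 64MB arena granularity

// heap keeps a "current arena" frontier: address space already mapped
// from the OS but not yet handed out as spans.
type heap struct {
	curBase, curEnd uintptr
	nextArena       uintptr // where our pretend sysReserve maps next
}

// sysMap stands in for mapping a fresh arena from the OS.
func (h *heap) sysMap() (base, end uintptr) {
	base = h.nextArena
	h.nextArena += arenaSize
	return base, base + arenaSize
}

// grow hands out nbytes of heap, mapping a new arena only when the
// current one can't satisfy the request.
func (h *heap) grow(nbytes uintptr) (base uintptr) {
	if h.curBase+nbytes > h.curEnd {
		// Subtlety 1 above: if the new arena weren't contiguous,
		// the real runtime would first track the leftover space
		// [curBase, curEnd) with a span so it isn't lost. Elided.
		h.curBase, h.curEnd = h.sysMap()
	}
	base = h.curBase
	h.curBase += nbytes
	return base
}

func main() {
	h := heap{nextArena: 0x40000000}
	fmt.Printf("%#x\n", h.grow(1<<20)) // maps arena 0, returns its base
	fmt.Printf("%#x\n", h.grow(1<<20)) // served from the same arena
}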

For #32828: this makes heap-growth scavenging far more
effective, especially at small heap sizes. For example, this change is
necessary for TestPhysicalMemoryUtilization to pass once we remove
inline scavenging.

Change-Id: I300e74a0534062467e4ce91cdc3508e5ef9aa73a
Reviewed-on: https://go-review.googlesource.com/c/go/+/189957
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
2019-09-25 22:17:21 +00:00
Cherry Zhang
2bcbe6a4b6 runtime: add a test for getg with thread switch
With gccgo, if we generate getg inlined, the backend may cache
the address of the TLS variable, which will become invalid after
a thread switch.

Currently there is no known bug caused by this. But if we don't
implement this carefully, we may get subtle bugs. This CL adds a
test that will fail loudly if this is wrong. (See also
https://go.googlesource.com/gofrontend/+/refs/heads/master/libgo/runtime/proc.c#333
and an incorrect attempt CL 185337.)

Note: at least on Linux/AMD64, even with an incorrect
implementation, this only fails if the test is compiled with
-fPIC, which is not the default setting for the gccgo test suite. So
some manual work is needed. Maybe we could extend the test suite
to run the runtime test with more settings (e.g. PIC and static).

Change-Id: I459a3b4c31f09b9785c0eca19b7756f80e8ef54c
Reviewed-on: https://go-review.googlesource.com/c/go/+/186357
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2019-07-16 20:53:01 +00:00
Michael Anthony Knyszek
7ed7669c0d runtime: ensure mheap lock stack growth invariant is maintained
Currently there's an invariant in the runtime wherein the heap lock
can only be acquired on the system stack, otherwise a self-deadlock
could occur if the stack grows while the lock is held.

This invariant is upheld and documented in a number of situations (e.g.
allocManual, freeManual), but there are other places where the invariant
is either not maintained at all, risking self-deadlock (e.g.
setGCPercent, gcResetMarkState, allocmcache), or is maintained but
undocumented (e.g. gcSweep, readGCStats_m).

This change adds go:systemstack to any function that acquires the heap
lock or adds a systemstack(func() { ... }) around the critical section,
where appropriate. It also documents the invariant on (*mheap).lock
directly and updates repetitive documentation to refer to that comment.
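
The shape of the pattern, sketched with a stand-in systemstack (in the
runtime it switches to the g0 stack, which never grows; here it just
calls fn):

package main

import "sync"

// systemstack stands in for runtime.systemstack.
func systemstack(fn func()) { fn() }

var heapLock sync.Mutex // stand-in for mheap_.lock

// withHeapLock shows the pattern applied to functions like
// setGCPercent and allocmcache: take the heap lock only inside a
// system-stack critical section so stack growth can't self-deadlock.
func withHeapLock(critical func()) {
	systemstack(func() {
		heapLock.Lock()
		critical()
		heapLock.Unlock()
	})
}

func main() {
	withHeapLock(func() { /* touch heap structures */ })
}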

Fixes #32105.

Change-Id: I702b1290709c118b837389c78efde25c51a2cafb
Reviewed-on: https://go-review.googlesource.com/c/go/+/177857
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
2019-05-24 15:34:57 +00:00
Michael Anthony Knyszek
f4a5ae5594 runtime: track the number of free unscavenged huge pages
This change tracks the number of potential free and unscavenged huge
pages which will be used to inform the rate at which scavenging should
occur.

For #30333.

Change-Id: I47663e5ffb64cac44ffa10db158486783f707479
Reviewed-on: https://go-review.googlesource.com/c/go/+/170860
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-05-06 20:59:20 +00:00
Michael Anthony Knyszek
a62b5723be runtime: scavenge huge spans first
This change adds two new treap iteration types: one for large
unscavenged spans (contain at least one huge page) and one for small
unscavenged spans. This allows us to scavenge the huge spans first:
we iterate over the large ones, then the small ones.

Also, since we now depend on physHugePageSize being a power of two,
ensure that that's the case when it's retrieved from the OS.

For #30333.

Change-Id: I51662740205ad5e4905404a0856f5f2b2d2a5680
Reviewed-on: https://go-review.googlesource.com/c/go/+/174399
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-05-06 20:57:39 +00:00
Michael Anthony Knyszek
fa8470a8cd runtime: make treap iteration more efficient
This change introduces a treapIterFilter type which represents the
power set of states described by a treapIterType.

This change then adds a treapIterFilter field to each treap node
indicating the types of spans that live in that subtree. The field is
maintained via the same mechanism used to maintain maxPages. This allows
pred, succ, start, and end to be judicious about which subtrees they
visit, ensuring that iteration avoids traversing irrelevant territory.

Without this change, repeated scavenging attempts can end up being O(N^2)
as the scavenger walks over what it already scavenged before finding new
spans available for scavenging.

Finally, this change also only scavenges a span once it is removed from
the treap. There was always an invariant that spans owned by the treap
may not be mutated in-place, but with this change, violating that
invariant can cause issues with scavenging.
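
A toy of the subtree-filter idea (a plain binary tree instead of a
treap, and a two-bit type set; all names are invented):

package main

import "fmt"

type iterType uint8

const (
	typeSmall iterType = 1 << iota // small unscavenged span
	typeHuge                       // span containing a huge page
)

// filter is the union of iterTypes present anywhere in a subtree.
type filter uint8

type node struct {
	typ         iterType
	f           filter
	left, right *node
}

// update recomputes the node's filter from its children, maintained
// the same way the runtime maintains maxPages.
func (n *node) update() {
	n.f = filter(n.typ)
	if n.left != nil {
		n.f |= n.left.f
	}
	if n.right != nil {
		n.f |= n.right.f
	}
}

// visit walks only subtrees whose filter intersects want, so the
// iteration never traverses irrelevant territory.
func visit(n *node, want filter, fn func(iterType)) {
	if n == nil || n.f&want == 0 {
		return // nothing relevant anywhere below: skip the subtree
	}
	visit(n.left, want, fn)
	if filter(n.typ)&want != 0 {
		fn(n.typ)
	}
	visit(n.right, want, fn)
}

func main() {
	root := &node{typ: typeSmall,
		left:  &node{typ: typeHuge},
		right: &node{typ: typeSmall},
	}
	root.left.update()
	root.right.update()
	root.update()
	visit(root, filter(typeHuge), func(iterType) { fmt.Println("huge span") })
}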

For #30333.

Change-Id: I8040b997e21c94a8d3d9c8c6accfe23cebe0c3d3
Reviewed-on: https://go-review.googlesource.com/c/go/+/174878
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-05-06 20:50:16 +00:00
Michael Anthony Knyszek
9baa4301cf runtime: merge all treaps into one implementation
This change modifies the treap implementation to support holding all
spans in a single treap, instead of keeping them all in separate treaps.

This improves ergonomics for nearly all treap-related callsites.
With that said, iteration is now more expensive, but it never occurs on
the fast path, only on scavenging-related paths.

This change opens up the opportunity for further optimizations, such as
splitting spans without treap removal (taking treap removal off the span
allocator's critical path) as well as improvements to treap iteration
(building linked lists for each iteration type and managing them on
insert/removal, since those operations should be less frequent).

For #30333.

Change-Id: I3dac97afd3682a37fda09ae8656a770e1369d0a9
Reviewed-on: https://go-review.googlesource.com/c/go/+/174398
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-05-06 20:19:44 +00:00
Michael Anthony Knyszek
40036a99a0 runtime: change the span allocation policy to first-fit
This change modifies the treap implementation to be address-ordered
instead of size-ordered, and further augments it so it may be used for
allocation. It then modifies the find method to implement a first-fit
allocation policy.

This change to the treap implementation consequently makes it so that
spans are scavenged in highest-address-first order without any
additional changes to the scavenging code. Because the treap itself is
now address ordered, and the scavenging code iterates over it in
reverse, the highest address is now chosen instead of the largest span.

This change also renames the now wrongly-named "scavengeLargest" method
on mheap to just "scavengeLocked" and also fixes up logic in that method
which made assumptions about size.
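
A sketch of first-fit over an address-ordered tree, pruning with a
maxPages augmentation (illustrative only; the treap's find differs in
detail):

package main

import "fmt"

// treapNode is address-ordered: everything under left has a lower
// base address. maxPages is the largest span in the subtree.
type treapNode struct {
	base, npages, maxPages uintptr
	left, right            *treapNode
}

// findFirstFit returns the lowest-addressed span with at least npages,
// or nil -- the first-fit policy this CL adopts.
func (t *treapNode) findFirstFit(npages uintptr) *treapNode {
	if t == nil || t.maxPages < npages {
		return nil // no span in this subtree is big enough
	}
	if n := t.left.findFirstFit(npages); n != nil {
		return n // prefer lower addresses
	}
	if t.npages >= npages {
		return t
	}
	return t.right.findFirstFit(npages)
}

func main() {
	// A small span at a low address and a big one higher up.
	tree := &treapNode{base: 0x2000, npages: 1, maxPages: 8,
		right: &treapNode{base: 0x8000, npages: 8, maxPages: 8}}
	fmt.Printf("%#x\n", tree.findFirstFit(4).base) // 0x8000
}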

For #30333.

Change-Id: I94b6f3209211cc1bfdc8cdaea04152a232cfbbb4
Reviewed-on: https://go-review.googlesource.com/c/go/+/164101
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-05-01 14:50:40 +00:00
Michael Anthony Knyszek
d13a9312f5 runtime: add tests for runtime mTreap
This change exports the runtime mTreap in export_test.go and then adds a
series of tests which check that the invariants of the treap are
maintained under different operations. These tests also include tests
for the treap iterator type.

Also, we note that the find() operation on the treap never actually was
best-fit, so the tests just ensure that it returns an appropriately
sized span.

For #30333.

Change-Id: If81f7c746dda6677ebca925cb0a940134701b894
Reviewed-on: https://go-review.googlesource.com/c/go/+/164100
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-04-10 22:00:53 +00:00
Andrei Vagin
4166ff42c0 runtime: preempt a goroutine which calls a lot of short system calls
A goroutine should be preempted if it runs for 10ms without blocking.
We found that this doesn't work for goroutines which call short system calls.

For example, the following program can get stuck for seconds without this fix:

$ cat main.go
package main

import (
	"runtime"
	"syscall"
)

func main() {
	runtime.GOMAXPROCS(1)
	c := make(chan int)
	go func() {
		c <- 1
		for {
			t := syscall.Timespec{
				Nsec: 300,
			}
			if true {
				syscall.Nanosleep(&t, nil)
			}
		}
	}()
	<-c
}

$ time go run main.go

real	0m8.796s
user	0m0.367s
sys	0m0.893s

Updates #10958

Change-Id: Id3be54d3779cc28bfc8b33fe578f13778f1ae2a2
Reviewed-on: https://go-review.googlesource.com/c/go/+/170138
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-04-09 07:45:26 +00:00
Michael Anthony Knyszek
4b3f04c63b runtime: make mTreap iterator bidirectional
This change makes mTreap's iterator type, treapIter, bidirectional
instead of unidirectional. This change helps support moving the find
operation on a treap to return an iterator instead of a treapNode, in
order to hide the details of the treap when accessing elements.

For #28479.

Change-Id: I5dbea4fd4fb9bede6e81bfd089f2368886f98943
Reviewed-on: https://go-review.googlesource.com/c/156918
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-01-10 18:15:48 +00:00
Michael Anthony Knyszek
3651476075 runtime: add iterator abstraction for mTreap
This change adds the treapIter type which provides an iterator
abstraction for walking over an mTreap. In particular, the mTreap type
now has iter() and rev() for iterating both forwards (smallest to
largest) and backwards (largest to smallest). It also has an erase()
method for erasing elements at the iterator's current position.

For #28479.

While the expectation is that this change will slow down Go programs,
the impact on Go1 and Garbage is negligible.

Go1:     https://perf.golang.org/search?q=upload:20181214.6
Garbage: https://perf.golang.org/search?q=upload:20181214.11

Change-Id: I60dbebbbe73cbbe7b78d45d2093cec12cc0bc649
Reviewed-on: https://go-review.googlesource.com/c/151537
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
2018-12-17 23:28:18 +00:00
Keith Randall
df2bb9817b runtime: during map delete, update entries after new last element
When we delete an element, and it was the last element in the bucket,
update the slots between the new last element and the old last element
with the marker that says "no more elements beyond here".

Change-Id: I8efeeddf4c9b9fc491c678f84220a5a5094c9c76
Reviewed-on: https://go-review.googlesource.com/c/142438
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2018-11-13 21:24:57 +00:00
Michael Anthony Knyszek
61d40c8abc runtime: extend ReadMemStatsSlow to re-compute HeapReleased
This change extends the test function ReadMemStatsSlow to re-compute
the HeapReleased statistic such that it is checked in testing to be
consistent with the bookkeeping done in the runtime.

Change-Id: I49f5c2620f5731edea8e9f768744cf997dcd7c22
Reviewed-on: https://go-review.googlesource.com/c/142397
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2018-10-30 15:28:07 +00:00
Austin Clements
78561c4ae9 runtime: handle g0 stack overflows gracefully
Currently, if the runtime overflows the g0 stack on Windows, it leads
to an infinite recursion:

1. Something overflows the g0 stack bounds and calls morestack.

2. morestack determines it's on the g0 stack and hence cannot grow the
stack, so it calls badmorestackg0 (which prints "fatal: morestack on
g0") followed by abort.

3. abort performs an INT $3, which turns into a Windows
_EXCEPTION_BREAKPOINT exception.

4. This enters the Windows sigtramp, which ensures we're on the g0
stack and calls exceptionhandler.

5. exceptionhandler has a stack check prologue, so it determines that
it's out of stack and calls morestack.

6. goto 2

Fix this by making the exception handler avoid stack checks until it
has ruled out an abort and by blowing away the stack bounds in
lastcontinuehandler before we print the final fatal traceback (which
itself involves a lot of stack bounds checks).

Fixes #21382.

Change-Id: Ie66e91f708e18d131d97f22b43f9ac26f3aece5a
Reviewed-on: https://go-review.googlesource.com/120857
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
2018-07-07 14:44:11 +00:00
Ian Lance Taylor
f43aa1df70 runtime: throw if the runtime panics with out of bounds index
If the runtime code panics due to a bad index or slice expression,
then throw instead of panicing. This will skip calls to recover and dump
the entire runtime stack trace. The runtime should never panic due to
an out of bounds index, and this will help with debugging if it does.

For #24991
Updates #25201

Change-Id: I85a9feded8f0de914ee1558425931853223c0514
Reviewed-on: https://go-review.googlesource.com/121515
Reviewed-by: Austin Clements <austin@google.com>
2018-06-29 21:29:17 +00:00
Austin Clements
c5ed10f3be runtime: support for debugger function calls
This adds a mechanism for debuggers to safely inject calls to Go
functions on amd64. Debuggers must participate in a protocol with the
runtime, and need to know how to lay out a call frame, but the runtime
support takes care of the details of handling live pointers in
registers, stack growth, and detecting the trickier conditions when it
is unsafe to inject a user function call.

Fixes #21678.
Updates derekparker/delve#119.

Change-Id: I56d8ca67700f1f77e19d89e7fc92ab337b228834
Reviewed-on: https://go-review.googlesource.com/109699
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2018-05-22 15:55:05 +00:00
Martin Möhrmann
4ebc67d334 runtime: remove hmap field from maptypes
The hmap field in the maptype is only used by the runtime to check that
the sizes of the hmap structure created by the compiler and the runtime
agree.

Comments are already present about the hmap structure definitions in the
compiler and runtime needing to be in sync.

Add a test that checks that the runtime's hmap size is as expected, to
detect when the compiler's and runtime's hmap sizes diverge, instead of
checking this at runtime when a map is created.

Change-Id: I974945ebfdb66883a896386a17bbcae62a18cf2a
Reviewed-on: https://go-review.googlesource.com/91796
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2018-05-06 05:46:06 +00:00
Cherry Zhang
22f4280b9a runtime: remove the dummy arg of getcallersp
getcallersp is intrinsified, and so the dummy arg is no longer
needed. Remove it, as well as a few dummy args that are solely
to feed getcallersp.

Change-Id: Ibb6c948ff9c56537042b380ac3be3a91b247aaa6
Reviewed-on: https://go-review.googlesource.com/109596
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2018-04-26 18:57:20 +00:00
Cherry Zhang
f83e421268 cmd/internal/obj/arm, runtime: delete old ARM softfloat code
CL 106735 switched to the new softfloat support on GOARM=5.

ARM assembly code that uses FP instructions not guarded on GOARM,
if any, will break. The easiest way to fix this is probably to use a Go
implementation on GOARM=5, like:

	MOVB	runtime·goarm(SB), R11
	CMP	$5, R11
	BEQ	arm5
	... FP instructions ...
	RET
arm5:
	CALL or JMP to Go implementation

Change-Id: I52fc76fac9c854ebe7c6c856c365fba35d3f560a
Reviewed-on: https://go-review.googlesource.com/107475
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2018-04-17 18:27:55 +00:00
Josh Bleecher Snyder
031f71efdf runtime: add TestSizeof
Borrowed from cmd/compile, TestSizeof ensures
that the size of important types doesn't change unexpectedly.
It also helps reviewers see the impact of intended changes.
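
The shape of such a test, sketched with a made-up type (the real test
pins runtime types like g and mspan):

package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

// gobufLike is a stand-in for a runtime type whose size we pin.
type gobufLike struct {
	sp, pc, g uintptr
}

func main() {
	ptrSize := unsafe.Sizeof(uintptr(0))
	want := 3 * ptrSize // expected size; update deliberately if it grows
	if got := reflect.TypeOf(gobufLike{}).Size(); got != want {
		fmt.Printf("gobufLike size: got %d, want %d\n", got, want)
	}
}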

Change-Id: If57955f0c3e66054de3f40c6bba585b88694c7be
Reviewed-on: https://go-review.googlesource.com/99837
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-09 17:03:25 +00:00
Austin Clements
51ae88ee2f runtime: remove non-reserved heap logic
Currently large sysReserve calls on some OSes don't actually reserve
the memory, but just check that it can be reserved. This was important
when we called sysReserve to "reserve" many gigabytes for the heap up
front, but now that we map memory in small increments as we need it,
this complication is no longer necessary.

This has one curious side benefit: currently, on Linux, allocations
that are large enough to be rejected by mmap wind up freezing the
application for a long time before it panics. This happens because
sysReserve doesn't reserve the memory, so sysMap calls mmap_fixed,
which calls mmap, which fails because the mapping is too large.
However, mmap_fixed doesn't inspect *why* mmap fails, so it falls back
to probing every page in the desired region individually with mincore
before performing an (otherwise dangerous) MAP_FIXED mapping, which
will also fail. This takes a long time for a large region. Now this
logic is gone, so the mmap failure leads to an immediate panic.

Updates #10460.

Change-Id: I8efe88c611871cdb14f99fadd09db83e0161ca2e
Reviewed-on: https://go-review.googlesource.com/85888
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2018-02-15 21:12:24 +00:00
Austin Clements
2b415549b8 runtime: use sparse mappings for the heap
This replaces the contiguous heap arena mapping with a potentially
sparse mapping that can support heap mappings anywhere in the address
space.

This has several advantages over the current approach:

* There is no longer any limit on the size of the Go heap. (Currently
  it's limited to 512GB.) Hence, this fixes #10460.

* It eliminates many failure modes of heap initialization and
  growing. In particular it eliminates any possibility of panicking
  with an address space conflict. This can happen for many reasons and
  even causes a low but steady rate of TSAN test failures because of
  conflicts with the TSAN runtime. See #16936 and #11993.

* It eliminates the notion of "non-reserved" heap, which was added
  because creating huge address space reservations (particularly on
  64-bit) led to huge process VSIZE. This was at best confusing and at
  worst conflicted badly with ulimit -v. However, the non-reserved
  heap logic is complicated, can race with other mappings in non-pure
  Go binaries (e.g., #18976), and requires that the entire heap be
  either reserved or non-reserved. We currently maintain the latter
  property, but it's quite difficult to convince yourself of that, and
  hence difficult to keep correct. This logic is still present, but
  will be removed in the next CL.

* It fixes problems on 32-bit where skipping over parts of the address
  space leads to mapping huge (and never-to-be-used) metadata
  structures. See #19831.

This also completely rewrites and significantly simplifies
mheap.sysAlloc, which has been a source of many bugs. E.g., #21044,
#20259, #18651, and #13143 (and maybe #23222).

This change also makes it possible to allocate individual objects
larger than 512GB. As a result, a few tests that expected huge
allocations to fail needed to be changed to make even larger
allocations. However, at the moment attempting to allocate a humongous
object may cause the program to freeze for several minutes on Linux as
we fall back to probing every page with addrspace_free. That logic
(and this failure mode) will be removed in the next CL.
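
The sparse mapping in miniature: arenas are found by index, not by
offset into one big reservation (a Go map stands in for the runtime's
arena index; constants and names are illustrative):

package main

import "fmt"

const heapArenaBytes = 64 << 20 // 64MB arenas

// heapArena would hold the bitmap and span index for one arena.
type heapArena struct{ spans [heapArenaBytes / 8192]uint16 }

// arenas is sparse: only arenas that actually back heap memory are
// present, so the heap can live anywhere in the address space.
var arenas = map[uintptr]*heapArena{}

func arenaIndex(p uintptr) uintptr { return p / heapArenaBytes }

func main() {
	p := uintptr(0x40001000)
	arenas[arenaIndex(p)] = new(heapArena) // "map" this arena
	if arenas[arenaIndex(p)] != nil {
		fmt.Println("pointer falls in a mapped arena")
	}
	if arenas[arenaIndex(0x10)] == nil {
		fmt.Println("low address: no arena, not a heap pointer")
	}
}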

Fixes #10460.
Fixes #22204 (since it rewrites the code involved).

This slightly slows down compilebench and the x/benchmarks garbage
benchmark.

name       old time/op     new time/op     delta
Template       184ms ± 1%      185ms ± 1%    ~     (p=0.065 n=10+9)
Unicode       86.9ms ± 3%     86.3ms ± 1%    ~     (p=0.631 n=10+10)
GoTypes        599ms ± 0%      602ms ± 0%  +0.56%  (p=0.000 n=10+9)
Compiler       2.87s ± 1%      2.89s ± 1%  +0.51%  (p=0.002 n=9+10)
SSA            7.29s ± 1%      7.25s ± 1%    ~     (p=0.182 n=10+9)
Flate          118ms ± 2%      118ms ± 1%    ~     (p=0.113 n=9+9)
GoParser       147ms ± 1%      148ms ± 1%  +1.07%  (p=0.003 n=9+10)
Reflect        401ms ± 1%      404ms ± 1%  +0.71%  (p=0.003 n=10+9)
Tar            175ms ± 1%      175ms ± 1%    ~     (p=0.604 n=9+10)
XML            209ms ± 1%      210ms ± 1%    ~     (p=0.052 n=10+10)

(https://perf.golang.org/search?q=upload:20171231.4)

name                       old time/op  new time/op  delta
Garbage/benchmem-MB=64-12  2.23ms ± 1%  2.25ms ± 1%  +0.84%  (p=0.000 n=19+19)

(https://perf.golang.org/search?q=upload:20171231.3)

Relative to the start of the sparse heap changes (starting at and
including "runtime: fix various contiguous bitmap assumptions"),
overall slowdown is roughly 1% on GC-intensive benchmarks:

name        old time/op     new time/op     delta
Template        183ms ± 1%      185ms ± 1%  +1.32%  (p=0.000 n=9+9)
Unicode        84.9ms ± 2%     86.3ms ± 1%  +1.65%  (p=0.000 n=9+10)
GoTypes         595ms ± 1%      602ms ± 0%  +1.19%  (p=0.000 n=9+9)
Compiler        2.86s ± 0%      2.89s ± 1%  +0.91%  (p=0.000 n=9+10)
SSA             7.19s ± 0%      7.25s ± 1%  +0.75%  (p=0.000 n=8+9)
Flate           117ms ± 1%      118ms ± 1%  +1.10%  (p=0.000 n=10+9)
GoParser        146ms ± 2%      148ms ± 1%  +1.48%  (p=0.002 n=10+10)
Reflect         398ms ± 1%      404ms ± 1%  +1.51%  (p=0.000 n=10+9)
Tar             173ms ± 1%      175ms ± 1%  +1.17%  (p=0.000 n=10+10)
XML             208ms ± 1%      210ms ± 1%  +0.62%  (p=0.011 n=10+10)
[Geo mean]      369ms           373ms       +1.17%

(https://perf.golang.org/search?q=upload:20180101.2)

name                       old time/op  new time/op  delta
Garbage/benchmem-MB=64-12  2.22ms ± 1%  2.25ms ± 1%  +1.51%  (p=0.000 n=20+19)

(https://perf.golang.org/search?q=upload:20180101.3)

Change-Id: I5daf4cfec24b252e5a57001f0a6c03f22479d0f0
Reviewed-on: https://go-review.googlesource.com/85887
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2018-02-15 21:12:23 +00:00
Martin Möhrmann
fbfc2031a6 cmd/compile: specialize map creation for small hint sizes
Handle make(map[any]any) and make(map[any]any, hint) where
hint <= BUCKETSIZE specially, to allow for faster map initialization
and to improve binary size by using runtime calls with fewer arguments.

When the hint is smaller than or equal to BUCKETSIZE,
overLoadFactor(hint, 0) is false and no buckets would be allocated by makemap:
* If hmap needs to be allocated on the stack then only hmap's hash0
  field needs to be initialized and no call to makemap is needed.
* If hmap needs to be allocated on the heap then a new special
  makehmap function will allocate hmap and initialize hmap's
  hash0 field.
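
What the special case means at call sites (a sketch; the stack/heap
split depends on escape analysis):

package main

func main() {
	m1 := make(map[string]int)    // no hint: if the hmap doesn't escape,
	                              // only its hash0 field is initialized
	m2 := make(map[string]int, 8) // hint <= 8: same fast path, no buckets yet
	m3 := make(map[string]int, 9) // hint > 8: full makemap allocates buckets

	m1["a"] = 1 // m1 and m2 get their buckets lazily, on first insert
	m2["b"] = 2
	m3["c"] = 3
}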

Reduces the size of the godoc binary by ~36kb.

AMD64
name         old time/op    new time/op    delta
NewEmptyMap    16.6ns ± 2%     5.5ns ± 2%  -66.72%  (p=0.000 n=10+10)
NewSmallMap    64.8ns ± 1%    56.5ns ± 1%  -12.75%  (p=0.000 n=9+10)

Updates #6853

Change-Id: I624e90da6775afaa061178e95db8aca674f44e9b
Reviewed-on: https://go-review.googlesource.com/61190
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-11-02 17:03:45 +00:00
Austin Clements
15d6ab69fb runtime: make systemstack tail call if already switched
Currently systemstack always calls its argument, even if we're already
on the system stack. Unfortunately, traceback with _TraceJump stops at
the first systemstack it sees, which often cuts off runtime stacks
early in profiles.

Fix this by performing a tail call if we're already on the system
stack. This eliminates it from the traceback entirely, so it won't
stop prematurely (or all get mushed into a single node in the profile
graph).

Change-Id: Ibc69e8765e899f8d3806078517b8c7314da196f4
Reviewed-on: https://go-review.googlesource.com/74050
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2017-10-30 16:33:55 +00:00
Austin Clements
c85b12b579 runtime: make LockOSThread/UnlockOSThread nested
Currently, there is a single bit for LockOSThread, so two calls to
LockOSThread followed by one call to UnlockOSThread will unlock the
thread. There's evidence (#20458) that this is almost never what
people want or expect and it makes these APIs very hard to use
correctly or reliably.

Change this so LockOSThread/UnlockOSThread can be nested and the
calling goroutine will not be unwired until UnlockOSThread has been
called as many times as LockOSThread has. This should fix the vast
majority of incorrect uses while having no effect on the vast majority
of correct uses.

Fixes #20458.

Change-Id: I1464e5e9a0ea4208fbb83638ee9847f929a2bacb
Reviewed-on: https://go-review.googlesource.com/45752
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-10-05 19:50:23 +00:00
Keith Randall
dbe3522c7f runtime: fix hashmap load factor computation
overLoadFactor wasn't really doing what it says it does.
It was reporting overOrEqualToLoadFactor.  That's actually what we
want when adding an entry to a map, but it isn't what we want when
constructing a map in the first place.

The impetus for this change is that if you make a map with a hint
of exactly 8 (which happens, for example, with the unitMap in
time/format.go), we allocate 2 buckets for it instead of 1.

Instead, make overLoadFactor really report when it is > the max
allowed load factor, not >=.  Adjust the callers who want to ensure
that the map is no more than the max load factor after an insertion
by adding a +1 to the current (pre-addition) size.
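
A sketch of the corrected predicate and the +1 at insertion sites (the
constants mirror the 6.5 load factor written as 13/2; treat the exact
code as illustrative):

package main

import "fmt"

const (
	bucketCnt     = 8  // entries per bucket
	loadFactorNum = 13 // max load factor is 13/2 = 6.5 entries/bucket
	loadFactorDen = 2
)

// overLoadFactor reports whether count items across 1<<B buckets are
// strictly over the max load factor: ">", not ">=".
func overLoadFactor(count int, B uint8) bool {
	return count > bucketCnt &&
		uint64(count) > loadFactorNum*((uint64(1)<<B)/loadFactorDen)
}

func main() {
	// A hint of exactly 8 now fits in one bucket (B=0).
	fmt.Println(overLoadFactor(8, 0)) // false
	// Inserting callers check the post-addition count, hence the +1.
	count, B := 13, 1 // 13 == 6.5 * 2 buckets: exactly at the limit
	fmt.Println(overLoadFactor(count+1, uint8(B))) // true: time to grow
}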

Change-Id: Ie8d85344800a9a870036b637b1031ddd9e4b93f9
Reviewed-on: https://go-review.googlesource.com/61053
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
2017-09-02 05:44:23 +00:00
Martin Möhrmann
2a56b023af runtime: specialize memhash32 and memhash64
AMD64 with AES support disabled:
name                old time/op    new time/op    delta
MapPopulate/1         78.0ns ± 1%    75.5ns ± 1%   -3.17%  (p=0.000 n=10+9)
MapPopulate/10         764ns ± 2%     673ns ± 2%  -11.91%  (p=0.000 n=10+10)
MapPopulate/100       9.52µs ± 1%    8.54µs ± 1%  -10.37%  (p=0.000 n=8+10)
MapPopulate/1000       116µs ± 2%     103µs ± 1%  -10.40%  (p=0.000 n=10+8)
MapPopulate/10000     1.01ms ± 1%    0.90ms ± 1%  -10.70%  (p=0.000 n=10+10)
MapPopulate/100000    9.81ms ± 1%    8.67ms ± 2%  -11.54%  (p=0.000 n=10+10)

386 with AES support disabled:
name                old time/op    new time/op    delta
MapPopulate/1         95.3ns ± 1%    90.6ns ± 1%  -4.95%  (p=0.000 n=10+9)
MapPopulate/10         983ns ± 2%     912ns ± 1%  -7.18%  (p=0.000 n=10+10)
MapPopulate/100       11.9µs ± 2%    11.2µs ± 1%  -6.01%  (p=0.000 n=10+10)
MapPopulate/1000       140µs ± 1%     131µs ± 1%  -6.19%  (p=0.000 n=10+10)
MapPopulate/10000     1.26ms ± 2%    1.18ms ± 1%  -5.93%  (p=0.000 n=9+10)
MapPopulate/100000    12.1ms ± 2%    11.4ms ± 1%  -5.48%  (p=0.000 n=10+10)

Fixes #21539

Change-Id: Ice128c947c9a6a294800d6a5250d82045eb70b55
Reviewed-on: https://go-review.googlesource.com/59352
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-08-28 19:55:27 +00:00
Austin Clements
80832974ac runtime: make rwmutex work on Ms instead of Gs
Currently runtime.rwmutex is written to block the calling goroutine
rather than the calling thread. However, rwmutex was intended to be
used in the scheduler, which means it needs to be a thread-level
synchronization primitive.

Hence, this modifies rwmutex to synchronize threads instead of
goroutines. This has the consequence of making it write-barrier-free,
which is also important for using it in the scheduler.

The implementation makes three changes: it replaces the "w" semaphore
with a mutex, since this was all it was being used for anyway; it
replaces "writerSem" with a single pending M that parks on its note;
and it replaces "readerSem" with a list of Ms that park on their notes
plus a pass count that together emulate a counting semaphore. I
model-checked the safety and liveness of this implementation through
>1 billion schedules.

For #20738.

Change-Id: I3cf5a18c266a96a3f38165083812803510217787
Reviewed-on: https://go-review.googlesource.com/47071
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2017-06-28 22:08:57 +00:00
Ian Lance Taylor
09ebbf4085 runtime: add read/write mutex type
This is a runtime version of sync.RWMutex that can be used by code in
the runtime package. The type is not quite the same, in that the zero
value is not valid.

For future use by CL 43713.

Updates #19546

Change-Id: I431eb3688add16ce1274dab97285f555b72735bf
Reviewed-on: https://go-review.googlesource.com/45991
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2017-06-19 17:40:38 +00:00
Austin Clements
1a033b1a70 runtime: separate spans of noscan objects
Currently, we mix objects with pointers and objects without pointers
("noscan" objects) together in memory. As a result, for every object
we grey, we have to check that object's heap bits to find out if it's
noscan, which adds to the per-object cost of GC. This also hurts the
TLB footprint of the garbage collector because it decreases the
density of scannable objects at the page level.

This commit improves the situation by using separate spans for noscan
objects. This will allow a much simpler noscan check (in a follow up
CL), eliminate the need to clear the bitmap of noscan objects (in a
follow up CL), and improves TLB footprint by increasing the density of
scannable objects.
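
The simpler check this enables can be sketched as the span-class
encoding the runtime later adopted (details illustrative):

package main

import "fmt"

// spanClass packs a size class and a noscan bit: sizeclass<<1 | noscan.
type spanClass uint8

func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
	sc := spanClass(sizeclass) << 1
	if noscan {
		sc |= 1
	}
	return sc
}

// noscan is a one-bit test on the span itself -- no need to consult
// the object's heap bits during marking.
func (sc spanClass) noscan() bool { return sc&1 != 0 }

func main() {
	sc := makeSpanClass(5, true)
	fmt.Println(sc.noscan()) // true: the GC can skip scanning this span
}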

This is also a step toward eliminating dead bits, since the current
noscan check depends on checking the dead bit of the first word.

This has no effect on the heap size of the garbage benchmark.

We'll measure the performance change of this after the follow-up
optimizations.

This is a cherry-pick from dev.garbage commit d491e550c3. The only
non-trivial merge conflict was in updatememstats in mstats.go, where
we now have to separate the per-spanclass stats from the per-sizeclass
stats.

Change-Id: I13bdc4869538ece5649a8d2a41c6605371618e40
Reviewed-on: https://go-review.googlesource.com/41251
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2017-04-28 22:50:31 +00:00
Austin Clements
13ae271d5d runtime: introduce a type for lfstacks
The lfstack API is still a C-style API: lfstacks all have unhelpful
type uint64 and the APIs are package-level functions. Make the code
more readable and Go-style by creating an lfstack type with methods
for push, pop, and empty.
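
The resulting API shape, sketched single-threaded (the runtime's
version is lock-free, CASing a packed uint64 head; that machinery is
elided here):

package main

import "fmt"

type lfnode struct {
	next *lfnode
	val  int
}

// lfstack replaces package-level functions on a bare uint64 with a
// named type and methods.
type lfstack struct{ head *lfnode }

func (s *lfstack) push(n *lfnode) { n.next, s.head = s.head, n }

func (s *lfstack) pop() *lfnode {
	n := s.head
	if n != nil {
		s.head = n.next
	}
	return n
}

func (s *lfstack) empty() bool { return s.head == nil }

func main() {
	var s lfstack
	s.push(&lfnode{val: 1})
	s.push(&lfnode{val: 2})
	fmt.Println(s.pop().val, s.empty()) // 2 false
}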

Change-Id: I64685fa3be0e82ae2d1a782a452a50974440a827
Reviewed-on: https://go-review.googlesource.com/38290
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2017-03-19 22:42:24 +00:00
Austin Clements
4b8f41daa6 runtime: print user stack on other threads during GOTRACEBACK=crash
Currently, when printing tracebacks of other threads during
GOTRACEBACK=crash, if the thread is on the system stack we print only
the header for the user goroutine and fail to print its stack. This
happens because we passed the g0 to traceback instead of curg. The g0
never has anything set in its gobuf, so traceback doesn't print
anything.

Fix this by passing _g_.m.curg to traceback instead of the g0.

Fixes #19494.

Change-Id: Idfabf94d6a725e9cdf94a3923dead6455ef3b217
Reviewed-on: https://go-review.googlesource.com/38012
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2017-03-15 22:16:12 +00:00
Austin Clements
4a7cf960c3 runtime: make ReadMemStats STW for < 25µs
Currently ReadMemStats stops the world for ~1.7 ms/GB of heap because
it collects statistics from every single span. For large heaps, this
can be quite costly. This is particularly unfortunate because many
production infrastructures call this function regularly to collect and
report statistics.

Fix this by tracking the necessary cumulative statistics in the
mcaches. ReadMemStats still has to stop the world to stabilize these
statistics, but there are only O(GOMAXPROCS) mcaches to collect
statistics from, so this pause is only 25µs even at GOMAXPROCS=100.
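
The O(GOMAXPROCS) idea in miniature (names invented; the real mcache
tracks several statistics and is flushed under STW):

package main

import "fmt"

// mcacheLike accumulates allocation statistics locally on each P.
type mcacheLike struct{ localHeapAlloc uint64 }

// readHeapAlloc visits only the per-P caches -- O(GOMAXPROCS) work --
// instead of walking every span in the heap.
func readHeapAlloc(caches []*mcacheLike) uint64 {
	var total uint64
	for _, c := range caches {
		total += c.localHeapAlloc
	}
	return total
}

func main() {
	caches := []*mcacheLike{{1 << 20}, {2 << 20}}
	fmt.Println(readHeapAlloc(caches)) // 3145728
}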

Fixes #13613.

Change-Id: I3c0a4e14833f4760dab675efc1916e73b4c0032a
Reviewed-on: https://go-review.googlesource.com/34937
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2017-03-04 02:56:37 +00:00
Russ Cox
b788fd80e6 runtime: new profile buffer implementation supporting label pointers
The existing CPU profiling buffer is a slice of uintptr, but we want to
start including profiling label data in the profiles, and those labels need
to be pointers in order to let them describe rich information.

This CL implements a new profBuf type that holds both a slice of uint64
for data and a slice of unsafe.Pointer for profiling labels (aka tags).
Making the runtime use these buffers will happen in followup CLs.
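
The core layout, sketched (the real profBuf adds lock-free read/write
cursors, timestamps, and overflow handling):

package main

import (
	"fmt"
	"unsafe"
)

// profBuf holds numeric profile data and label pointers side by side;
// keeping tags in a []unsafe.Pointer keeps them visible to the GC.
type profBuf struct {
	data []uint64         // record headers and PCs
	tags []unsafe.Pointer // one label set per record
}

// write appends one record: a length word, its numeric words, and its
// label pointer.
func (b *profBuf) write(tag unsafe.Pointer, words ...uint64) {
	b.data = append(b.data, uint64(len(words)))
	b.data = append(b.data, words...)
	b.tags = append(b.tags, tag)
}

func main() {
	var b profBuf
	label := "request-123"
	b.write(unsafe.Pointer(&label), 0x1234)
	fmt.Println(len(b.data), len(b.tags)) // 2 1
}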

Change-Id: I9ff16b532d8edaf4ce0cbba1098229a561834efc
Reviewed-on: https://go-review.googlesource.com/36713
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2017-02-23 19:47:23 +00:00
Josh Bleecher Snyder
46a75870ad runtime: speed up fastrand() % n
This occurs a fair amount in the runtime for non-power-of-two n.
Use an alternative, faster formulation.
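
The alternative is the multiply-shift reduction (after Lemire), which
maps a uniform 32-bit value into [0, n) with one multiply and a shift:

package main

import (
	"fmt"
	"math/rand"
)

// fastrandn returns a value in [0, n) without the division that % n
// would need.
func fastrandn(x, n uint32) uint32 {
	return uint32(uint64(x) * uint64(n) >> 32)
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(fastrandn(rand.Uint32(), 5)) // values in [0, 5)
	}
}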

name           old time/op  new time/op  delta
Fastrandn/2-8  4.45ns ± 2%  2.09ns ± 3%  -53.12%  (p=0.000 n=14+14)
Fastrandn/3-8  4.78ns ±11%  2.06ns ± 2%  -56.94%  (p=0.000 n=15+15)
Fastrandn/4-8  4.76ns ± 9%  1.99ns ± 3%  -58.28%  (p=0.000 n=15+13)
Fastrandn/5-8  4.96ns ±13%  2.03ns ± 6%  -59.14%  (p=0.000 n=15+15)

name                    old time/op  new time/op  delta
SelectUncontended-8     33.7ns ± 2%  33.9ns ± 2%  +0.70%  (p=0.000 n=49+50)
SelectSyncContended-8   1.68µs ± 4%  1.65µs ± 4%  -1.54%  (p=0.000 n=50+45)
SelectAsyncContended-8   282ns ± 1%   277ns ± 1%  -1.50%  (p=0.000 n=48+43)
SelectNonblock-8        5.31ns ± 1%  5.32ns ± 1%    ~     (p=0.275 n=45+44)
SelectProdCons-8         585ns ± 3%   577ns ± 2%  -1.35%  (p=0.000 n=50+50)
GoroutineSelect-8       1.59ms ± 2%  1.59ms ± 1%    ~     (p=0.084 n=49+48)

Updates #16213

Change-Id: Ib555a4d7da2042a25c3976f76a436b536487d5b7
Reviewed-on: https://go-review.googlesource.com/36932
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2017-02-14 00:01:22 +00:00
Sokolov Yura
d03c124860 runtime: implement fastrand in go
So it could be inlined.

Using bit tricks, it can be implemented without a condition
(improved trick version by Minux Ma).
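
A branch-free sketch of the trick, assuming the classic 0x88888eef
feedback constant of the old C implementation:

package main

import "fmt"

// fastrand steps a 32-bit generator. The arithmetic shift of the sign
// bit yields 0 or all-ones, selecting the feedback mask without a
// condition. The state must be seeded nonzero.
func fastrand(x uint32) uint32 {
	return x<<1 ^ uint32(int32(x)>>31)&0x88888eef
}

func main() {
	x := uint32(1)
	for i := 0; i < 4; i++ {
		x = fastrand(x)
		fmt.Printf("%#x\n", x)
	}
}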

A simple benchmark shows it is faster on i386 and x86_64, though I
don't know whether it will be faster on other architectures.

benchmark                       old ns/op     new ns/op     delta
BenchmarkFastrand-3             2.79          1.48          -46.95%
BenchmarkFastrandHashiter-3     25.9          24.9          -3.86%

Change-Id: Ie2eb6d0f598c0bb5fac7f6ad0f8b5e3eddaa361b
Reviewed-on: https://go-review.googlesource.com/34782
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2017-02-10 19:16:29 +00:00
Keith Randall
688995d1e9 cmd/compile: do more type conversion inline
The code to do the conversion is smaller than the
call to the runtime.
The 1-result asserts need to call panic if they fail, but that
code is out of line.

The only conversions left in the runtime are those which
might allocate and those which might need to generate an itab.

Given the following types:
  type E interface{}
  type I interface { foo() }
  type I2 interface { foo(); bar() }
  type Big [10]int
  func (b Big) foo() { ... }

This CL inlines the following conversions:

was assertE2T
  var e E = ...
  b := i.(Big)
was assertE2T2
  var e E = ...
  b, ok := i.(Big)
was assertI2T
  var i I = ...
  b := i.(Big)
was assertI2T2
  var i I = ...
  b, ok := i.(Big)
was assertI2E
  var i I = ...
  e := i.(E)
was assertI2E2
  var i I = ...
  e, ok := i.(E)

These are the remaining runtime calls:

convT2E:
  var b Big = ...
  var e E = b
convT2I:
  var b Big = ...
  var i I = b
convI2I:
  var i2 I2 = ...
  var i I = i2
assertE2I:
  var e E = ...
  i := e.(I)
assertE2I2:
  var e E = ...
  i, ok := e.(I)
assertI2I:
  var i I = ...
  i2 := i.(I2)
assertI2I2:
  var i I = ...
  i2, ok := i.(I2)

Fixes #17405
Fixes #8422

Change-Id: Ida2367bf8ce3cd2c6bb599a1814f1d275afabe21
Reviewed-on: https://go-review.googlesource.com/32313
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-11-02 21:33:03 +00:00
Martin Möhrmann
d7b34d5f29 runtime: improve atoi implementation
- Adds overflow checks
- Adds parsing of negative integers
- Adds boolean return value to signal parsing errors
- Adds atoi32 for parsing of integers that fit in an int32
- Adds tests

Handling of errors to provide error messages
at the call sites is left to future CLs.
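
A sketch with the listed properties (not the runtime's exact code; the
most negative int is rejected for simplicity):

package main

import (
	"fmt"
	"math"
)

// atoi parses a decimal integer, reporting ok=false on bad syntax or
// overflow.
func atoi(s string) (int, bool) {
	neg := false
	if len(s) > 0 && s[0] == '-' {
		neg = true
		s = s[1:]
	}
	if len(s) == 0 {
		return 0, false
	}
	n := 0
	for i := 0; i < len(s); i++ {
		c := s[i]
		if c < '0' || c > '9' {
			return 0, false // not a digit
		}
		d := int(c - '0')
		if n > (math.MaxInt-d)/10 {
			return 0, false // would overflow
		}
		n = n*10 + d
	}
	if neg {
		n = -n
	}
	return n, true
}

func main() {
	fmt.Println(atoi("-42"))                  // -42 true
	fmt.Println(atoi("99999999999999999999")) // 0 false: overflow
}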

Updates #17718

Change-Id: I3cacd0ab1230b9efc5404c68edae7304d39bcbc0
Reviewed-on: https://go-review.googlesource.com/32390
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-11-01 14:04:39 +00:00
Austin Clements
87e48c5afd runtime, cmd/compile: rename memclr -> memclrNoHeapPointers
Since barrier-less memclr is only safe in very narrow circumstances,
this commit renames memclr to avoid accidentally calling memclr on
typed memory. This can cause subtle, non-deterministic bugs, so it's
worth some effort to prevent. In the near term, this will also prevent
bugs creeping in from any concurrent CLs that add calls to memclr; if
this happens, whichever patch hits master second will fail to compile.

This also adds the other new memclr variants to the compiler's
builtin.go to minimize the churn on that binary blob. We'll use these
in future commits.

Updates #17503.

Change-Id: I00eead049f5bd35ca107ea525966831f3d1ed9ca
Reviewed-on: https://go-review.googlesource.com/31369
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-28 18:20:33 +00:00
Austin Clements
4d6207790b runtime: consolidate h_allspans and mheap_.allspans
These are two ways to refer to the allspans array that hark back to
when the runtime was split between C and Go. Clean this up by making
mheap_.allspans a slice and eliminating h_allspans.

Change-Id: Ic9360d040cf3eb590b5dfbab0b82e8ace8525610
Reviewed-on: https://go-review.googlesource.com/30530
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-25 22:32:42 +00:00
Martin Möhrmann
252093f120 runtime: remove maxstring
Before this CL the runtime prevented printing of overlong strings with the print
function when the length of the string was determined to be corrupted.
Corruption was checked by comparing the string size against the limit
which was stored in maxstring.

However, maxstring was not updated everywhere that Go strings were created,
e.g. for string constants at compile time. As a result, the check for maximum
string length prevented the printing of some valid strings.

The protection maxstring provided did not warrant the bookkeeping
and global synchronization needed to keep maxstring updated to the
correct limit everywhere.

Fixes #16999

Change-Id: I62cc2f4362f333f75b77f199ce1a71aac0ff7aeb
Reviewed-on: https://go-review.googlesource.com/28813
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-09-08 15:57:01 +00:00
Dmitry Vyukov
fcd7c02c70 runtime: fix CPU underutilization
Runqempty is a critical predicate for the scheduler. If runqempty spuriously
returns true, then the scheduler can fail to schedule an arbitrary number of
runnable goroutines on idle Ps for an arbitrarily long time. With the addition
of runnext, the runqempty predicate became broken (it can spuriously return
true). Consider that runnext is not nil and the main array is empty. Runqempty
observes that the array is empty, then it is descheduled for some time. Then
the queue owner pushes another element to the queue, evicting runnext into
the array. Then the queue owner pops runnext. Then runqempty resumes and
observes runnext is nil and returns true. But there was no point in time when
the queue was empty.

Fix runqempty predicate to not return true spuriously.
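
A sketch of the fixed predicate: snapshot head, tail, and runnext, and
re-read the tail to make sure the snapshot was consistent (field names
follow the runtime's p; the types are stand-ins):

package main

import (
	"fmt"
	"sync/atomic"
	"unsafe"
)

// pLike carries just the fields runqempty needs.
type pLike struct {
	runqhead uint32
	runqtail uint32
	runnext  unsafe.Pointer // *g in the runtime
}

// runqempty returns true only if there really was a moment with no
// runnable work: re-checking the tail rules out the push/pop
// interleaving described above.
func runqempty(p *pLike) bool {
	for {
		head := atomic.LoadUint32(&p.runqhead)
		tail := atomic.LoadUint32(&p.runqtail)
		next := atomic.LoadPointer(&p.runnext)
		if tail == atomic.LoadUint32(&p.runqtail) {
			return head == tail && next == nil
		}
		// The tail moved under us; retry for a consistent snapshot.
	}
}

func main() {
	var p pLike
	fmt.Println(runqempty(&p)) // true: queue and runnext both empty
}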

Change-Id: Ifb7d75a699101f3ff753c4ce7c983cf08befd31e
Reviewed-on: https://go-review.googlesource.com/20858
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-05-03 10:06:32 +00:00
Austin Clements
61f56e925e runtime: fix pagesInUse accounting
When we grow the heap, we create a temporary "in use" span for the
memory acquired from the OS and then free that span to link it into
the heap. Hence, we (1) increase pagesInUse when we make the temporary
span so that (2) freeing the span will correctly decrease it.

However, currently step (1) increases pagesInUse by the number of
pages requested from the heap, while step (2) decreases it by the
number of pages requested from the OS (the size of the temporary
span). These aren't necessarily the same, since we round up the number
of pages we request from the OS, so steps 1 and 2 don't necessarily
cancel out like they're supposed to. Over time, this can add up and
cause pagesInUse to underflow and wrap around to 2^64. The garbage
collector computes the sweep ratio from this, so if this happens, the
sweep ratio becomes effectively infinite, causing the first allocation
on each P in a sweep cycle to sweep the entire heap. This makes
sweeping effectively STW.

Fix this by increasing pagesInUse in step 1 by the number of pages
requested from the OS, so that the two steps correctly cancel out. We
add a test that checks that the running total matches the actual state
of the heap.

Fixes #15022. For 1.6.x.

Change-Id: Iefd9d6abe37d0d447cbdbdf9941662e4f18eeffc
Reviewed-on: https://go-review.googlesource.com/21280
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2016-04-04 15:33:26 +00:00
Keith Randall
15ea61146e runtime: use unaligned loads on ppc64
benchmark                      old ns/op     new ns/op     delta
BenchmarkAlignedLoad-160       8.67          7.42          -14.42%
BenchmarkUnalignedLoad-160     8.63          7.37          -14.60%

Change-Id: Id4609d7b4038c4d2ec332efc4fe6f1adfb61b82b
Reviewed-on: https://go-review.googlesource.com/20812
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-03-18 19:21:53 +00:00
Emmanuel Odeke
6dfcc336c5 runtime: move testSchedLocalQueue* to export_test
Move functions testSchedLocalQueueLocal and testSchedLocalQueueSteal
from proc.go to export_test.go, the only site where they are used.

Fixes #14796

Change-Id: I16b6fa4a13835eab33f66a2c2e87a5f5c79b7bd3
Reviewed-on: https://go-review.googlesource.com/20640
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-03-13 00:34:58 +00:00