When everything is working correctly, any pointer the garbage
collector encounters can only point into a fully initialized heap
span, since the span must have been initialized before that pointer
could escape the heap allocator and become visible to the GC.
However, in various cases, we try to be defensive against bad
pointers. In findObject, this is just a sanity check: we never expect
to find a bad pointer, but programming errors can lead to them. In
spanOfHeap, we don't necessarily trust the pointer and we're trying to
check if it really does point to the heap, though it should always
point to something. Conservative scanning takes this to a new level,
since it can only guess that a word may be a pointer and must verify this.
In all of these cases, we have a problem that the span lookup and
check can race with span initialization, since the span becomes
visible to lookups before it's fully initialized.
Furthermore, we're about to start initializing the span without the
heap lock held, which is going to introduce races where accesses were
previously protected by the heap lock.
To address this, this CL makes accesses to mspan.state atomic, and
ensures that the span is fully initialized before setting the state to
mSpanInUse. All loads are now atomic, and in any case where we don't
trust the pointer, it first atomically loads the span state and checks
that it's mSpanInUse, after which it will have synchronized with span
initialization and can safely check the other span fields.
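As an illustration of the ordering this relies on, here is a minimal toy
model (not the runtime's actual mspan code; names and types are
simplified):

	package main

	import "sync/atomic"

	// span stands in for mspan: its fields are written first and its state
	// word is set atomically last, so a lookup that atomically observes the
	// "in use" state has synchronized with the initialization.
	type span struct {
		state uint32 // 0 = not in use, 1 = stands in for mSpanInUse
		base  uintptr
		limit uintptr
	}

	// lookup models a defensive check such as spanOfHeap: it does not trust
	// p, so it first atomically loads the state and only then reads the
	// other fields.
	func lookup(s *span, p uintptr) bool {
		if atomic.LoadUint32(&s.state) != 1 {
			return false // not (or not yet) a fully initialized heap span
		}
		// Safe: the load above synchronizes with the store in publish.
		return p >= s.base && p < s.limit
	}

	// publish models span initialization, now done without the heap lock.
	func publish(s *span, base, limit uintptr) {
		s.base, s.limit = base, limit
		atomic.StoreUint32(&s.state, 1) // set the state last, atomically
	}

	func main() {
		s := new(span)
		publish(s, 0x1000, 0x3000)
		println(lookup(s, 0x2000)) // true
	}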
For #10958, #24543, but a good fix in general.
Change-Id: I518b7c63555b02064b98aa5f802c92b758fef853
Reviewed-on: https://go-review.googlesource.com/c/go/+/203286
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Netpoll must always be initialized when TestNetpollBreak is launched.
However, when the test is run standalone, that isn't the case, so
initialization must be forced.
Updates #27707
Change-Id: I28147f3834f3d6aca982c6a298feadc09b55f66e
Reviewed-on: https://go-review.googlesource.com/c/go/+/204058
Run-TryBot: Clément Chigot <clement.chigot%atos.net@gtempaccount.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Fixes #35053
Change-Id: I31853d434610880044c169e0c1e9732f97ff1bdb
Reviewed-on: https://go-review.googlesource.com/c/go/+/202444
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David du Colombier <0intro@gmail.com>
The new netpollBreak function can be used to interrupt a blocking netpoll.
This function is not currently used; it will be used by later CLs.
Updates #27707
Change-Id: I5cb936609ba13c3c127ea1368a49194fc58c9f4d
Reviewed-on: https://go-review.googlesource.com/c/go/+/171824
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Currently, we map and grow the heap a whole arena (64MB) at a time.
Unfortunately, in order to fix #32828, we need to switch from
scavenging inline with allocation back to scavenging on heap growth,
but heap-growth scavenging happens in large jumps because we grow the
heap in large jumps.
In order to prepare for better heap-growth scavenging, this CL
separates mapping more space for the heap from actually "growing" it
(tracking the new space with spans). Instead, growing the heap keeps
track of the "current arena" it's growing into. It tracks that with new
spans as needed, and only maps more arena space when the current arena
is inadequate. The effect to the user is the same, but this will let
us scavenge on much smaller increments of heap growth.
There are two slight subtleties to this change:
1. If an allocation requires mapping a new arena and that new arena
isn't contiguous with the current arena, we don't want to lose the
unused space in the current arena, so we have to immediately track
that with a span.
2. The mapped space must be accounted as released and idle, even
though it isn't actually tracked in a span.
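To make the bookkeeping concrete, here is a toy model of the "current
arena" idea (sizes and names are illustrative, not the runtime's code):

	package main

	import "fmt"

	const arenaSize = 64 << 20 // 64MB, as in the text above

	// heap carves growth requests out of a current arena and only maps a
	// fresh arena when the current one can't satisfy the request.
	type heap struct {
		curBase, curEnd uintptr // current arena: [curBase, curEnd)
		mappedArenas    int
	}

	func (h *heap) grow(bytes uintptr) uintptr {
		if h.curEnd-h.curBase < bytes {
			// The current arena is inadequate: map a new one. (In the real
			// runtime, an unused tail that isn't contiguous with the new
			// arena must first be tracked with a span so it isn't lost.)
			h.mappedArenas++
			h.curBase = uintptr(h.mappedArenas) * arenaSize
			h.curEnd = h.curBase + arenaSize
		}
		base := h.curBase
		h.curBase += bytes
		return base
	}

	func main() {
		var h heap
		for i := 0; i < 3; i++ {
			h.grow(1 << 20) // small growth steps no longer map 64MB each
		}
		fmt.Println("arenas mapped:", h.mappedArenas) // 1
	}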
For #32828, since this makes heap-growth scavenging far more
effective, especially at small heap sizes. For example, this change is
necessary for TestPhysicalMemoryUtilization to pass once we remove
inline scavenging.
Change-Id: I300e74a0534062467e4ce91cdc3508e5ef9aa73a
Reviewed-on: https://go-review.googlesource.com/c/go/+/189957
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
With gccgo, if we generate getg inlined, the backend may cache
the address of the TLS variable, which will become invalid after
a thread switch.
Currently there is no known bug for this. But if we don't
implement this carefully, we may get subtle bugs. This CL adds a
test that will fail loudly if this is wrong. (See also
https://go.googlesource.com/gofrontend/+/refs/heads/master/libgo/runtime/proc.c#333
and an incorrect attempt CL 185337.)
Note: at least on Linux/AMD64, even with an incorrect
implementation, this only fails if the test is compiled with
-fPIC, which is not the default setting for the gccgo test suite. So
some manual work is needed. Maybe we could extend the test suite
to run the runtime test with more settings (e.g. PIC and static).
Change-Id: I459a3b4c31f09b9785c0eca19b7756f80e8ef54c
Reviewed-on: https://go-review.googlesource.com/c/go/+/186357
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently there's an invariant in the runtime wherein the heap lock
can only be acquired on the system stack, otherwise a self-deadlock
could occur if the stack grows while the lock is held.
This invariant is upheld and documented in a number of situations (e.g.
allocManual, freeManual), but there are other places where the invariant
is either not maintained at all, which risks self-deadlock (e.g.
setGCPercent, gcResetMarkState, allocmcache), or is maintained but
undocumented (e.g. gcSweep, readGCStats_m).
This change adds go:systemstack to any function that acquires the heap
lock or adds a systemstack(func() { ... }) around the critical section,
where appropriate. It also documents the invariant on (*mheap).lock
directly and updates repetitive documentation to refer to that comment.
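The shape of the fix, as a toy model (systemstack and the heap lock here
are stand-ins, not the runtime's real implementations):

	package main

	import "fmt"

	// The invariant: the heap lock may only be taken on the system stack,
	// where the stack cannot grow, so a stack growth can never occur while
	// the lock is held.
	var onSystemStack bool

	func systemstack(fn func()) {
		prev := onSystemStack
		onSystemStack = true // stands in for switching to g0
		fn()
		onSystemStack = prev
	}

	func lockHeap() {
		if !onSystemStack {
			panic("heap lock taken on user stack") // would risk self-deadlock
		}
	}

	// The pattern this change applies around each critical section.
	func main() {
		systemstack(func() {
			lockHeap()
			fmt.Println("heap lock held on the system stack")
			// ... critical section, then unlock ...
		})
	}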
Fixes #32105.
Change-Id: I702b1290709c118b837389c78efde25c51a2cafb
Reviewed-on: https://go-review.googlesource.com/c/go/+/177857
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
This change tracks the number of potential free and unscavenged huge
pages which will be used to inform the rate at which scavenging should
occur.
For #30333.
Change-Id: I47663e5ffb64cac44ffa10db158486783f707479
Reviewed-on: https://go-review.googlesource.com/c/go/+/170860
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This change adds two new treap iteration types: one for large
unscavenged spans (contain at least one huge page) and one for small
unscavenged spans. This allows us to scavenge the huge spans first by
first iterating over the large ones, then the small ones.
Also, since we now depend on physHugePageSize being a power of two,
ensure that that's the case when it's retrieved from the OS.
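The power-of-two check amounts to the usual bit trick; a minimal sketch:

	package main

	import "fmt"

	// isPowerOfTwo reports whether a positive x has exactly one bit set.
	func isPowerOfTwo(x uintptr) bool {
		return x != 0 && x&(x-1) == 0
	}

	func main() {
		fmt.Println(isPowerOfTwo(2 << 20)) // true (e.g. 2MB huge pages)
		fmt.Println(isPowerOfTwo(3 << 20)) // false
	}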
For #30333.
Change-Id: I51662740205ad5e4905404a0856f5f2b2d2a5680
Reviewed-on: https://go-review.googlesource.com/c/go/+/174399
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This change introduces a treapIterFilter type which represents the
power set of states described by a treapIterType.
This change then adds a treapIterFilter field to each treap node
indicating the types of spans that live in that subtree. The field is
maintained via the same mechanism used to maintain maxPages. This allows
pred, succ, start, and end to be judicious about which subtrees they
visit, ensuring that iteration avoids traversing irrelevant territory.
Without this change, repeated scavenging attempts can end up being N^2
as the scavenger walks over what it already scavenged before finding new
spans available for scavenging.
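The idea behind the filter, as a small self-contained model (the real
treap carries more state; types and names here are illustrative):

	package main

	import "fmt"

	// iterFilter is a bit set over span types; a subtree's filter is the
	// union of the types stored below it, so a walk can skip whole
	// subtrees that contain nothing of the requested type.
	type iterFilter uint8

	const (
		scavenged iterFilter = 1 << iota
		unscavenged
	)

	type node struct {
		typ         iterFilter
		filter      iterFilter // typ | left.filter | right.filter
		left, right *node
	}

	func (n *node) updateFilter() {
		n.filter = n.typ
		if n.left != nil {
			n.filter |= n.left.filter
		}
		if n.right != nil {
			n.filter |= n.right.filter
		}
	}

	// visit walks only subtrees whose filter intersects want.
	func visit(n *node, want iterFilter, f func(*node)) {
		if n == nil || n.filter&want == 0 {
			return // nothing of interest anywhere below here
		}
		visit(n.left, want, f)
		if n.typ&want != 0 {
			f(n)
		}
		visit(n.right, want, f)
	}

	func main() {
		leaf := &node{typ: unscavenged}
		leaf.updateFilter()
		root := &node{typ: scavenged, left: leaf}
		root.updateFilter()
		count := 0
		visit(root, unscavenged, func(*node) { count++ })
		fmt.Println(count) // 1: only the unscavenged node is visited
	}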
Finally, this change also only scavenges a span once it is removed from
the treap. There was always an invariant that spans owned by the treap
may not be mutated in-place, but with this change violating that
invariant can cause issues with scavenging.
For #30333.
Change-Id: I8040b997e21c94a8d3d9c8c6accfe23cebe0c3d3
Reviewed-on: https://go-review.googlesource.com/c/go/+/174878
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This change modifies the treap implementation to support holding all
spans in a single treap, instead of keeping them all in separate treaps.
This improves ergonomics for nearly all treap-related callsites.
With that said, iteration is now more expensive, but it never occurs on
the fast path, only on scavenging-related paths.
This change opens up the opportunity for further optimizations, such as
splitting spans without treap removal (taking treap removal off the span
allocator's critical path) as well as improvements to treap iteration
(building linked lists for each iteration type and managing them on
insert/removal, since those operations should be less frequent).
For #30333.
Change-Id: I3dac97afd3682a37fda09ae8656a770e1369d0a9
Reviewed-on: https://go-review.googlesource.com/c/go/+/174398
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This change modifies the treap implementation to be address-ordered
instead of size-ordered, and further augments it so it may be used for
allocation. It then modifies the find method to implement a first-fit
allocation policy.
This change to the treap implementation consequently makes it so that
spans are scavenged in highest-address-first order without any
additional changes to the scavenging code. Because the treap itself is
now address ordered, and the scavenging code iterates over it in
reverse, the highest address is now chosen instead of the largest span.
This change also renames the now wrongly-named "scavengeLargest" method
on mheap to just "scavengeLocked" and also fixes up logic in that method
which made assumptions about size.
For #30333.
Change-Id: I94b6f3209211cc1bfdc8cdaea04152a232cfbbb4
Reviewed-on: https://go-review.googlesource.com/c/go/+/164101
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This change exports the runtime mTreap in export_test.go and then adds a
series of tests which check that the invariants of the treap are
maintained under different operations. These tests also include tests
for the treap iterator type.
Also, we note that the find() operation on the treap never actually was
best-fit, so the tests just ensure that it returns an appropriately
sized span.
For #30333.
Change-Id: If81f7c746dda6677ebca925cb0a940134701b894
Reviewed-on: https://go-review.googlesource.com/c/go/+/164100
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
A goroutine should be preempted if it runs for 10ms without blocking.
We found that this doesn't work for goroutines that make short system calls.
For example, the following program can hang for seconds without this fix:
$ cat main.go

	package main

	import (
		"runtime"
		"syscall"
	)

	func main() {
		runtime.GOMAXPROCS(1)
		c := make(chan int)
		go func() {
			c <- 1
			for {
				t := syscall.Timespec{
					Nsec: 300,
				}
				if true {
					syscall.Nanosleep(&t, nil)
				}
			}
		}()
		<-c
	}

$ time go run main.go

	real	0m8.796s
	user	0m0.367s
	sys	0m0.893s
Updates #10958
Change-Id: Id3be54d3779cc28bfc8b33fe578f13778f1ae2a2
Reviewed-on: https://go-review.googlesource.com/c/go/+/170138
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change makes mTreap's iterator type, treapIter, bidirectional
instead of unidirectional. This change helps support moving the find
operation on a treap to return an iterator instead of a treapNode, in
order to hide the details of the treap when accessing elements.
For #28479.
Change-Id: I5dbea4fd4fb9bede6e81bfd089f2368886f98943
Reviewed-on: https://go-review.googlesource.com/c/156918
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change adds the treapIter type which provides an iterator
abstraction for walking over an mTreap. In particular, the mTreap type
now has iter() and rev() for iterating both forwards (smallest to
largest) and backwards (largest to smallest). It also has an erase()
method for erasing elements at the iterator's current position.
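A sketch of how the new iterator reads at call sites (a slice-backed toy,
not the treap itself; method names follow the text above):

	package main

	import "fmt"

	type treap struct{ spans []int }

	type iter struct {
		t   *treap
		i   int
		dir int // +1 for iter(), -1 for rev()
	}

	func (t *treap) iter() iter { return iter{t, 0, +1} }
	func (t *treap) rev() iter  { return iter{t, len(t.spans) - 1, -1} }

	func (it iter) valid() bool { return it.i >= 0 && it.i < len(it.t.spans) }
	func (it iter) span() int   { return it.t.spans[it.i] }
	func (it iter) next() iter  { it.i += it.dir; return it }

	// erase removes the element at the iterator's current position.
	func (t *treap) erase(it iter) {
		t.spans = append(t.spans[:it.i], t.spans[it.i+1:]...)
	}

	func main() {
		t := &treap{spans: []int{1, 2, 3}}
		for it := t.rev(); it.valid(); it = it.next() {
			fmt.Print(it.span(), " ") // 3 2 1
		}
		fmt.Println()
		t.erase(t.iter())
		fmt.Println(t.spans) // [2 3]
	}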
For #28479.
While the expectation is that this change will slow down Go programs,
the impact on Go1 and Garbage is negligible.
Go1: https://perf.golang.org/search?q=upload:20181214.6
Garbage: https://perf.golang.org/search?q=upload:20181214.11
Change-Id: I60dbebbbe73cbbe7b78d45d2093cec12cc0bc649
Reviewed-on: https://go-review.googlesource.com/c/151537
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
When we delete an element, and it was the last element in the bucket,
update the slots between the new last element and the old last element
with the marker that says "no more elements beyond here".
Change-Id: I8efeeddf4c9b9fc491c678f84220a5a5094c9c76
Reviewed-on: https://go-review.googlesource.com/c/142438
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
This change extends the test function ReadMemStatsSlow to re-compute
the HeapReleased statistic such that it is checked in testing to be
consistent with the bookkeeping done in the runtime.
Change-Id: I49f5c2620f5731edea8e9f768744cf997dcd7c22
Reviewed-on: https://go-review.googlesource.com/c/142397
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently, if the runtime overflows the g0 stack on Windows, it leads
to an infinite recursion:
1. Something overflows the g0 stack bounds and calls morestack.
2. morestack determines it's on the g0 stack and hence cannot grow the
stack, so it calls badmorestackg0 (which prints "fatal: morestack on
g0") followed by abort.
3. abort performs an INT $3, which turns into a Windows
_EXCEPTION_BREAKPOINT exception.
4. This enters the Windows sigtramp, which ensures we're on the g0
stack and calls exceptionhandler.
5. exceptionhandler has a stack check prologue, so it determines that
it's out of stack and calls morestack.
6. goto 2
Fix this by making the exception handler avoid stack checks until it
has ruled out an abort and by blowing away the stack bounds in
lastcontinuehandler before we print the final fatal traceback (which
itself involves a lot of stack bounds checks).
Fixes #21382.
Change-Id: Ie66e91f708e18d131d97f22b43f9ac26f3aece5a
Reviewed-on: https://go-review.googlesource.com/120857
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
If the runtime code panics due to a bad index or slice expression,
then throw instead of panicking. This will skip calls to recover and dump
the entire runtime stack trace. The runtime should never panic due to
an out of bounds index, and this will help with debugging if it does.
For #24991
Updates #25201
Change-Id: I85a9feded8f0de914ee1558425931853223c0514
Reviewed-on: https://go-review.googlesource.com/121515
Reviewed-by: Austin Clements <austin@google.com>
This adds a mechanism for debuggers to safely inject calls to Go
functions on amd64. Debuggers must participate in a protocol with the
runtime, and need to know how to lay out a call frame, but the runtime
support takes care of the details of handling live pointers in
registers, stack growth, and detecting the trickier conditions when it
is unsafe to inject a user function call.
Fixes #21678.
Updates derekparker/delve#119.
Change-Id: I56d8ca67700f1f77e19d89e7fc92ab337b228834
Reviewed-on: https://go-review.googlesource.com/109699
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
The hmap field in the maptype is only used by the runtime to check that the
sizes of the hmap structure created by the compiler and the runtime agree.
Comments are already present about the hmap structure definitions in the
compiler and runtime needing to be in sync.
Add a test that checks that the runtime's hmap size is as expected, to detect
when the compiler's and runtime's hmap sizes diverge, instead of checking
this at runtime when a map is created.
Change-Id: I974945ebfdb66883a896386a17bbcae62a18cf2a
Reviewed-on: https://go-review.googlesource.com/91796
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
getcallersp is intrinsified, and so the dummy arg is no longer
needed. Remove it, as well as a few dummy args that are solely
to feed getcallersp.
Change-Id: Ibb6c948ff9c56537042b380ac3be3a91b247aaa6
Reviewed-on: https://go-review.googlesource.com/109596
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
CL 106735 switched to the new softfloat support on GOARM=5.
ARM assembly code that uses FP instructions not guarded on GOARM,
if any, will break. The easiest way to fix it is probably to use a Go
implementation on GOARM=5, like:
	MOVB	runtime·goarm(SB), R11
	CMP	$5, R11
	BEQ	arm5
	... FP instructions ...
	RET
arm5:
	... CALL or JMP to Go implementation ...
Change-Id: I52fc76fac9c854ebe7c6c856c365fba35d3f560a
Reviewed-on: https://go-review.googlesource.com/107475
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Borrowed from cmd/compile, TestSizeof ensures
that the sizes of important types don't change unexpectedly.
It also helps reviewers see the impact of intended changes.
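The pattern looks roughly like this (a sketch of the idea; sampleStruct is
a hypothetical stand-in, not an actual runtime type):

	package main

	import (
		"fmt"
		"reflect"
		"unsafe"
	)

	type sampleStruct struct {
		ptr   *int
		count uint32
		flag  bool
	}

	func main() {
		const is64bit = unsafe.Sizeof(uintptr(0)) == 8
		tests := []struct {
			val            interface{}
			size32, size64 uintptr // expected sizes on 32/64-bit platforms
		}{
			{sampleStruct{}, 12, 16},
		}
		for _, tt := range tests {
			want := tt.size32
			if is64bit {
				want = tt.size64
			}
			if got := reflect.TypeOf(tt.val).Size(); got != want {
				fmt.Printf("Sizeof(%T) = %d, want %d\n", tt.val, got, want)
			}
		}
	}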
Change-Id: If57955f0c3e66054de3f40c6bba585b88694c7be
Reviewed-on: https://go-review.googlesource.com/99837
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Currently large sysReserve calls on some OSes don't actually reserve
the memory, but just check that it can be reserved. This was important
when we called sysReserve to "reserve" many gigabytes for the heap up
front, but now that we map memory in small increments as we need it,
this complication is no longer necessary.
This has one curious side benefit: currently, on Linux, allocations
that are large enough to be rejected by mmap wind up freezing the
application for a long time before it panics. This happens because
sysReserve doesn't reserve the memory, so sysMap calls mmap_fixed,
which calls mmap, which fails because the mapping is too large.
However, mmap_fixed doesn't inspect *why* mmap fails, so it falls back
to probing every page in the desired region individually with mincore
before performing an (otherwise dangerous) MAP_FIXED mapping, which
will also fail. This takes a long time for a large region. Now this
logic is gone, so the mmap failure leads to an immediate panic.
Updates #10460.
Change-Id: I8efe88c611871cdb14f99fadd09db83e0161ca2e
Reviewed-on: https://go-review.googlesource.com/85888
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
This replaces the contiguous heap arena mapping with a potentially
sparse mapping that can support heap mappings anywhere in the address
space.
This has several advantages over the current approach:
* There is no longer any limit on the size of the Go heap. (Currently
it's limited to 512GB.) Hence, this fixes #10460.
* It eliminates many failure modes of heap initialization and
growing. In particular it eliminates any possibility of panicking
with an address space conflict. This can happen for many reasons and
even causes a low but steady rate of TSAN test failures because of
conflicts with the TSAN runtime. See #16936 and #11993.
* It eliminates the notion of "non-reserved" heap, which was added
because creating huge address space reservations (particularly on
64-bit) led to huge process VSIZE. This was at best confusing and at
worst conflicted badly with ulimit -v. However, the non-reserved
heap logic is complicated, can race with other mappings in non-pure
Go binaries (e.g., #18976), and requires that the entire heap be
either reserved or non-reserved. We currently maintain the latter
property, but it's quite difficult to convince yourself of that, and
hence difficult to keep correct. This logic is still present, but
will be removed in the next CL.
* It fixes problems on 32-bit where skipping over parts of the address
space leads to mapping huge (and never-to-be-used) metadata
structures. See #19831.
This also completely rewrites and significantly simplifies
mheap.sysAlloc, which has been a source of many bugs. E.g., #21044,
#20259, #18651, and #13143 (and maybe #23222).
This change also makes it possible to allocate individual objects
larger than 512GB. As a result, a few tests that expected huge
allocations to fail needed to be changed to make even larger
allocations. However, at the moment attempting to allocate a humongous
object may cause the program to freeze for several minutes on Linux as
we fall back to probing every page with addrspace_free. That logic
(and this failure mode) will be removed in the next CL.
Fixes #10460.
Fixes #22204 (since it rewrites the code involved).
This slightly slows down compilebench and the x/benchmarks garbage
benchmark.
name old time/op new time/op delta
Template 184ms ± 1% 185ms ± 1% ~ (p=0.065 n=10+9)
Unicode 86.9ms ± 3% 86.3ms ± 1% ~ (p=0.631 n=10+10)
GoTypes 599ms ± 0% 602ms ± 0% +0.56% (p=0.000 n=10+9)
Compiler 2.87s ± 1% 2.89s ± 1% +0.51% (p=0.002 n=9+10)
SSA 7.29s ± 1% 7.25s ± 1% ~ (p=0.182 n=10+9)
Flate 118ms ± 2% 118ms ± 1% ~ (p=0.113 n=9+9)
GoParser 147ms ± 1% 148ms ± 1% +1.07% (p=0.003 n=9+10)
Reflect 401ms ± 1% 404ms ± 1% +0.71% (p=0.003 n=10+9)
Tar 175ms ± 1% 175ms ± 1% ~ (p=0.604 n=9+10)
XML 209ms ± 1% 210ms ± 1% ~ (p=0.052 n=10+10)
(https://perf.golang.org/search?q=upload:20171231.4)
name old time/op new time/op delta
Garbage/benchmem-MB=64-12 2.23ms ± 1% 2.25ms ± 1% +0.84% (p=0.000 n=19+19)
(https://perf.golang.org/search?q=upload:20171231.3)
Relative to the start of the sparse heap changes (starting at and
including "runtime: fix various contiguous bitmap assumptions"),
overall slowdown is roughly 1% on GC-intensive benchmarks:
name old time/op new time/op delta
Template 183ms ± 1% 185ms ± 1% +1.32% (p=0.000 n=9+9)
Unicode 84.9ms ± 2% 86.3ms ± 1% +1.65% (p=0.000 n=9+10)
GoTypes 595ms ± 1% 602ms ± 0% +1.19% (p=0.000 n=9+9)
Compiler 2.86s ± 0% 2.89s ± 1% +0.91% (p=0.000 n=9+10)
SSA 7.19s ± 0% 7.25s ± 1% +0.75% (p=0.000 n=8+9)
Flate 117ms ± 1% 118ms ± 1% +1.10% (p=0.000 n=10+9)
GoParser 146ms ± 2% 148ms ± 1% +1.48% (p=0.002 n=10+10)
Reflect 398ms ± 1% 404ms ± 1% +1.51% (p=0.000 n=10+9)
Tar 173ms ± 1% 175ms ± 1% +1.17% (p=0.000 n=10+10)
XML 208ms ± 1% 210ms ± 1% +0.62% (p=0.011 n=10+10)
[Geo mean] 369ms 373ms +1.17%
(https://perf.golang.org/search?q=upload:20180101.2)
name old time/op new time/op delta
Garbage/benchmem-MB=64-12 2.22ms ± 1% 2.25ms ± 1% +1.51% (p=0.000 n=20+19)
(https://perf.golang.org/search?q=upload:20180101.3)
Change-Id: I5daf4cfec24b252e5a57001f0a6c03f22479d0f0
Reviewed-on: https://go-review.googlesource.com/85887
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Handle make(map[any]any) and make(map[any]any, hint) where
hint <= BUCKETSIZE specially, to allow for faster map initialization
and to improve binary size by using runtime calls with fewer arguments.
When hint is smaller than or equal to BUCKETSIZE, overLoadFactor(hint, 0)
is false and no buckets would be allocated by makemap:
* If hmap needs to be allocated on the stack then only hmap's hash0
field needs to be initialized and no call to makemap is needed.
* If hmap needs to be allocated on the heap then a new special
makehmap function will allocate hmap and initialize hmap's
hash0 field.
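A sketch of the decision (bucketCnt mirrors BUCKETSIZE; illustrative, not
the compiler's code):

	package main

	import "fmt"

	const bucketCnt = 8 // BUCKETSIZE

	// needsBuckets models the condition above: for a hint of at most
	// bucketCnt, overLoadFactor(hint, 0) is false, so no buckets need to
	// be allocated up front and the general makemap call can be skipped.
	func needsBuckets(hint int) bool {
		return hint > bucketCnt
	}

	func main() {
		fmt.Println(needsBuckets(8))  // false: only hmap.hash0 needs initializing
		fmt.Println(needsBuckets(64)) // true: general makemap path
	}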
Reduces size of the godoc by ~36kb.
AMD64
name old time/op new time/op delta
NewEmptyMap 16.6ns ± 2% 5.5ns ± 2% -66.72% (p=0.000 n=10+10)
NewSmallMap 64.8ns ± 1% 56.5ns ± 1% -12.75% (p=0.000 n=9+10)
Updates #6853
Change-Id: I624e90da6775afaa061178e95db8aca674f44e9b
Reviewed-on: https://go-review.googlesource.com/61190
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Currently systemstack always calls its argument, even if we're already
on the system stack. Unfortunately, traceback with _TraceJump stops at
the first systemstack it sees, which often cuts off runtime stacks
early in profiles.
Fix this by performing a tail call if we're already on the system
stack. This eliminates it from the traceback entirely, so it won't
stop prematurely (or all get mushed into a single node in the profile
graph).
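The change in behavior, as a toy model (the real code does this with a
tail call in assembly; the flag here just stands in for "on g0"):

	package main

	import "fmt"

	type g struct{ onSystemStack bool }

	// systemstack calls fn directly when already on the system stack, so
	// no extra systemstack frame appears in tracebacks; otherwise it
	// "switches" as before.
	func systemstack(gp *g, fn func()) {
		if gp.onSystemStack {
			fn() // already there: plain (tail) call, nothing to unwind
			return
		}
		gp.onSystemStack = true // stands in for switching to g0
		fn()
		gp.onSystemStack = false // stands in for switching back
	}

	func main() {
		gp := &g{onSystemStack: true}
		systemstack(gp, func() { fmt.Println("ran without a stack switch") })
	}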
Change-Id: Ibc69e8765e899f8d3806078517b8c7314da196f4
Reviewed-on: https://go-review.googlesource.com/74050
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Currently, there is a single bit for LockOSThread, so two calls to
LockOSThread followed by one call to UnlockOSThread will unlock the
thread. There's evidence (#20458) that this is almost never what
people want or expect and it makes these APIs very hard to use
correctly or reliably.
Change this so LockOSThread/UnlockOSThread can be nested and the
calling goroutine will not be unwired until UnlockOSThread has been
called as many times as LockOSThread has. This should fix the vast
majority of incorrect uses while having no effect on the vast majority
of correct uses.
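The new semantics, as a toy model (a simple nesting counter; the real
runtime keeps a count on the m):

	package main

	import "fmt"

	type m struct{ lockedCount uint32 }

	func (mp *m) LockOSThread() { mp.lockedCount++ }

	func (mp *m) UnlockOSThread() {
		if mp.lockedCount == 0 {
			return // no active LockOSThread call: a no-op
		}
		mp.lockedCount--
	}

	// wired reports whether the goroutine is still wired to its thread.
	func (mp *m) wired() bool { return mp.lockedCount > 0 }

	func main() {
		var mp m
		mp.LockOSThread()
		mp.LockOSThread()
		mp.UnlockOSThread()
		fmt.Println(mp.wired()) // true: one more Unlock is needed
	}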
Fixes #20458.
Change-Id: I1464e5e9a0ea4208fbb83638ee9847f929a2bacb
Reviewed-on: https://go-review.googlesource.com/45752
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
overLoadFactor wasn't really doing what it says it does.
It was reporting overOrEqualToLoadFactor. That's actually what we
want when adding an entry to a map, but it isn't what we want when
constructing a map in the first place.
The impetus for this change is that if you make a map with a hint
of exactly 8 (which happens, for example, with the unitMap in
time/format.go), we allocate 2 buckets for it instead of 1.
Instead, make overLoadFactor really report when it is > the max
allowed load factor, not >=. Adjust the callers who want to ensure
that the map is no more than the max load factor after an insertion
by adding a +1 to the current (pre-addition) size.
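A sketch of the corrected predicate and its effect on a hint of exactly 8
(constants mirror the runtime's 6.5 load factor; illustrative code, not a
copy of map.go):

	package main

	import "fmt"

	const (
		bucketCnt     = 8  // maximum entries per bucket
		loadFactorNum = 13 // 6.5, expressed as 13/2
		loadFactorDen = 2
	)

	// overLoadFactor reports whether count items would put 2^B buckets
	// strictly over the maximum load factor (note: >, not >=).
	func overLoadFactor(count int, B uint8) bool {
		return count > bucketCnt &&
			uint64(count) > loadFactorNum*(uint64(1)<<B)/loadFactorDen
	}

	func main() {
		// makemap-style sizing: grow B until the hint fits.
		B := uint8(0)
		for overLoadFactor(8, B) {
			B++
		}
		fmt.Println("buckets for hint=8:", 1<<B) // 1, not 2

		// Insertion-time callers now ask about the post-insertion count:
		fmt.Println(overLoadFactor(8+1, 0)) // true: adding a 9th entry should grow
	}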
Change-Id: Ie8d85344800a9a870036b637b1031ddd9e4b93f9
Reviewed-on: https://go-review.googlesource.com/61053
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
Currently runtime.rwmutex is written to block the calling goroutine
rather than the calling thread. However, rwmutex was intended to be
used in the scheduler, which means it needs to be a thread-level
synchronization primitive.
Hence, this modifies rwmutex to synchronize threads instead of
goroutines. This has the consequence of making it write-barrier-free,
which is also important for using it in the scheduler.
The implementation makes three changes: it replaces the "w" semaphore
with a mutex, since this was all it was being used for anyway; it
replaces "writerSem" with a single pending M that parks on its note;
and it replaces "readerSem" with a list of Ms that park on their notes
plus a pass count that together emulate a counting semaphore. I
model-checked the safety and liveness of this implementation through
>1 billion schedules.
For #20738.
Change-Id: I3cf5a18c266a96a3f38165083812803510217787
Reviewed-on: https://go-review.googlesource.com/47071
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This is a runtime version of sync.RWMutex that can be used by code in
the runtime package. The type is not quite the same, in that the zero
value is not valid.
For future use by CL 43713.
Updates #19546
Change-Id: I431eb3688add16ce1274dab97285f555b72735bf
Reviewed-on: https://go-review.googlesource.com/45991
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently, we mix objects with pointers and objects without pointers
("noscan" objects) together in memory. As a result, for every object
we grey, we have to check that object's heap bits to find out if it's
noscan, which adds to the per-object cost of GC. This also hurts the
TLB footprint of the garbage collector because it decreases the
density of scannable objects at the page level.
This commit improves the situation by using separate spans for noscan
objects. This will allow a much simpler noscan check (in a follow up
CL), eliminate the need to clear the bitmap of noscan objects (in a
follow up CL), and improves TLB footprint by increasing the density of
scannable objects.
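One way to fold the noscan bit into the span's class, consistent with the
shape of the runtime's spanClass but shown here as a stand-alone sketch:

	package main

	import "fmt"

	// spanClass packs a size class and a noscan bit into one small ID, so
	// pointerful and noscan objects of the same size class end up in
	// different spans.
	type spanClass uint8

	func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
		sc := spanClass(sizeclass) << 1
		if noscan {
			sc |= 1
		}
		return sc
	}

	func (sc spanClass) sizeclass() uint8 { return uint8(sc >> 1) }
	func (sc spanClass) noscan() bool     { return sc&1 != 0 }

	func main() {
		sc := makeSpanClass(5, true)
		fmt.Println(sc.sizeclass(), sc.noscan()) // 5 true
	}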
This is also a step toward eliminating dead bits, since the current
noscan check depends on checking the dead bit of the first word.
This has no effect on the heap size of the garbage benchmark.
We'll measure the performance change of this after the follow-up
optimizations.
This is a cherry-pick from dev.garbage commit d491e550c3. The only
non-trivial merge conflict was in updatememstats in mstats.go, where
we now have to separate the per-spanclass stats from the per-sizeclass
stats.
Change-Id: I13bdc4869538ece5649a8d2a41c6605371618e40
Reviewed-on: https://go-review.googlesource.com/41251
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
The lfstack API is still a C-style API: lfstacks all have unhelpful
type uint64 and the APIs are package-level functions. Make the code
more readable and Go-style by creating an lfstack type with methods
for push, pop, and empty.
Change-Id: I64685fa3be0e82ae2d1a782a452a50974440a827
Reviewed-on: https://go-review.googlesource.com/38290
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, when printing tracebacks of other threads during
GOTRACEBACK=crash, if the thread is on the system stack we print only
the header for the user goroutine and fail to print its stack. This
happens because we passed the g0 to traceback instead of curg. The g0
never has anything set in its gobuf, so traceback doesn't print
anything.
Fix this by passing _g_.m.curg to traceback instead of the g0.
Fixes #19494.
Change-Id: Idfabf94d6a725e9cdf94a3923dead6455ef3b217
Reviewed-on: https://go-review.googlesource.com/38012
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently ReadMemStats stops the world for ~1.7 ms/GB of heap because
it collects statistics from every single span. For large heaps, this
can be quite costly. This is particularly unfortunate because many
production infrastructures call this function regularly to collect and
report statistics.
Fix this by tracking the necessary cumulative statistics in the
mcaches. ReadMemStats still has to stop the world to stabilize these
statistics, but there are only O(GOMAXPROCS) mcaches to collect
statistics from, so this pause is only 25µs even at GOMAXPROCS=100.
Fixes #13613.
Change-Id: I3c0a4e14833f4760dab675efc1916e73b4c0032a
Reviewed-on: https://go-review.googlesource.com/34937
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
The existing CPU profiling buffer is a slice of uintptr, but we want to
start including profiling label data in the profiles, and those labels need
to be pointers in order to let them describe rich information.
This CL implements a new profBuf type that holds both a slice of uint64
for data and a slice of unsafe.Pointer for profiling labels (aka tags).
Making the runtime use these buffers will happen in followup CLs.
Change-Id: I9ff16b532d8edaf4ce0cbba1098229a561834efc
Reviewed-on: https://go-review.googlesource.com/36713
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
So it could be inlined.
Using bit tricks, it could be implemented without a conditional
(improved trick version by Minux Ma).
A simple benchmark shows it is faster on i386 and x86_64, though
I don't know whether it will be faster on other architectures:
benchmark old ns/op new ns/op delta
BenchmarkFastrand-3 2.79 1.48 -46.95%
BenchmarkFastrandHashiter-3 25.9 24.9 -3.86%
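The bit trick, shown on a generic 32-bit Galois LFSR step (the feedback
constant is illustrative, not necessarily the one the runtime uses): an
arithmetic shift of the sign bit produces an all-ones or all-zeros mask,
which replaces the branch.

	package main

	import "fmt"

	const poly = 0x88888eef // illustrative feedback constant

	// stepBranch advances the LFSR with an explicit branch.
	func stepBranch(x uint32) uint32 {
		hi := x & 0x80000000
		x <<= 1
		if hi != 0 {
			x ^= poly
		}
		return x
	}

	// stepBranchless computes the same thing without a conditional.
	func stepBranchless(x uint32) uint32 {
		mask := uint32(int32(x)>>31) & poly // all ones iff the top bit was set
		return x<<1 ^ mask
	}

	func main() {
		x := uint32(1)
		for i := 0; i < 1000; i++ {
			if stepBranch(x) != stepBranchless(x) {
				panic("mismatch")
			}
			x = stepBranchless(x)
		}
		fmt.Printf("both variants agree; state after 1000 steps: %#x\n", x)
	}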
Change-Id: Ie2eb6d0f598c0bb5fac7f6ad0f8b5e3eddaa361b
Reviewed-on: https://go-review.googlesource.com/34782
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The code to do the conversion is smaller than the
call to the runtime.
The 1-result asserts need to call panic if they fail, but that
code is out of line.
The only conversions left in the runtime are those which
might allocate and those which might need to generate an itab.
Given the following types:
	type E interface{}
	type I interface { foo() }
	type I2 interface { foo(); bar() }
	type Big [10]int
	func (b Big) foo() { ... }
This CL inlines the following conversions:
was assertE2T
	var e E = ...
	b := e.(Big)
was assertE2T2
	var e E = ...
	b, ok := e.(Big)
was assertI2T
	var i I = ...
	b := i.(Big)
was assertI2T2
	var i I = ...
	b, ok := i.(Big)
was assertI2E
	var i I = ...
	e := i.(E)
was assertI2E2
	var i I = ...
	e, ok := i.(E)
These are the remaining runtime calls:
convT2E:
	var b Big = ...
	var e E = b
convT2I:
	var b Big = ...
	var i I = b
convI2I:
	var i2 I2 = ...
	var i I = i2
assertE2I:
	var e E = ...
	i := e.(I)
assertE2I2:
	var e E = ...
	i, ok := e.(I)
assertI2I:
	var i I = ...
	i2 := i.(I2)
assertI2I2:
	var i I = ...
	i2, ok := i.(I2)
Fixes #17405
Fixes #8422
Change-Id: Ida2367bf8ce3cd2c6bb599a1814f1d275afabe21
Reviewed-on: https://go-review.googlesource.com/32313
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
- Adds overflow checks
- Adds parsing of negative integers
- Adds boolean return value to signal parsing errors
- Adds atoi32 for parsing of integers that fit in an int32
- Adds tests
Handling of errors to provide error messages
at the call sites is left to future CLs.
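A sketch of the parsing contract (an illustrative reimplementation with
the same shape as the runtime's atoi/atoi32, not the actual code):

	package main

	import "fmt"

	// atoi parses a decimal integer, reporting failure (bad syntax or
	// overflow) through the boolean result instead of returning garbage.
	func atoi(s string) (int, bool) {
		const maxInt = int(^uint(0) >> 1)
		if s == "" {
			return 0, false
		}
		neg := false
		if s[0] == '-' {
			neg = true
			s = s[1:]
			if s == "" {
				return 0, false
			}
		}
		n := 0
		for i := 0; i < len(s); i++ {
			c := s[i]
			if c < '0' || c > '9' {
				return 0, false
			}
			d := int(c - '0')
			if n > (maxInt-d)/10 {
				return 0, false // would overflow
			}
			n = n*10 + d
		}
		if neg {
			n = -n
		}
		return n, true
	}

	// atoi32 additionally requires the value to fit in an int32.
	func atoi32(s string) (int32, bool) {
		n, ok := atoi(s)
		if !ok || int(int32(n)) != n {
			return 0, false
		}
		return int32(n), true
	}

	func main() {
		fmt.Println(atoi32("-123"))        // -123 true
		fmt.Println(atoi32("99999999999")) // 0 false
	}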
Updates #17718
Change-Id: I3cacd0ab1230b9efc5404c68edae7304d39bcbc0
Reviewed-on: https://go-review.googlesource.com/32390
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Since barrier-less memclr is only safe in very narrow circumstances,
this commit renames memclr to avoid accidentally calling memclr on
typed memory. This can cause subtle, non-deterministic bugs, so it's
worth some effort to prevent. In the near term, this will also prevent
bugs creeping in from any concurrent CLs that add calls to memclr; if
this happens, whichever patch hits master second will fail to compile.
This also adds the other new memclr variants to the compiler's
builtin.go to minimize the churn on that binary blob. We'll use these
in future commits.
Updates #17503.
Change-Id: I00eead049f5bd35ca107ea525966831f3d1ed9ca
Reviewed-on: https://go-review.googlesource.com/31369
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
These are two ways to refer to the allspans array that hark back to
when the runtime was split between C and Go. Clean this up by making
mheap_.allspans a slice and eliminating h_allspans.
Change-Id: Ic9360d040cf3eb590b5dfbab0b82e8ace8525610
Reviewed-on: https://go-review.googlesource.com/30530
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Before this CL the runtime prevented printing of overlong strings with the print
function when the length of the string was determined to be corrupted.
Corruption was checked by comparing the string size against the limit
which was stored in maxstring.
However, maxstring was not updated everywhere Go strings were created,
e.g. for string constants at compile time. As a result, the check for maximum
string length prevented the printing of some valid strings.
The protection maxstring provided did not warrant the bookkeeping
and global synchronization needed to keep maxstring updated to the
correct limit everywhere.
Fixes #16999
Change-Id: I62cc2f4362f333f75b77f199ce1a71aac0ff7aeb
Reviewed-on: https://go-review.googlesource.com/28813
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Runqempty is a critical predicate for the scheduler. If runqempty spuriously
returns true, then the scheduler can fail to schedule an arbitrary number of
runnable goroutines on idle Ps for an arbitrarily long time. With the addition
of runnext, the runqempty predicate became broken (it can spuriously return
true). Consider that runnext is not nil and the main array is empty. Runqempty
observes that the array is empty, then it is descheduled for some time. Then
the queue owner pushes another element onto the queue, evicting runnext
into the array. Then the queue owner pops runnext. Then runqempty resumes,
observes that runnext is nil, and returns true. But there was no point
in time when the queue was actually empty.
Fix the runqempty predicate so that it does not spuriously return true.
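The fixed predicate takes a consistent snapshot; a sketch (field names
mirror the runtime's p, but this is a self-contained model, not proc.go):

	package main

	import (
		"fmt"
		"sync/atomic"
	)

	type p struct {
		runqhead uint32
		runqtail uint32
		runnext  uintptr
	}

	// runqempty reads head, tail, and runnext, then re-reads tail; only if
	// tail hasn't changed do the reads describe a consistent snapshot, so
	// a concurrent push/pop can no longer make it spuriously return true.
	func runqempty(pp *p) bool {
		for {
			head := atomic.LoadUint32(&pp.runqhead)
			tail := atomic.LoadUint32(&pp.runqtail)
			runnext := atomic.LoadUintptr(&pp.runnext)
			if tail == atomic.LoadUint32(&pp.runqtail) {
				return head == tail && runnext == 0
			}
		}
	}

	func main() {
		pp := &p{}
		fmt.Println(runqempty(pp)) // true
		atomic.StoreUintptr(&pp.runnext, 1)
		fmt.Println(runqempty(pp)) // false
	}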
Change-Id: Ifb7d75a699101f3ff753c4ce7c983cf08befd31e
Reviewed-on: https://go-review.googlesource.com/20858
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When we grow the heap, we create a temporary "in use" span for the
memory acquired from the OS and then free that span to link it into
the heap. Hence, we (1) increase pagesInUse when we make the temporary
span so that (2) freeing the span will correctly decrease it.
However, currently step (1) increases pagesInUse by the number of
pages requested from the heap, while step (2) decreases it by the
number of pages requested from the OS (the size of the temporary
span). These aren't necessarily the same, since we round up the number
of pages we request from the OS, so steps 1 and 2 don't necessarily
cancel out like they're supposed to. Over time, this can add up and
cause pagesInUse to underflow and wrap around to 2^64. The garbage
collector computes the sweep ratio from this, so if this happens, the
sweep ratio becomes effectively infinite, causing the first allocation
on each P in a sweep cycle to sweep the entire heap. This makes
sweeping effectively STW.
Fix this by increasing pagesInUse in step 1 by the number of pages
requested from the OS, so that the two steps correctly cancel out. We
add a test that checks that the running total matches the actual state
of the heap.
Fixes #15022. For 1.6.x.
Change-Id: Iefd9d6abe37d0d447cbdbdf9941662e4f18eeffc
Reviewed-on: https://go-review.googlesource.com/21280
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Move functions testSchedLocalQueueLocal and testSchedLocalQueueSteal
from proc.go to export_test.go, the only place they are used.
Fixes #14796
Change-Id: I16b6fa4a13835eab33f66a2c2e87a5f5c79b7bd3
Reviewed-on: https://go-review.googlesource.com/20640
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>