cmd/vendor/github.com/google/pprof: refresh from upstream
Update vendored pprof to commit 4fc39a00b6b8c1aad05260f01429ec70e127252c
from github.com/google/pprof (2017-11-01).

Fixes #19380
Updates #21047

Change-Id: Ib64a94a45209039e5945acbcfa0392790c8ee41e
Reviewed-on: https://go-review.googlesource.com/57370
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
parent 3039bff9d0
commit aec345d638
8  src/cmd/vendor/github.com/google/pprof/.gitignore  (generated, vendored, new file)

@@ -0,0 +1,8 @@
+.DS_Store
+*~
+*.orig
+*.exe
+.*.swp
+core
+coverage.txt
+pprof
@@ -1,9 +1,9 @@
Want to contribute? Great! First, read this page (including the small print at the end).

### Before you contribute

Before we can use your code, you must sign the
-[Google Individual Contributor License Agreement]
-(https://cla.developers.google.com/about/google-individual)
+[Google Individual Contributor License Agreement](https://cla.developers.google.com/about/google-individual)
(CLA), which you can do online. The CLA is necessary mainly because you own the
copyright to your changes, even after your contribution becomes part of our
codebase, so we need your permission to use and distribute your code. We also
@@ -17,11 +17,11 @@ possibly guide you. Coordinating up front makes it much easier to avoid
frustration later on.

### Code reviews
-All submissions, including submissions by project members, require review. We
-use Github pull requests for this purpose.
+
+All submissions, including submissions by project members, require review.
+We use Github pull requests for this purpose.

### The small print
-Contributions made by corporations are covered by a different agreement than
-the one above, the
-[Software Grant and Corporate Contributor License Agreement]
-(https://cla.developers.google.com/about/google-corporate).
+
+Contributions made by corporations are covered by a different agreement than the one above,
+the [Software Grant and Corporate Contributor License Agreement](https://cla.developers.google.com/about/google-corporate).
29  src/cmd/vendor/github.com/google/pprof/README.md  (generated, vendored)

@@ -1,3 +1,6 @@
+[](https://travis-ci.org/google/pprof)
+[](https://codecov.io/gh/google/pprof)
+
# Introduction

pprof is a tool for visualization and analysis of profiling data.
@@ -24,7 +27,7 @@ them through the use of the native binutils tools (addr2line and nm).

Prerequisites:

-- Go development kit. Known to work with Go 1.5.
+- Go development kit. Requires Go 1.7 or newer.
  Follow [these instructions](http://golang.org/doc/code.html) to install the
  go tool and set up GOPATH.

@@ -35,6 +38,10 @@ To build and install it, use the `go get` tool.

    go get github.com/google/pprof

+Remember to set GOPATH to the directory where you want pprof to be
+installed. The binary will be in `$GOPATH/bin` and the sources under
+`$GOPATH/src/github.com/google/pprof`.
+
# Basic usage

pprof can read a profile from a file or directly from a server via http.
@@ -70,12 +77,28 @@ This will open a simple shell that takes pprof commands to generate reports.
Type 'help' for available commands/options.
```

+## Run pprof via a web interface
+
+If the `-http` flag is specified, pprof starts a web server at
+the specified host:port that provides an interactive web-based interface to pprof.
+Host is optional, and is "localhost" by default. Port is optional, and is a
+random available port by default. `-http=":"` starts a server locally at
+a random port.
+
+```
+pprof -http=[host]:[port] [main_binary] profile.pb.gz
+```
+
+The preceding command should automatically open your web browser at
+the right page; if not, you can manually visit the specified port in
+your web browser.
+
## Using pprof with Linux Perf

pprof can read `perf.data` files generated by the
-[Linux perf](https://perf.wiki.kernel.org/index.php) tool by using the
+[Linux perf](https://perf.wiki.kernel.org/index.php/Main_Page) tool by using the
`perf_to_profile` program from the
-[perf_data_converter](http://github.com/google/perf_data_converter) package.
+[perf_data_converter](https://github.com/google/perf_data_converter) package.

## Further documentation
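An aside that is not part of the vendored README: for Go programs, the http source mentioned above is typically the standard net/http/pprof endpoint. A minimal sketch, assuming only the standard library; the address and the seconds value are arbitrary choices:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	// With this server running, for example:
	//   pprof -http=: http://localhost:6060/debug/pprof/profile?seconds=5
	// fetches a 5-second CPU profile and opens the web interface on it.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```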
8  src/cmd/vendor/github.com/google/pprof/doc/developer/profile.proto.md  (generated, vendored)

@@ -128,9 +128,11 @@ size of 6MB.

Labels can be string-based or numeric. They are represented by the Label
message, with a key identifying the label and either a string or numeric
-value. For numeric labels, by convention the key represents the measurement unit
-of the numeric value. So for the previous example, the samples would have labels
-{“bytes”, 2097152} and {“bytes”, 4194304}.
+value. For numeric labels, the measurement unit can be specified in the profile.
+If no unit is specified and the key is "request" or "alignment",
+then the units are assumed to be "bytes". Otherwise when no unit is specified
+the key will be used as the measurement unit of the numeric value. All tags with
+the same key should have the same unit.

## Keep and drop expressions

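An aside, not part of the vendored document: a minimal sketch of how a numeric label and its unit might look when built with the github.com/google/pprof/profile package. The NumLabel and NumUnit field names come from that package; the values are invented for illustration.

```go
package main

import (
	"fmt"

	"github.com/google/pprof/profile"
)

func main() {
	// One sample carrying a numeric label. The unit travels in NumUnit;
	// if it were omitted, a key of "bytes", "request" or "alignment"
	// would be treated as bytes, per the convention described above.
	s := &profile.Sample{
		Value: []int64{1},
		NumLabel: map[string][]int64{
			"bytes": {2097152},
		},
		NumUnit: map[string][]string{
			"bytes": {"bytes"},
		},
	}
	fmt.Println(s.NumLabel, s.NumUnit)
}
```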
97  src/cmd/vendor/github.com/google/pprof/doc/pprof.md  (generated, vendored)

@@ -29,7 +29,40 @@ location. pprof is agnostic to the profile semantics, so other uses are
possible. The interpretation of the reports generated by pprof depends on the
semantics defined by the source of the profile.

-# General usage
+# Usage Modes
+
+There are few different ways of using `pprof`.
+
+## Report generation
+
+If a report format is requested on the command line:
+
+    pprof <format> [options] source
+
+pprof will generate a report in the specified format and exit.
+Formats can be either text, or graphical. See below for details about
+supported formats, options, and sources.
+
+## Interactive terminal use
+
+Without a format specifier:
+
+    pprof [options] source
+
+pprof will start an interactive shell in which the user can type
+commands. Type `help` to get online help.
+
+## Web interface
+
+If a host:port is specified on the command line:
+
+    pprof -http=[host]:[port] [options] source
+
+pprof will start serving HTTP requests on the specified port. Visit
+the HTTP url corresponding to the port (typically `http://<host>:<port>/`)
+in a browser to see the interface.
+
+# Details

The objective of pprof is to generate a report for a profile. The report is
generated from a location hierarchy, which is reconstructed from the profile
@@ -38,14 +71,12 @@ itself, while *cum* is the value of the location plus all its
descendants. Samples that include a location multiple times (eg for recursive
functions) are counted only once per location.

-The basic usage of pprof is
-
-    pprof <format> [options] source
-
-Where *format* selects the nature of the report, and *options* configure the
-contents of the report. Each option has a value, which can be boolean, numeric,
-or strings. While only one format can be specified, most options can be selected
-independently of each other.
+## Options
+
+*options* configure the contents of a report. Each option has a value,
+which can be boolean, numeric, or strings. While only one format can
+be specified, most options can be selected independently of each
+other.

Some common pprof options are:

@@ -74,10 +105,56 @@ number of values - 1) or the name of the sample value.

Sample values are numeric values associated to a unit. If pprof can recognize
these units, it will attempt to scale the values to a suitable unit for
-visualization. The `unite=` option will force the use of a specific unit. For
-example, `sample_index=sec` will force any time values to be reported in
+visualization. The `unit=` option will force the use of a specific unit. For
+example, `unit=sec` will force any time values to be reported in
seconds. pprof recognizes most common time and memory size units.

## Tag filtering

Samples in a profile may have tags. These tags have a name and a value; this
value can be either numeric or a string. pprof can select samples from a
profile based on these tags using the `-tagfocus` and `-tagignore` options.

Generally, these options work as follows:
* **-tagfocus=_regex_** or **-tagfocus=_range_:** Restrict to samples with tags
  matched by regexp or in range.
* **-tagignore=_regex_** or **-tagignore=_range_:** Discard samples with tags
  matched by regexp or in range.

When using `-tagfocus=regex` and `-tagignore=regex`, the regex will be compared
to each value associated with each tag. If one specifies a value
like `regex1,regex2`, then only samples with a tag value matching `regex1`
and a tag value matching `regex2` will be kept.

In addition to being able to filter on tag values, one can specify the name of
the tag which a certain value must be associated with using the notation
`-tagfocus=tagName=value`. Here, the `tagName` must match the tag's name
exactly, and the value can be either a regex or a range. If one specifies
a value like `regex1,regex2`, then samples with a tag value (paired with the
specified tag name) matching either `regex1` or matching `regex2` will match.

Here are examples explaining how `tagfocus` can be used:

* `-tagfocus 128kb:512kb` accepts a sample iff it has any numeric tag with
  memory value in the specified range.
* `-tagfocus mytag=128kb:512kb` accepts a sample iff it has a numeric tag
  `mytag` with memory value in the specified range. There isn't a way to say
  `-tagfocus mytag=128kb:512kb,16kb:32kb`
  or `-tagfocus mytag=128kb:512kb,mytag2=128kb:512kb`. Just single value or
  range for numeric tags.
* `-tagfocus someregex` accepts a sample iff it has any string tag with
  `tagName:tagValue` string matching specified regexp. In the future, this
  will change to accept sample iff it has any string tag with `tagValue` string
  matching specified regexp.
* `-tagfocus mytag=myvalue1,myvalue2` matches if either of the two tag values
  are present.

`-tagignore` works similarly, except that it discards matching samples, instead
of keeping them.

If both the `-tagignore` and `-tagfocus` expressions (either a regexp or a
range) match a given sample, then the sample will be discarded.
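An aside, not part of the vendored document: in Go programs, the string tags discussed above usually come from profiler labels set with the standard runtime/pprof package. A minimal sketch; the label key and value are invented for illustration:

```go
package main

import (
	"context"
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("cpu.pb.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	pprof.StartCPUProfile(f)
	defer pprof.StopCPUProfile()

	// CPU samples taken while this function runs carry the label, so a
	// command such as `pprof -tagfocus handler=index ./prog cpu.pb.gz`
	// can restrict reports to them.
	pprof.Do(context.Background(), pprof.Labels("handler", "index"), func(ctx context.Context) {
		busyWork()
	})
}

func busyWork() {
	n := 0
	for i := 0; i < 200000000; i++ {
		n += i
	}
	_ = n
}
```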
## Text reports

pprof text reports show the location hierarchy in text format.
8  src/cmd/vendor/github.com/google/pprof/driver/driver.go  (generated, vendored)

@@ -29,10 +29,10 @@ import (
// manager. Then it generates a report formatted according to the
// options selected through the flags package.
func PProf(o *Options) error {
-	return internaldriver.PProf(o.InternalOptions())
+	return internaldriver.PProf(o.internalOptions())
}

-func (o *Options) InternalOptions() *plugin.Options {
+func (o *Options) internalOptions() *plugin.Options {
	var obj plugin.ObjTool
	if o.Obj != nil {
		obj = &internalObjTool{o.Obj}
@@ -273,9 +273,9 @@ type internalSymbolizer struct {
}

func (s *internalSymbolizer) Symbolize(mode string, srcs plugin.MappingSources, prof *profile.Profile) error {
-	isrcs := plugin.MappingSources{}
+	isrcs := MappingSources{}
	for m, s := range srcs {
		isrcs[m] = s
	}
-	return s.Symbolize(mode, isrcs, prof)
+	return s.Symbolizer.Symbolize(mode, isrcs, prof)
}
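An aside, not part of the diff: the last change above matters because calling s.Symbolize from within Symbolize would call the same method again and recurse forever, while s.Symbolizer.Symbolize forwards to the embedded implementation. A minimal sketch of that pitfall, with invented type and method names:

```go
package main

import "fmt"

type inner struct{}

func (inner) Do(msg string) { fmt.Println("inner:", msg) }

type wrapper struct {
	inner
}

// Do shadows the promoted method. Calling w.Do(msg) here would invoke this
// same method again and never terminate; w.inner.Do(msg) forwards to the
// embedded type, which is the shape of the pprof fix.
func (w wrapper) Do(msg string) {
	w.inner.Do(msg)
}

func main() {
	wrapper{}.Do("hello")
}
```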
19  src/cmd/vendor/github.com/google/pprof/internal/binutils/addr2liner.go  (generated, vendored)
@ -21,6 +21,7 @@ import (
|
||||
"os/exec"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
)
|
||||
@ -36,6 +37,7 @@ const (
|
||||
// addr2Liner is a connection to an addr2line command for obtaining
|
||||
// address and line number information from a binary.
|
||||
type addr2Liner struct {
|
||||
mu sync.Mutex
|
||||
rw lineReaderWriter
|
||||
base uint64
|
||||
|
||||
@ -170,9 +172,10 @@ func (d *addr2Liner) readFrame() (plugin.Frame, bool) {
|
||||
Line: linenumber}, false
|
||||
}
|
||||
|
||||
// addrInfo returns the stack frame information for a specific program
|
||||
// address. It returns nil if the address could not be identified.
|
||||
func (d *addr2Liner) addrInfo(addr uint64) ([]plugin.Frame, error) {
|
||||
func (d *addr2Liner) rawAddrInfo(addr uint64) ([]plugin.Frame, error) {
|
||||
d.mu.Lock()
|
||||
defer d.mu.Unlock()
|
||||
|
||||
if err := d.rw.write(fmt.Sprintf("%x", addr-d.base)); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@ -201,6 +204,16 @@ func (d *addr2Liner) addrInfo(addr uint64) ([]plugin.Frame, error) {
|
||||
stack = append(stack, frame)
|
||||
}
|
||||
}
|
||||
return stack, err
|
||||
}
|
||||
|
||||
// addrInfo returns the stack frame information for a specific program
|
||||
// address. It returns nil if the address could not be identified.
|
||||
func (d *addr2Liner) addrInfo(addr uint64) ([]plugin.Frame, error) {
|
||||
stack, err := d.rawAddrInfo(addr)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Get better name from nm if possible.
|
||||
if len(stack) > 0 && d.nm != nil {
|
||||
|
5  src/cmd/vendor/github.com/google/pprof/internal/binutils/addr2liner_llvm.go  (generated, vendored)
@ -21,6 +21,7 @@ import (
|
||||
"os/exec"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
)
|
||||
@ -32,6 +33,7 @@ const (
|
||||
// llvmSymbolizer is a connection to an llvm-symbolizer command for
|
||||
// obtaining address and line number information from a binary.
|
||||
type llvmSymbolizer struct {
|
||||
sync.Mutex
|
||||
filename string
|
||||
rw lineReaderWriter
|
||||
base uint64
|
||||
@ -150,6 +152,9 @@ func (d *llvmSymbolizer) readFrame() (plugin.Frame, bool) {
|
||||
// addrInfo returns the stack frame information for a specific program
|
||||
// address. It returns nil if the address could not be identified.
|
||||
func (d *llvmSymbolizer) addrInfo(addr uint64) ([]plugin.Frame, error) {
|
||||
d.Lock()
|
||||
defer d.Unlock()
|
||||
|
||||
if err := d.rw.write(fmt.Sprintf("%s 0x%x", d.filename, addr-d.base)); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
19  src/cmd/vendor/github.com/google/pprof/internal/binutils/addr2liner_nm.go  (generated, vendored)
@ -48,22 +48,23 @@ func newAddr2LinerNM(cmd, file string, base uint64) (*addr2LinerNM, error) {
|
||||
if cmd == "" {
|
||||
cmd = defaultNM
|
||||
}
|
||||
var b bytes.Buffer
|
||||
c := exec.Command(cmd, "-n", file)
|
||||
c.Stdout = &b
|
||||
if err := c.Run(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return parseAddr2LinerNM(base, &b)
|
||||
}
|
||||
|
||||
func parseAddr2LinerNM(base uint64, nm io.Reader) (*addr2LinerNM, error) {
|
||||
a := &addr2LinerNM{
|
||||
m: []symbolInfo{},
|
||||
}
|
||||
|
||||
var b bytes.Buffer
|
||||
c := exec.Command(cmd, "-n", file)
|
||||
c.Stdout = &b
|
||||
|
||||
if err := c.Run(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Parse nm output and populate symbol map.
|
||||
// Skip lines we fail to parse.
|
||||
buf := bufio.NewReader(&b)
|
||||
buf := bufio.NewReader(nm)
|
||||
for {
|
||||
line, err := buf.ReadString('\n')
|
||||
if line == "" && err != nil {
|
||||
|
80  src/cmd/vendor/github.com/google/pprof/internal/binutils/binutils.go  (generated, vendored)
@ -24,14 +24,21 @@ import (
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/google/pprof/internal/elfexec"
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
)
|
||||
|
||||
// A Binutils implements plugin.ObjTool by invoking the GNU binutils.
|
||||
// SetConfig must be called before any of the other methods.
|
||||
type Binutils struct {
|
||||
mu sync.Mutex
|
||||
rep *binrep
|
||||
}
|
||||
|
||||
// binrep is an immutable representation for Binutils. It is atomically
|
||||
// replaced on every mutation to provide thread-safe access.
|
||||
type binrep struct {
|
||||
// Commands to invoke.
|
||||
llvmSymbolizer string
|
||||
llvmSymbolizerFound bool
|
||||
@ -47,11 +54,38 @@ type Binutils struct {
|
||||
fast bool
|
||||
}
|
||||
|
||||
// get returns the current representation for bu, initializing it if necessary.
|
||||
func (bu *Binutils) get() *binrep {
|
||||
bu.mu.Lock()
|
||||
r := bu.rep
|
||||
if r == nil {
|
||||
r = &binrep{}
|
||||
initTools(r, "")
|
||||
bu.rep = r
|
||||
}
|
||||
bu.mu.Unlock()
|
||||
return r
|
||||
}
|
||||
|
||||
// update modifies the rep for bu via the supplied function.
|
||||
func (bu *Binutils) update(fn func(r *binrep)) {
|
||||
r := &binrep{}
|
||||
bu.mu.Lock()
|
||||
defer bu.mu.Unlock()
|
||||
if bu.rep == nil {
|
||||
initTools(r, "")
|
||||
} else {
|
||||
*r = *bu.rep
|
||||
}
|
||||
fn(r)
|
||||
bu.rep = r
|
||||
}
|
||||
|
||||
// SetFastSymbolization sets a toggle that makes binutils use fast
|
||||
// symbolization (using nm), which is much faster than addr2line but
|
||||
// provides only symbol name information (no file/line).
|
||||
func (b *Binutils) SetFastSymbolization(fast bool) {
|
||||
b.fast = fast
|
||||
func (bu *Binutils) SetFastSymbolization(fast bool) {
|
||||
bu.update(func(r *binrep) { r.fast = fast })
|
||||
}
|
||||
|
||||
// SetTools processes the contents of the tools option. It
|
||||
@ -59,7 +93,11 @@ func (b *Binutils) SetFastSymbolization(fast bool) {
|
||||
// of the form t:path, where cmd will be used to look only for the
|
||||
// tool named t. If t is not specified, the path is searched for all
|
||||
// tools.
|
||||
func (b *Binutils) SetTools(config string) {
|
||||
func (bu *Binutils) SetTools(config string) {
|
||||
bu.update(func(r *binrep) { initTools(r, config) })
|
||||
}
|
||||
|
||||
func initTools(b *binrep, config string) {
|
||||
// paths collect paths per tool; Key "" contains the default.
|
||||
paths := make(map[string][]string)
|
||||
for _, t := range strings.Split(config, ",") {
|
||||
@ -91,11 +129,8 @@ func findExe(cmd string, paths []string) (string, bool) {
|
||||
|
||||
// Disasm returns the assembly instructions for the specified address range
|
||||
// of a binary.
|
||||
func (b *Binutils) Disasm(file string, start, end uint64) ([]plugin.Inst, error) {
|
||||
if b.addr2line == "" {
|
||||
// Update the command invocations if not initialized.
|
||||
b.SetTools("")
|
||||
}
|
||||
func (bu *Binutils) Disasm(file string, start, end uint64) ([]plugin.Inst, error) {
|
||||
b := bu.get()
|
||||
cmd := exec.Command(b.objdump, "-d", "-C", "--no-show-raw-insn", "-l",
|
||||
fmt.Sprintf("--start-address=%#x", start),
|
||||
fmt.Sprintf("--stop-address=%#x", end),
|
||||
@ -109,11 +144,8 @@ func (b *Binutils) Disasm(file string, start, end uint64) ([]plugin.Inst, error)
|
||||
}
|
||||
|
||||
// Open satisfies the plugin.ObjTool interface.
|
||||
func (b *Binutils) Open(name string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
if b.addr2line == "" {
|
||||
// Update the command invocations if not initialized.
|
||||
b.SetTools("")
|
||||
}
|
||||
func (bu *Binutils) Open(name string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
b := bu.get()
|
||||
|
||||
// Make sure file is a supported executable.
|
||||
// The pprof driver uses Open to sniff the difference
|
||||
@ -140,7 +172,7 @@ func (b *Binutils) Open(name string, start, limit, offset uint64) (plugin.ObjFil
|
||||
return nil, fmt.Errorf("unrecognized binary: %s", name)
|
||||
}
|
||||
|
||||
func (b *Binutils) openMachO(name string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
func (b *binrep) openMachO(name string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
of, err := macho.Open(name)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Parsing %s: %v", name, err)
|
||||
@ -153,7 +185,7 @@ func (b *Binutils) openMachO(name string, start, limit, offset uint64) (plugin.O
|
||||
return &fileAddr2Line{file: file{b: b, name: name}}, nil
|
||||
}
|
||||
|
||||
func (b *Binutils) openELF(name string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
func (b *binrep) openELF(name string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
ef, err := elf.Open(name)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Parsing %s: %v", name, err)
|
||||
@ -202,7 +234,7 @@ func (b *Binutils) openELF(name string, start, limit, offset uint64) (plugin.Obj
|
||||
|
||||
// file implements the binutils.ObjFile interface.
|
||||
type file struct {
|
||||
b *Binutils
|
||||
b *binrep
|
||||
name string
|
||||
base uint64
|
||||
buildID string
|
||||
@ -263,22 +295,27 @@ func (f *fileNM) SourceLine(addr uint64) ([]plugin.Frame, error) {
|
||||
// information). It can be slow for large binaries with debug
|
||||
// information.
|
||||
type fileAddr2Line struct {
|
||||
once sync.Once
|
||||
file
|
||||
addr2liner *addr2Liner
|
||||
llvmSymbolizer *llvmSymbolizer
|
||||
}
|
||||
|
||||
func (f *fileAddr2Line) SourceLine(addr uint64) ([]plugin.Frame, error) {
|
||||
f.once.Do(f.init)
|
||||
if f.llvmSymbolizer != nil {
|
||||
return f.llvmSymbolizer.addrInfo(addr)
|
||||
}
|
||||
if f.addr2liner != nil {
|
||||
return f.addr2liner.addrInfo(addr)
|
||||
}
|
||||
return nil, fmt.Errorf("could not find local addr2liner")
|
||||
}
|
||||
|
||||
func (f *fileAddr2Line) init() {
|
||||
if llvmSymbolizer, err := newLLVMSymbolizer(f.b.llvmSymbolizer, f.name, f.base); err == nil {
|
||||
f.llvmSymbolizer = llvmSymbolizer
|
||||
return f.llvmSymbolizer.addrInfo(addr)
|
||||
return
|
||||
}
|
||||
|
||||
if addr2liner, err := newAddr2Liner(f.b.addr2line, f.name, f.base); err == nil {
|
||||
@ -290,13 +327,14 @@ func (f *fileAddr2Line) SourceLine(addr uint64) ([]plugin.Frame, error) {
|
||||
if nm, err := newAddr2LinerNM(f.b.nm, f.name, f.base); err == nil {
|
||||
f.addr2liner.nm = nm
|
||||
}
|
||||
return f.addr2liner.addrInfo(addr)
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf("could not find local addr2liner")
|
||||
}
|
||||
|
||||
func (f *fileAddr2Line) Close() error {
|
||||
if f.llvmSymbolizer != nil {
|
||||
f.llvmSymbolizer.rw.close()
|
||||
f.llvmSymbolizer = nil
|
||||
}
|
||||
if f.addr2liner != nil {
|
||||
f.addr2liner.rw.close()
|
||||
f.addr2liner = nil
|
||||
|
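An aside, not part of the diff: the Binutils/binrep split above keeps an immutable snapshot guarded by a mutex and swaps in a modified copy on every change, so a snapshot returned by get can be used without further locking. A minimal sketch of that copy-on-update pattern; config and holder are invented names, not pprof types:

```go
package main

import (
	"fmt"
	"sync"
)

// config is an immutable snapshot; readers never observe partial updates.
type config struct {
	fast  bool
	tools string
}

type holder struct {
	mu  sync.Mutex
	cur *config
}

// get returns the current snapshot, initializing it lazily.
func (h *holder) get() *config {
	h.mu.Lock()
	defer h.mu.Unlock()
	if h.cur == nil {
		h.cur = &config{}
	}
	return h.cur
}

// update copies the current snapshot, applies fn, and swaps the copy in.
func (h *holder) update(fn func(*config)) {
	h.mu.Lock()
	defer h.mu.Unlock()
	c := &config{}
	if h.cur != nil {
		*c = *h.cur
	}
	fn(c)
	h.cur = c
}

func main() {
	var h holder
	h.update(func(c *config) { c.fast = true })
	fmt.Println(h.get().fast) // true
}
```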
159  src/cmd/vendor/github.com/google/pprof/internal/binutils/binutils_test.go  (generated, vendored)
@ -15,7 +15,13 @@
|
||||
package binutils
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"math"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"testing"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
@ -37,7 +43,7 @@ func functionName(level int) (name string) {
|
||||
func TestAddr2Liner(t *testing.T) {
|
||||
const offset = 0x500
|
||||
|
||||
a := addr2Liner{&mockAddr2liner{}, offset, nil}
|
||||
a := addr2Liner{rw: &mockAddr2liner{}, base: offset}
|
||||
for i := 1; i < 8; i++ {
|
||||
addr := i*0x1000 + offset
|
||||
s, err := a.addrInfo(uint64(addr))
|
||||
@ -112,24 +118,23 @@ func (a *mockAddr2liner) close() {
|
||||
}
|
||||
|
||||
func TestAddr2LinerLookup(t *testing.T) {
|
||||
oddSizedMap := addr2LinerNM{
|
||||
m: []symbolInfo{
|
||||
{0x1000, "0x1000"},
|
||||
{0x2000, "0x2000"},
|
||||
{0x3000, "0x3000"},
|
||||
},
|
||||
const oddSizedData = `
|
||||
00001000 T 0x1000
|
||||
00002000 T 0x2000
|
||||
00003000 T 0x3000
|
||||
`
|
||||
const evenSizedData = `
|
||||
0000000000001000 T 0x1000
|
||||
0000000000002000 T 0x2000
|
||||
0000000000003000 T 0x3000
|
||||
0000000000004000 T 0x4000
|
||||
`
|
||||
for _, d := range []string{oddSizedData, evenSizedData} {
|
||||
a, err := parseAddr2LinerNM(0, bytes.NewBufferString(d))
|
||||
if err != nil {
|
||||
t.Errorf("nm parse error: %v", err)
|
||||
continue
|
||||
}
|
||||
evenSizedMap := addr2LinerNM{
|
||||
m: []symbolInfo{
|
||||
{0x1000, "0x1000"},
|
||||
{0x2000, "0x2000"},
|
||||
{0x3000, "0x3000"},
|
||||
{0x4000, "0x4000"},
|
||||
},
|
||||
}
|
||||
for _, a := range []*addr2LinerNM{
|
||||
&oddSizedMap, &evenSizedMap,
|
||||
} {
|
||||
for address, want := range map[uint64]string{
|
||||
0x1000: "0x1000",
|
||||
0x1001: "0x1000",
|
||||
@ -141,6 +146,11 @@ func TestAddr2LinerLookup(t *testing.T) {
|
||||
t.Errorf("%x: got %v, want %s", address, got, want)
|
||||
}
|
||||
}
|
||||
for _, unknown := range []uint64{0x0fff, 0x4001} {
|
||||
if got, _ := a.addrInfo(unknown); got != nil {
|
||||
t.Errorf("%x: got %v, want nil", unknown, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -150,3 +160,116 @@ func checkAddress(got []plugin.Frame, address uint64, want string) bool {
|
||||
}
|
||||
return got[0].Func == want
|
||||
}
|
||||
|
||||
func TestSetTools(t *testing.T) {
|
||||
// Test that multiple calls work.
|
||||
bu := &Binutils{}
|
||||
bu.SetTools("")
|
||||
bu.SetTools("")
|
||||
}
|
||||
|
||||
func TestSetFastSymbolization(t *testing.T) {
|
||||
// Test that multiple calls work.
|
||||
bu := &Binutils{}
|
||||
bu.SetFastSymbolization(true)
|
||||
bu.SetFastSymbolization(false)
|
||||
}
|
||||
|
||||
func skipUnlessLinuxAmd64(t *testing.T) {
|
||||
if runtime.GOOS != "linux" || runtime.GOARCH != "amd64" {
|
||||
t.Skip("Disasm only tested on x86-64 linux")
|
||||
}
|
||||
}
|
||||
|
||||
func TestDisasm(t *testing.T) {
|
||||
skipUnlessLinuxAmd64(t)
|
||||
bu := &Binutils{}
|
||||
insts, err := bu.Disasm(filepath.Join("testdata", "hello"), 0, math.MaxUint64)
|
||||
if err != nil {
|
||||
t.Fatalf("Disasm: unexpected error %v", err)
|
||||
}
|
||||
mainCount := 0
|
||||
for _, x := range insts {
|
||||
if x.Function == "main" {
|
||||
mainCount++
|
||||
}
|
||||
}
|
||||
if mainCount == 0 {
|
||||
t.Error("Disasm: found no main instructions")
|
||||
}
|
||||
}
|
||||
|
||||
func TestObjFile(t *testing.T) {
|
||||
skipUnlessLinuxAmd64(t)
|
||||
bu := &Binutils{}
|
||||
f, err := bu.Open(filepath.Join("testdata", "hello"), 0, math.MaxUint64, 0)
|
||||
if err != nil {
|
||||
t.Fatalf("Open: unexpected error %v", err)
|
||||
}
|
||||
defer f.Close()
|
||||
syms, err := f.Symbols(regexp.MustCompile("main"), 0)
|
||||
if err != nil {
|
||||
t.Fatalf("Symbols: unexpected error %v", err)
|
||||
}
|
||||
|
||||
find := func(name string) *plugin.Sym {
|
||||
for _, s := range syms {
|
||||
for _, n := range s.Name {
|
||||
if n == name {
|
||||
return s
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
m := find("main")
|
||||
if m == nil {
|
||||
t.Fatalf("Symbols: did not find main")
|
||||
}
|
||||
frames, err := f.SourceLine(m.Start)
|
||||
if err != nil {
|
||||
t.Fatalf("SourceLine: unexpected error %v", err)
|
||||
}
|
||||
expect := []plugin.Frame{
|
||||
{Func: "main", File: "/tmp/hello.c", Line: 3},
|
||||
}
|
||||
if !reflect.DeepEqual(frames, expect) {
|
||||
t.Fatalf("SourceLine for main: expect %v; got %v\n", expect, frames)
|
||||
}
|
||||
}
|
||||
|
||||
func TestLLVMSymbolizer(t *testing.T) {
|
||||
if runtime.GOOS != "linux" {
|
||||
t.Skip("testtdata/llvm-symbolizer has only been tested on linux")
|
||||
}
|
||||
|
||||
cmd := filepath.Join("testdata", "fake-llvm-symbolizer")
|
||||
symbolizer, err := newLLVMSymbolizer(cmd, "foo", 0)
|
||||
if err != nil {
|
||||
t.Fatalf("newLLVMSymbolizer: unexpected error %v", err)
|
||||
}
|
||||
defer symbolizer.rw.close()
|
||||
|
||||
for _, c := range []struct {
|
||||
addr uint64
|
||||
frames []plugin.Frame
|
||||
}{
|
||||
{0x10, []plugin.Frame{
|
||||
{Func: "Inlined_0x10", File: "foo.h", Line: 0},
|
||||
{Func: "Func_0x10", File: "foo.c", Line: 2},
|
||||
}},
|
||||
{0x20, []plugin.Frame{
|
||||
{Func: "Inlined_0x20", File: "foo.h", Line: 0},
|
||||
{Func: "Func_0x20", File: "foo.c", Line: 2},
|
||||
}},
|
||||
} {
|
||||
frames, err := symbolizer.addrInfo(c.addr)
|
||||
if err != nil {
|
||||
t.Errorf("LLVM: unexpected error %v", err)
|
||||
continue
|
||||
}
|
||||
if !reflect.DeepEqual(frames, c.frames) {
|
||||
t.Errorf("LLVM: expect %v; got %v\n", c.frames, frames)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
4  src/cmd/vendor/github.com/google/pprof/internal/binutils/disasm_test.go  (generated, vendored)
@ -73,7 +73,7 @@ func TestFindSymbols(t *testing.T) {
|
||||
|
||||
func checkSymbol(got []*plugin.Sym, want []plugin.Sym) error {
|
||||
if len(got) != len(want) {
|
||||
return fmt.Errorf("unexpected number of symbols %d (want %d)\n", len(got), len(want))
|
||||
return fmt.Errorf("unexpected number of symbols %d (want %d)", len(got), len(want))
|
||||
}
|
||||
|
||||
for i, g := range got {
|
||||
@ -134,8 +134,6 @@ func TestFunctionAssembly(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
const objdump = "testdata/wrapper/objdump"
|
||||
|
||||
for _, tc := range testcases {
|
||||
insts, err := disassemble([]byte(tc.asm))
|
||||
if err != nil {
|
||||
|
34  src/cmd/vendor/github.com/google/pprof/internal/binutils/testdata/fake-llvm-symbolizer  (generated, vendored, new executable file)

@@ -0,0 +1,34 @@
+#!/bin/sh
+#
+# Copyright 2014 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Fake llvm-symbolizer to use in tests
+
+set -f
+IFS=" "
+
+while read line; do
+  # line has form:
+  # filename 0xaddr
+  # Emit dummy output that matches llvm-symbolizer output format.
+  set -- $line
+  fname=$1
+  addr=$2
+  echo "Inlined_$addr"
+  echo "$fname.h"
+  echo "Func_$addr"
+  echo "$fname.c:2"
+  echo
+done
BIN  src/cmd/vendor/github.com/google/pprof/internal/binutils/testdata/hello  (generated, vendored, new executable file; binary file not shown)
54  src/cmd/vendor/github.com/google/pprof/internal/driver/cli.go  (generated, vendored)
@ -28,10 +28,13 @@ type source struct {
|
||||
ExecName string
|
||||
BuildID string
|
||||
Base []string
|
||||
Normalize bool
|
||||
|
||||
Seconds int
|
||||
Timeout int
|
||||
Symbolize string
|
||||
HTTPHostport string
|
||||
Comment string
|
||||
}
|
||||
|
||||
// Parse parses the command lines through the specified flags package
|
||||
@ -41,9 +44,11 @@ func parseFlags(o *plugin.Options) (*source, []string, error) {
|
||||
flag := o.Flagset
|
||||
// Comparisons.
|
||||
flagBase := flag.StringList("base", "", "Source for base profile for comparison")
|
||||
// Internal options.
|
||||
// Source options.
|
||||
flagSymbolize := flag.String("symbolize", "", "Options for profile symbolization")
|
||||
flagBuildID := flag.String("buildid", "", "Override build id for first mapping")
|
||||
flagTimeout := flag.Int("timeout", -1, "Timeout in seconds for fetching a profile")
|
||||
flagAddComment := flag.String("add_comment", "", "Annotation string to record in the profile")
|
||||
// CPU profile options
|
||||
flagSeconds := flag.Int("seconds", -1, "Length of time for dynamic profiles")
|
||||
// Heap profile options
|
||||
@ -57,7 +62,7 @@ func parseFlags(o *plugin.Options) (*source, []string, error) {
|
||||
flagMeanDelay := flag.Bool("mean_delay", false, "Display mean delay at each region")
|
||||
flagTools := flag.String("tools", os.Getenv("PPROF_TOOLS"), "Path for object tool pathnames")
|
||||
|
||||
flagTimeout := flag.Int("timeout", -1, "Timeout in seconds for fetching a profile")
|
||||
flagHTTP := flag.String("http", "", "Present interactive web based UI at the specified http host:port")
|
||||
|
||||
// Flags used during command processing
|
||||
installedFlags := installFlags(flag)
|
||||
@ -106,6 +111,9 @@ func parseFlags(o *plugin.Options) (*source, []string, error) {
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
if cmd != nil && *flagHTTP != "" {
|
||||
return nil, nil, fmt.Errorf("-http is not compatible with an output format on the command line")
|
||||
}
|
||||
|
||||
si := pprofVariables["sample_index"].value
|
||||
si = sampleIndex(flagTotalDelay, si, "delay", "-total_delay", o.UI)
|
||||
@ -128,6 +136,8 @@ func parseFlags(o *plugin.Options) (*source, []string, error) {
|
||||
Seconds: *flagSeconds,
|
||||
Timeout: *flagTimeout,
|
||||
Symbolize: *flagSymbolize,
|
||||
HTTPHostport: *flagHTTP,
|
||||
Comment: *flagAddComment,
|
||||
}
|
||||
|
||||
for _, s := range *flagBase {
|
||||
@ -136,6 +146,12 @@ func parseFlags(o *plugin.Options) (*source, []string, error) {
|
||||
}
|
||||
}
|
||||
|
||||
normalize := pprofVariables["normalize"].boolValue()
|
||||
if normalize && len(source.Base) == 0 {
|
||||
return nil, nil, fmt.Errorf("Must have base profile to normalize by")
|
||||
}
|
||||
source.Normalize = normalize
|
||||
|
||||
if bu, ok := o.Obj.(*binutils.Binutils); ok {
|
||||
bu.SetTools(*flagTools)
|
||||
}
|
||||
@ -240,13 +256,33 @@ func outputFormat(bcmd map[string]*bool, acmd map[string]*string) (cmd []string,
|
||||
return cmd, nil
|
||||
}
|
||||
|
||||
var usageMsgHdr = "usage: pprof [options] [-base source] [binary] <source> ...\n"
|
||||
var usageMsgHdr = `usage:
|
||||
|
||||
Produce output in the specified format.
|
||||
|
||||
pprof <format> [options] [binary] <source> ...
|
||||
|
||||
Omit the format to get an interactive shell whose commands can be used
|
||||
to generate various views of a profile
|
||||
|
||||
pprof [options] [binary] <source> ...
|
||||
|
||||
Omit the format and provide the "-http" flag to get an interactive web
|
||||
interface at the specified host:port that can be used to navigate through
|
||||
various views of a profile.
|
||||
|
||||
pprof -http [host]:[port] [options] [binary] <source> ...
|
||||
|
||||
Details:
|
||||
`
|
||||
|
||||
var usageMsgSrc = "\n\n" +
|
||||
" Source options:\n" +
|
||||
" -seconds Duration for time-based profile collection\n" +
|
||||
" -timeout Timeout in seconds for profile collection\n" +
|
||||
" -buildid Override build id for main binary\n" +
|
||||
" -add_comment Free-form annotation to add to the profile\n" +
|
||||
" Displayed on some reports or with pprof -comments\n" +
|
||||
" -base source Source of profile to use as baseline\n" +
|
||||
" profile.pb.gz Profile in compressed protobuf format\n" +
|
||||
" legacy_profile Profile in legacy pprof format\n" +
|
||||
@ -261,8 +297,20 @@ var usageMsgSrc = "\n\n" +
|
||||
|
||||
var usageMsgVars = "\n\n" +
|
||||
" Misc options:\n" +
|
||||
" -http Provide web based interface at host:port.\n" +
|
||||
" Host is optional and 'localhost' by default.\n" +
|
||||
" Port is optional and a randomly available port by default.\n" +
|
||||
" -tools Search path for object tools\n" +
|
||||
"\n" +
|
||||
" Legacy convenience options:\n" +
|
||||
" -inuse_space Same as -sample_index=inuse_space\n" +
|
||||
" -inuse_objects Same as -sample_index=inuse_objects\n" +
|
||||
" -alloc_space Same as -sample_index=alloc_space\n" +
|
||||
" -alloc_objects Same as -sample_index=alloc_objects\n" +
|
||||
" -total_delay Same as -sample_index=delay\n" +
|
||||
" -contentions Same as -sample_index=contentions\n" +
|
||||
" -mean_delay Same as -mean -sample_index=delay\n" +
|
||||
"\n" +
|
||||
" Environment Variables:\n" +
|
||||
" PPROF_TMPDIR Location for saved profiles (default $HOME/pprof)\n" +
|
||||
" PPROF_TOOLS Search path for object-level tools\n" +
|
||||
|
21  src/cmd/vendor/github.com/google/pprof/internal/driver/commands.go  (generated, vendored)
@ -159,7 +159,7 @@ var pprofVariables = variables{
|
||||
"Scale the sample values to this unit.",
|
||||
"For time-based profiles, use seconds, milliseconds, nanoseconds, etc.",
|
||||
"For memory profiles, use megabytes, kilobytes, bytes, etc.",
|
||||
" auto will scale each value independently to the most natural unit.")},
|
||||
"Using auto will scale each value independently to the most natural unit.")},
|
||||
"compact_labels": &variable{boolKind, "f", "", "Show minimal headers"},
|
||||
"source_path": &variable{stringKind, "", "", "Search path for source files"},
|
||||
|
||||
@ -195,11 +195,15 @@ var pprofVariables = variables{
|
||||
"If set, only show nodes that match this location.",
|
||||
"Matching includes the function name, filename or object name.")},
|
||||
"tagfocus": &variable{stringKind, "", "", helpText(
|
||||
"Restrict to samples with tags in range or matched by regexp",
|
||||
"Discard samples that do not include a node with a tag matching this regexp.")},
|
||||
"Restricts to samples with tags in range or matched by regexp",
|
||||
"Use name=value syntax to limit the matching to a specific tag.",
|
||||
"Numeric tag filter examples: 1kb, 1kb:10kb, memory=32mb:",
|
||||
"String tag filter examples: foo, foo.*bar, mytag=foo.*bar")},
|
||||
"tagignore": &variable{stringKind, "", "", helpText(
|
||||
"Discard samples with tags in range or matched by regexp",
|
||||
"Discard samples that do include a node with a tag matching this regexp.")},
|
||||
"Use name=value syntax to limit the matching to a specific tag.",
|
||||
"Numeric tag filter examples: 1kb, 1kb:10kb, memory=32mb:",
|
||||
"String tag filter examples: foo, foo.*bar, mytag=foo.*bar")},
|
||||
"tagshow": &variable{stringKind, "", "", helpText(
|
||||
"Only consider tags matching this regexp",
|
||||
"Discard tags that do not match this regexp")},
|
||||
@ -218,6 +222,8 @@ var pprofVariables = variables{
|
||||
"Sample value to report (0-based index or name)",
|
||||
"Profiles contain multiple values per sample.",
|
||||
"Use sample_index=i to select the ith value (starting at 0).")},
|
||||
"normalize": &variable{boolKind, "f", "", helpText(
|
||||
"Scales profile based on the base profile.")},
|
||||
|
||||
// Data sorting criteria
|
||||
"flat": &variable{boolKind, "t", "cumulative", helpText("Sort entries based on own weight")},
|
||||
@ -227,9 +233,6 @@ var pprofVariables = variables{
|
||||
"functions": &variable{boolKind, "t", "granularity", helpText(
|
||||
"Aggregate at the function level.",
|
||||
"Takes into account the filename/lineno where the function was defined.")},
|
||||
"functionnameonly": &variable{boolKind, "f", "granularity", helpText(
|
||||
"Aggregate at the function level.",
|
||||
"Ignores the filename/lineno where the function was defined.")},
|
||||
"files": &variable{boolKind, "f", "granularity", "Aggregate at the file level."},
|
||||
"lines": &variable{boolKind, "f", "granularity", "Aggregate at the source code line level."},
|
||||
"addresses": &variable{boolKind, "f", "granularity", helpText(
|
||||
@ -266,7 +269,7 @@ func usage(commandLine bool) string {
|
||||
|
||||
var help string
|
||||
if commandLine {
|
||||
help = " Output formats (select only one):\n"
|
||||
help = " Output formats (select at most one):\n"
|
||||
} else {
|
||||
help = " Commands:\n"
|
||||
commands = append(commands, fmtHelp("o/options", "List options and their current values"))
|
||||
@ -471,7 +474,7 @@ func (vars variables) set(name, value string) error {
|
||||
case boolKind:
|
||||
var b bool
|
||||
if b, err = stringToBool(value); err == nil {
|
||||
if v.group != "" && b == false {
|
||||
if v.group != "" && !b {
|
||||
err = fmt.Errorf("%q can only be set to true", name)
|
||||
}
|
||||
}
|
||||
|
71  src/cmd/vendor/github.com/google/pprof/internal/driver/driver.go  (generated, vendored)
@ -23,6 +23,7 @@ import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"strings"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
"github.com/google/pprof/internal/report"
|
||||
@ -52,24 +53,30 @@ func PProf(eo *plugin.Options) error {
|
||||
return generateReport(p, cmd, pprofVariables, o)
|
||||
}
|
||||
|
||||
if src.HTTPHostport != "" {
|
||||
return serveWebInterface(src.HTTPHostport, p, o)
|
||||
}
|
||||
return interactive(p, o)
|
||||
}
|
||||
|
||||
func generateReport(p *profile.Profile, cmd []string, vars variables, o *plugin.Options) error {
|
||||
func generateRawReport(p *profile.Profile, cmd []string, vars variables, o *plugin.Options) (*command, *report.Report, error) {
|
||||
p = p.Copy() // Prevent modification to the incoming profile.
|
||||
|
||||
// Identify units of numeric tags in profile.
|
||||
numLabelUnits := identifyNumLabelUnits(p, o.UI)
|
||||
|
||||
vars = applyCommandOverrides(cmd, vars)
|
||||
|
||||
// Delay focus after configuring report to get percentages on all samples.
|
||||
relative := vars["relative_percentages"].boolValue()
|
||||
if relative {
|
||||
if err := applyFocus(p, vars, o.UI); err != nil {
|
||||
return err
|
||||
if err := applyFocus(p, numLabelUnits, vars, o.UI); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
}
|
||||
ropt, err := reportOptions(p, vars)
|
||||
ropt, err := reportOptions(p, numLabelUnits, vars)
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, nil, err
|
||||
}
|
||||
c := pprofCommands[cmd[0]]
|
||||
if c == nil {
|
||||
@ -79,18 +86,27 @@ func generateReport(p *profile.Profile, cmd []string, vars variables, o *plugin.
|
||||
if len(cmd) == 2 {
|
||||
s, err := regexp.Compile(cmd[1])
|
||||
if err != nil {
|
||||
return fmt.Errorf("parsing argument regexp %s: %v", cmd[1], err)
|
||||
return nil, nil, fmt.Errorf("parsing argument regexp %s: %v", cmd[1], err)
|
||||
}
|
||||
ropt.Symbol = s
|
||||
}
|
||||
|
||||
rpt := report.New(p, ropt)
|
||||
if !relative {
|
||||
if err := applyFocus(p, vars, o.UI); err != nil {
|
||||
return err
|
||||
if err := applyFocus(p, numLabelUnits, vars, o.UI); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
}
|
||||
if err := aggregate(p, vars); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
return c, rpt, nil
|
||||
}
|
||||
|
||||
func generateReport(p *profile.Profile, cmd []string, vars variables, o *plugin.Options) error {
|
||||
c, rpt, err := generateRawReport(p, cmd, vars, o)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@ -160,20 +176,20 @@ func applyCommandOverrides(cmd []string, v variables) variables {
|
||||
v.set("nodecount", "80")
|
||||
}
|
||||
}
|
||||
if trim == false {
|
||||
if !trim {
|
||||
v.set("nodecount", "0")
|
||||
v.set("nodefraction", "0")
|
||||
v.set("edgefraction", "0")
|
||||
}
|
||||
if focus == false {
|
||||
if !focus {
|
||||
v.set("focus", "")
|
||||
v.set("ignore", "")
|
||||
}
|
||||
if tagfocus == false {
|
||||
if !tagfocus {
|
||||
v.set("tagfocus", "")
|
||||
v.set("tagignore", "")
|
||||
}
|
||||
if hide == false {
|
||||
if !hide {
|
||||
v.set("hide", "")
|
||||
v.set("show", "")
|
||||
}
|
||||
@ -196,25 +212,20 @@ func aggregate(prof *profile.Profile, v variables) error {
|
||||
case v["functions"].boolValue():
|
||||
inlines = true
|
||||
function = true
|
||||
filename = true
|
||||
case v["noinlines"].boolValue():
|
||||
function = true
|
||||
filename = true
|
||||
case v["addressnoinlines"].boolValue():
|
||||
function = true
|
||||
filename = true
|
||||
linenumber = true
|
||||
address = true
|
||||
case v["functionnameonly"].boolValue():
|
||||
inlines = true
|
||||
function = true
|
||||
default:
|
||||
return fmt.Errorf("unexpected granularity")
|
||||
}
|
||||
return prof.Aggregate(inlines, function, filename, linenumber, address)
|
||||
}
|
||||
|
||||
func reportOptions(p *profile.Profile, vars variables) (*report.Options, error) {
|
||||
func reportOptions(p *profile.Profile, numLabelUnits map[string]string, vars variables) (*report.Options, error) {
|
||||
si, mean := vars["sample_index"].value, vars["mean"].boolValue()
|
||||
value, meanDiv, sample, err := sampleFormat(p, si, mean)
|
||||
if err != nil {
|
||||
@ -230,6 +241,14 @@ func reportOptions(p *profile.Profile, vars variables) (*report.Options, error)
|
||||
return nil, fmt.Errorf("zero divisor specified")
|
||||
}
|
||||
|
||||
var filters []string
|
||||
for _, k := range []string{"focus", "ignore", "hide", "show", "tagfocus", "tagignore", "tagshow", "taghide"} {
|
||||
v := vars[k].value
|
||||
if v != "" {
|
||||
filters = append(filters, k+"="+v)
|
||||
}
|
||||
}
|
||||
|
||||
ropt := &report.Options{
|
||||
CumSort: vars["cum"].boolValue(),
|
||||
CallTree: vars["call_tree"].boolValue(),
|
||||
@ -243,6 +262,9 @@ func reportOptions(p *profile.Profile, vars variables) (*report.Options, error)
|
||||
NodeFraction: vars["nodefraction"].floatValue(),
|
||||
EdgeFraction: vars["edgefraction"].floatValue(),
|
||||
|
||||
ActiveFilters: filters,
|
||||
NumLabelUnits: numLabelUnits,
|
||||
|
||||
SampleValue: value,
|
||||
SampleMeanDivisor: meanDiv,
|
||||
SampleType: stype,
|
||||
@ -260,6 +282,19 @@ func reportOptions(p *profile.Profile, vars variables) (*report.Options, error)
|
||||
return ropt, nil
|
||||
}
|
||||
|
||||
// identifyNumLabelUnits returns a map of numeric label keys to the units
|
||||
// associated with those keys.
|
||||
func identifyNumLabelUnits(p *profile.Profile, ui plugin.UI) map[string]string {
|
||||
numLabelUnits, ignoredUnits := p.NumLabelUnits()
|
||||
|
||||
// Print errors for tags with multiple units associated with
|
||||
// a single key.
|
||||
for k, units := range ignoredUnits {
|
||||
ui.PrintErr(fmt.Sprintf("For tag %s used unit %s, also encountered unit(s) %s", k, numLabelUnits[k], strings.Join(units, ", ")))
|
||||
}
|
||||
return numLabelUnits
|
||||
}
|
||||
|
||||
type sampleValueFunc func([]int64) int64
|
||||
|
||||
// sampleFormat returns a function to extract values out of a profile.Sample,
|
||||
|
57  src/cmd/vendor/github.com/google/pprof/internal/driver/driver_focus.go  (generated, vendored)
@ -28,13 +28,13 @@ import (
|
||||
var tagFilterRangeRx = regexp.MustCompile("([[:digit:]]+)([[:alpha:]]+)")
|
||||
|
||||
// applyFocus filters samples based on the focus/ignore options
|
||||
func applyFocus(prof *profile.Profile, v variables, ui plugin.UI) error {
|
||||
func applyFocus(prof *profile.Profile, numLabelUnits map[string]string, v variables, ui plugin.UI) error {
|
||||
focus, err := compileRegexOption("focus", v["focus"].value, nil)
|
||||
ignore, err := compileRegexOption("ignore", v["ignore"].value, err)
|
||||
hide, err := compileRegexOption("hide", v["hide"].value, err)
|
||||
show, err := compileRegexOption("show", v["show"].value, err)
|
||||
tagfocus, err := compileTagFilter("tagfocus", v["tagfocus"].value, ui, err)
|
||||
tagignore, err := compileTagFilter("tagignore", v["tagignore"].value, ui, err)
|
||||
tagfocus, err := compileTagFilter("tagfocus", v["tagfocus"].value, numLabelUnits, ui, err)
|
||||
tagignore, err := compileTagFilter("tagignore", v["tagignore"].value, numLabelUnits, ui, err)
|
||||
prunefrom, err := compileRegexOption("prune_from", v["prune_from"].value, err)
|
||||
if err != nil {
|
||||
return err
|
||||
@ -59,7 +59,7 @@ func applyFocus(prof *profile.Profile, v variables, ui plugin.UI) error {
|
||||
if prunefrom != nil {
|
||||
prof.PruneFrom(prunefrom)
|
||||
}
|
||||
return nil
|
||||
return err
|
||||
}
|
||||
|
||||
func compileRegexOption(name, value string, err error) (*regexp.Regexp, error) {
|
||||
@ -73,23 +73,49 @@ func compileRegexOption(name, value string, err error) (*regexp.Regexp, error) {
|
||||
return rx, nil
|
||||
}
|
||||
|
||||
func compileTagFilter(name, value string, ui plugin.UI, err error) (func(*profile.Sample) bool, error) {
|
||||
func compileTagFilter(name, value string, numLabelUnits map[string]string, ui plugin.UI, err error) (func(*profile.Sample) bool, error) {
|
||||
if value == "" || err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
tagValuePair := strings.SplitN(value, "=", 2)
|
||||
var wantKey string
|
||||
if len(tagValuePair) == 2 {
|
||||
wantKey = tagValuePair[0]
|
||||
value = tagValuePair[1]
|
||||
}
|
||||
|
||||
if numFilter := parseTagFilterRange(value); numFilter != nil {
|
||||
ui.PrintErr(name, ":Interpreted '", value, "' as range, not regexp")
|
||||
return func(s *profile.Sample) bool {
|
||||
for key, vals := range s.NumLabel {
|
||||
labelFilter := func(vals []int64, unit string) bool {
|
||||
for _, val := range vals {
|
||||
if numFilter(val, key) {
|
||||
if numFilter(val, unit) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
numLabelUnit := func(key string) string {
|
||||
return numLabelUnits[key]
|
||||
}
|
||||
if wantKey == "" {
|
||||
return func(s *profile.Sample) bool {
|
||||
for key, vals := range s.NumLabel {
|
||||
if labelFilter(vals, numLabelUnit(key)) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}, nil
|
||||
}
|
||||
return func(s *profile.Sample) bool {
|
||||
if vals, ok := s.NumLabel[wantKey]; ok {
|
||||
return labelFilter(vals, numLabelUnit(wantKey))
|
||||
}
|
||||
return false
|
||||
}, nil
|
||||
}
|
||||
|
||||
var rfx []*regexp.Regexp
|
||||
for _, tagf := range strings.Split(value, ",") {
|
||||
fx, err := regexp.Compile(tagf)
|
||||
@ -98,11 +124,13 @@ func compileTagFilter(name, value string, ui plugin.UI, err error) (func(*profil
|
||||
}
|
||||
rfx = append(rfx, fx)
|
||||
}
|
||||
if wantKey == "" {
|
||||
return func(s *profile.Sample) bool {
|
||||
matchedrx:
|
||||
for _, rx := range rfx {
|
||||
for key, vals := range s.Label {
|
||||
for _, val := range vals {
|
||||
// TODO: Match against val, not key:val in future
|
||||
if rx.MatchString(key + ":" + val) {
|
||||
continue matchedrx
|
||||
}
|
||||
@ -113,6 +141,19 @@ func compileTagFilter(name, value string, ui plugin.UI, err error) (func(*profil
|
||||
return true
|
||||
}, nil
|
||||
}
|
||||
return func(s *profile.Sample) bool {
|
||||
if vals, ok := s.Label[wantKey]; ok {
|
||||
for _, rx := range rfx {
|
||||
for _, val := range vals {
|
||||
if rx.MatchString(val) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return false
|
||||
}, nil
|
||||
}
|
||||
|
||||
// parseTagFilterRange returns a function to checks if a value is
|
||||
// contained on the range described by a string. It can recognize
|
||||
|
545  src/cmd/vendor/github.com/google/pprof/internal/driver/driver_test.go  (generated, vendored)
@ -16,9 +16,13 @@ package driver
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"flag"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
_ "net/http/pprof"
|
||||
"os"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strconv"
|
||||
@ -32,52 +36,61 @@ import (
|
||||
"github.com/google/pprof/profile"
|
||||
)
|
||||
|
||||
var updateFlag = flag.Bool("update", false, "Update the golden files")
|
||||
|
||||
func TestParse(t *testing.T) {
|
||||
// Override weblist command to collect output in buffer
|
||||
pprofCommands["weblist"].postProcess = nil
|
||||
|
||||
// Our mockObjTool.Open will always return success, causing
|
||||
// driver.locateBinaries to "find" the binaries below in a non-existant
|
||||
// driver.locateBinaries to "find" the binaries below in a non-existent
|
||||
// directory. As a workaround, point the search path to the fake
|
||||
// directory containing out fake binaries.
|
||||
savePath := os.Getenv("PPROF_BINARY_PATH")
|
||||
os.Setenv("PPROF_BINARY_PATH", "/path/to")
|
||||
defer os.Setenv("PPROF_BINARY_PATH", savePath)
|
||||
|
||||
testcase := []struct {
|
||||
flags, source string
|
||||
}{
|
||||
{"text,functions,flat", "cpu"},
|
||||
{"tree,addresses,flat,nodecount=4", "cpusmall"},
|
||||
{"text,functions,flat", "unknown"},
|
||||
{"text,functions,flat,nodecount=5,call_tree", "unknown"},
|
||||
{"text,alloc_objects,flat", "heap_alloc"},
|
||||
{"text,files,flat", "heap"},
|
||||
{"text,files,flat,focus=[12]00,taghide=[X3]00", "heap"},
|
||||
{"text,inuse_objects,flat", "heap"},
|
||||
{"text,lines,cum,hide=line[X3]0", "cpu"},
|
||||
{"text,lines,cum,show=[12]00", "cpu"},
|
||||
{"text,lines,cum,hide=line[X3]0,focus=[12]00", "cpu"},
|
||||
{"topproto,lines,cum,hide=mangled[X3]0", "cpu"},
|
||||
{"tree,lines,cum,focus=[24]00", "heap"},
|
||||
{"tree,relative_percentages,cum,focus=[24]00", "heap"},
|
||||
{"callgrind", "cpu"},
|
||||
{"callgrind,call_tree", "cpu"},
|
||||
{"callgrind", "heap"},
|
||||
{"dot,functions,flat", "cpu"},
|
||||
{"dot,functions,flat,call_tree", "cpu"},
|
||||
{"dot,lines,flat,focus=[12]00", "heap"},
|
||||
{"dot,unit=minimum", "heap_sizetags"},
|
||||
{"dot,addresses,flat,ignore=[X3]002,focus=[X1]000", "contention"},
|
||||
{"dot,files,cum", "contention"},
|
||||
{"comments", "cpu"},
|
||||
{"comments,add_comment=some-comment", "cpu"},
|
||||
{"comments", "heap"},
|
||||
{"tags", "cpu"},
|
||||
{"tags,tagignore=tag[13],tagfocus=key[12]", "cpu"},
|
||||
{"tags", "heap"},
|
||||
{"tags,unit=bytes", "heap"},
|
||||
{"traces", "cpu"},
|
||||
{"traces", "heap_tags"},
|
||||
{"dot,alloc_space,flat,focus=[234]00", "heap_alloc"},
|
||||
{"dot,alloc_space,flat,tagshow=[2]00", "heap_alloc"},
|
||||
{"dot,alloc_space,flat,hide=line.*1?23?", "heap_alloc"},
|
||||
{"dot,inuse_space,flat,tagfocus=1mb:2gb", "heap"},
|
||||
{"dot,inuse_space,flat,tagfocus=30kb:,tagignore=1mb:2mb", "heap"},
|
||||
{"disasm=line[13],addresses,flat", "cpu"},
|
||||
{"peek=line.*01", "cpu"},
|
||||
{"weblist=line[13],addresses,flat", "cpu"},
|
||||
{"tags,tagfocus=400kb:", "heap_request"},
|
||||
}
|
||||
|
||||
baseVars := pprofVariables
|
||||
@ -99,6 +112,7 @@ func TestParse(t *testing.T) {
|
||||
if err != nil {
|
||||
t.Errorf("cannot create tempfile: %v", err)
|
||||
}
|
||||
defer os.Remove(protoTempFile.Name())
|
||||
defer protoTempFile.Close()
|
||||
f.strings["output"] = protoTempFile.Name()
|
||||
|
||||
@ -124,6 +138,7 @@ func TestParse(t *testing.T) {
|
||||
if err != nil {
|
||||
t.Errorf("cannot create tempfile: %v", err)
|
||||
}
|
||||
defer os.Remove(outputTempFile.Name())
|
||||
defer outputTempFile.Close()
|
||||
f.strings["output"] = outputTempFile.Name()
|
||||
f.args = []string{protoTempFile.Name()}
|
||||
@ -140,6 +155,8 @@ func TestParse(t *testing.T) {
|
||||
addFlags(&f, flags[:1])
|
||||
solution = solutionFilename(tc.source, &f)
|
||||
}
|
||||
// The add_comment flag is not idempotent so only apply it on the first run.
|
||||
delete(f.strings, "add_comment")
|
||||
|
||||
// Second pprof invocation to read the profile from profile.proto
|
||||
// and generate a report.
|
||||
@ -180,6 +197,12 @@ func TestParse(t *testing.T) {
|
||||
t.Fatalf("diff %s %v", solution, err)
|
||||
}
|
||||
t.Errorf("%s\n%s\n", solution, d)
|
||||
if *updateFlag {
|
||||
err := ioutil.WriteFile(solution, b, 0644)
|
||||
if err != nil {
|
||||
t.Errorf("failed to update the solution file %q: %v", solution, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -214,14 +237,19 @@ func addFlags(f *testFlags, flags []string) {
}
}

func testSourceURL(port int) string {
return fmt.Sprintf("http://%s/", net.JoinHostPort(testSourceAddress, strconv.Itoa(port)))
}

// solutionFilename returns the name of the solution file for the test
func solutionFilename(source string, f *testFlags) string {
name := []string{"pprof", strings.TrimPrefix(source, "http://host:8000/")}
name := []string{"pprof", strings.TrimPrefix(source, testSourceURL(8000))}
name = addString(name, f, []string{"flat", "cum"})
name = addString(name, f, []string{"functions", "files", "lines", "addresses"})
name = addString(name, f, []string{"inuse_space", "inuse_objects", "alloc_space", "alloc_objects"})
name = addString(name, f, []string{"relative_percentages"})
name = addString(name, f, []string{"seconds"})
name = addString(name, f, []string{"call_tree"})
name = addString(name, f, []string{"text", "tree", "callgrind", "dot", "svg", "tags", "dot", "traces", "disasm", "peek", "weblist", "topproto", "comments"})
if f.strings["focus"] != "" || f.strings["tagfocus"] != "" {
name = append(name, "focus")
@ -252,6 +280,7 @@ type testFlags struct {
|
||||
floats map[string]float64
|
||||
strings map[string]string
|
||||
args []string
|
||||
stringLists map[string][]*string
|
||||
}
|
||||
|
||||
func (testFlags) ExtraUsage() string { return "" }
|
||||
@ -317,6 +346,9 @@ func (f testFlags) StringVar(p *string, s, d, c string) {
|
||||
}
|
||||
|
||||
func (f testFlags) StringList(s, d, c string) *[]*string {
|
||||
if t, ok := f.stringLists[s]; ok {
|
||||
return &t
|
||||
}
|
||||
return &[]*string{}
|
||||
}
|
||||
|
||||
@ -345,9 +377,6 @@ func baseFlags() testFlags {
|
||||
}
|
||||
}
|
||||
|
||||
type testProfile struct {
|
||||
}
|
||||
|
||||
const testStart = 0x1000
|
||||
const testOffset = 0x5000
|
||||
|
||||
@ -355,7 +384,6 @@ type testFetcher struct{}
|
||||
|
||||
func (testFetcher) Fetch(s string, d, t time.Duration) (*profile.Profile, string, error) {
|
||||
var p *profile.Profile
|
||||
s = strings.TrimPrefix(s, "http://host:8000/")
|
||||
switch s {
|
||||
case "cpu", "unknown":
|
||||
p = cpuProfile()
|
||||
@ -369,21 +397,36 @@ func (testFetcher) Fetch(s string, d, t time.Duration) (*profile.Profile, string
|
||||
{Type: "alloc_objects", Unit: "count"},
|
||||
{Type: "alloc_space", Unit: "bytes"},
|
||||
}
|
||||
case "heap_request":
|
||||
p = heapProfile()
|
||||
for _, s := range p.Sample {
|
||||
s.NumLabel["request"] = s.NumLabel["bytes"]
|
||||
}
|
||||
case "heap_sizetags":
|
||||
p = heapProfile()
|
||||
tags := []int64{2, 4, 8, 16, 32, 64, 128, 256}
|
||||
for _, s := range p.Sample {
|
||||
numValues := append(s.NumLabel["bytes"], tags...)
|
||||
s.NumLabel["bytes"] = numValues
|
||||
}
|
||||
case "heap_tags":
|
||||
p = heapProfile()
|
||||
for i := 0; i < len(p.Sample); i += 2 {
|
||||
s := p.Sample[i]
|
||||
if s.Label == nil {
|
||||
s.Label = make(map[string][]string)
|
||||
}
|
||||
s.NumLabel["request"] = s.NumLabel["bytes"]
|
||||
s.Label["key1"] = []string{"tag"}
|
||||
}
|
||||
case "contention":
|
||||
p = contentionProfile()
|
||||
case "symbolz":
|
||||
p = symzProfile()
|
||||
case "http://host2/symbolz":
|
||||
p = symzProfile()
|
||||
p.Mapping[0].Start += testOffset
|
||||
p.Mapping[0].Limit += testOffset
|
||||
for i := range p.Location {
|
||||
p.Location[i].Address += testOffset
|
||||
}
|
||||
default:
|
||||
return nil, "", fmt.Errorf("unexpected source: %s", s)
|
||||
}
|
||||
return p, s, nil
|
||||
return p, testSourceURL(8000) + s, nil
|
||||
}
|
||||
|
||||
type testSymbolizer struct{}
|
||||
@ -406,18 +449,8 @@ func (testSymbolizeDemangler) Symbolize(_ string, _ plugin.MappingSources, p *pr
|
||||
func testFetchSymbols(source, post string) ([]byte, error) {
|
||||
var buf bytes.Buffer
|
||||
|
||||
if source == "http://host2/symbolz" {
|
||||
for _, address := range strings.Split(post, "+") {
|
||||
a, _ := strconv.ParseInt(address, 0, 64)
|
||||
fmt.Fprintf(&buf, "%v\t", address)
|
||||
if a-testStart < testOffset {
|
||||
fmt.Fprintf(&buf, "wrong_source_%v_", address)
|
||||
continue
|
||||
}
|
||||
fmt.Fprintf(&buf, "%#x\n", a-testStart-testOffset)
|
||||
}
|
||||
return buf.Bytes(), nil
|
||||
}
|
||||
switch source {
|
||||
case testSourceURL(8000) + "symbolz":
|
||||
for _, address := range strings.Split(post, "+") {
|
||||
a, _ := strconv.ParseInt(address, 0, 64)
|
||||
fmt.Fprintf(&buf, "%v\t", address)
|
||||
@ -428,12 +461,26 @@ func testFetchSymbols(source, post string) ([]byte, error) {
|
||||
fmt.Fprintf(&buf, "%#x\n", a-testStart)
|
||||
}
|
||||
return buf.Bytes(), nil
|
||||
case testSourceURL(8001) + "symbolz":
|
||||
for _, address := range strings.Split(post, "+") {
|
||||
a, _ := strconv.ParseInt(address, 0, 64)
|
||||
fmt.Fprintf(&buf, "%v\t", address)
|
||||
if a-testStart < testOffset {
|
||||
fmt.Fprintf(&buf, "wrong_source_%v_", address)
|
||||
continue
|
||||
}
|
||||
fmt.Fprintf(&buf, "%#x\n", a-testStart-testOffset)
|
||||
}
|
||||
return buf.Bytes(), nil
|
||||
default:
|
||||
return nil, fmt.Errorf("unexpected source: %s", source)
|
||||
}
|
||||
}
|
||||
|
||||
type testSymbolzSymbolizer struct{}

func (testSymbolzSymbolizer) Symbolize(variables string, sources plugin.MappingSources, p *profile.Profile) error {
return symbolz.Symbolize(sources, testFetchSymbols, p, nil)
return symbolz.Symbolize(p, false, sources, testFetchSymbols, nil)
}

func fakeDemangler(name string) string {
@ -543,32 +590,32 @@ func cpuProfile() *profile.Profile {
|
||||
Location: []*profile.Location{cpuL[0], cpuL[1], cpuL[2]},
|
||||
Value: []int64{1000, 1000},
|
||||
Label: map[string][]string{
|
||||
"key1": []string{"tag1"},
|
||||
"key2": []string{"tag1"},
|
||||
"key1": {"tag1"},
|
||||
"key2": {"tag1"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*profile.Location{cpuL[0], cpuL[3]},
|
||||
Value: []int64{100, 100},
|
||||
Label: map[string][]string{
|
||||
"key1": []string{"tag2"},
|
||||
"key3": []string{"tag2"},
|
||||
"key1": {"tag2"},
|
||||
"key3": {"tag2"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*profile.Location{cpuL[1], cpuL[4]},
|
||||
Value: []int64{10, 10},
|
||||
Label: map[string][]string{
|
||||
"key1": []string{"tag3"},
|
||||
"key2": []string{"tag2"},
|
||||
"key1": {"tag3"},
|
||||
"key2": {"tag2"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*profile.Location{cpuL[2]},
|
||||
Value: []int64{10, 10},
|
||||
Label: map[string][]string{
|
||||
"key1": []string{"tag4"},
|
||||
"key2": []string{"tag1"},
|
||||
"key1": {"tag4"},
|
||||
"key2": {"tag1"},
|
||||
},
|
||||
},
|
||||
},
|
||||
@ -744,30 +791,22 @@ func heapProfile() *profile.Profile {
|
||||
{
|
||||
Location: []*profile.Location{heapL[0], heapL[1], heapL[2]},
|
||||
Value: []int64{10, 1024000},
|
||||
NumLabel: map[string][]int64{
|
||||
"bytes": []int64{102400},
|
||||
},
|
||||
NumLabel: map[string][]int64{"bytes": {102400}},
|
||||
},
|
||||
{
|
||||
Location: []*profile.Location{heapL[0], heapL[3]},
|
||||
Value: []int64{20, 4096000},
|
||||
NumLabel: map[string][]int64{
|
||||
"bytes": []int64{204800},
|
||||
},
|
||||
NumLabel: map[string][]int64{"bytes": {204800}},
|
||||
},
|
||||
{
|
||||
Location: []*profile.Location{heapL[1], heapL[4]},
|
||||
Value: []int64{40, 65536000},
|
||||
NumLabel: map[string][]int64{
|
||||
"bytes": []int64{1638400},
|
||||
},
|
||||
NumLabel: map[string][]int64{"bytes": {1638400}},
|
||||
},
|
||||
{
|
||||
Location: []*profile.Location{heapL[2]},
|
||||
Value: []int64{80, 32768000},
|
||||
NumLabel: map[string][]int64{
|
||||
"bytes": []int64{409600},
|
||||
},
|
||||
NumLabel: map[string][]int64{"bytes": {409600}},
|
||||
},
|
||||
},
|
||||
DropFrames: ".*operator new.*|malloc",
|
||||
@ -950,31 +989,394 @@ func TestAutoComplete(t *testing.T) {
|
||||
|
||||
func TestTagFilter(t *testing.T) {
|
||||
var tagFilterTests = []struct {
|
||||
name, value string
|
||||
desc, value string
|
||||
tags map[string][]string
|
||||
want bool
|
||||
}{
|
||||
{"test1", "tag2", map[string][]string{"value1": {"tag1", "tag2"}}, true},
|
||||
{"test2", "tag3", map[string][]string{"value1": {"tag1", "tag2"}}, false},
|
||||
{"test3", "tag1,tag3", map[string][]string{"value1": {"tag1", "tag2"}, "value2": {"tag3"}}, true},
|
||||
{"test4", "t..[12],t..3", map[string][]string{"value1": {"tag1", "tag2"}, "value2": {"tag3"}}, true},
|
||||
{"test5", "tag2,tag3", map[string][]string{"value1": {"tag1", "tag2"}}, false},
|
||||
{
|
||||
"1 key with 1 matching value",
|
||||
"tag2",
|
||||
map[string][]string{"value1": {"tag1", "tag2"}},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"1 key with no matching values",
|
||||
"tag3",
|
||||
map[string][]string{"value1": {"tag1", "tag2"}},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"two keys, each with value matching different one value in list",
|
||||
"tag1,tag3",
|
||||
map[string][]string{"value1": {"tag1", "tag2"}, "value2": {"tag3"}},
|
||||
true,
|
||||
},
|
||||
{"two keys, all value matching different regex value in list",
|
||||
"t..[12],t..3",
|
||||
map[string][]string{"value1": {"tag1", "tag2"}, "value2": {"tag3"}},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"one key, not all values in list matched",
|
||||
"tag2,tag3",
|
||||
map[string][]string{"value1": {"tag1", "tag2"}},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"key specified, list of tags where all tags in list matched",
|
||||
"key1=tag1,tag2",
|
||||
map[string][]string{"key1": {"tag1", "tag2"}},
|
||||
true,
|
||||
},
|
||||
{"key specified, list of tag values where not all are matched",
|
||||
"key1=tag1,tag2",
|
||||
map[string][]string{"key1": {"tag1"}},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"key included for regex matching, list of values where all values in list matched",
|
||||
"key1:tag1,tag2",
|
||||
map[string][]string{"key1": {"tag1", "tag2"}},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"key included for regex matching, list of values where not only second value matched",
|
||||
"key1:tag1,tag2",
|
||||
map[string][]string{"key1": {"tag2"}},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"key included for regex matching, list of values where not only first value matched",
|
||||
"key1:tag1,tag2",
|
||||
map[string][]string{"key1": {"tag1"}},
|
||||
false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range tagFilterTests {
|
||||
filter, err := compileTagFilter(test.name, test.value, &proftest.TestUI{T: t}, nil)
|
||||
t.Run(test.desc, func(*testing.T) {
|
||||
filter, err := compileTagFilter(test.desc, test.value, nil, &proftest.TestUI{T: t}, nil)
|
||||
if err != nil {
|
||||
t.Errorf("tagFilter %s:%v", test.name, err)
|
||||
continue
|
||||
t.Fatalf("tagFilter %s:%v", test.desc, err)
|
||||
}
|
||||
s := profile.Sample{
|
||||
Label: test.tags,
|
||||
}
|
||||
|
||||
if got := filter(&s); got != test.want {
|
||||
t.Errorf("tagFilter %s: got %v, want %v", test.name, got, test.want)
|
||||
t.Errorf("tagFilter %s: got %v, want %v", test.desc, got, test.want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestIdentifyNumLabelUnits(t *testing.T) {
|
||||
var tagFilterTests = []struct {
|
||||
desc string
|
||||
tagVals []map[string][]int64
|
||||
tagUnits []map[string][]string
|
||||
wantUnits map[string]string
|
||||
allowedRx string
|
||||
wantIgnoreErrCount int
|
||||
}{
|
||||
{
|
||||
"Multiple keys, no units for all keys",
|
||||
[]map[string][]int64{{"keyA": {131072}, "keyB": {128}}},
|
||||
[]map[string][]string{{"keyA": {}, "keyB": {""}}},
|
||||
map[string]string{"keyA": "keyA", "keyB": "keyB"},
|
||||
"",
|
||||
0,
|
||||
},
|
||||
{
|
||||
"Multiple keys, different units for each key",
|
||||
[]map[string][]int64{{"keyA": {131072}, "keyB": {128}}},
|
||||
[]map[string][]string{{"keyA": {"bytes"}, "keyB": {"kilobytes"}}},
|
||||
map[string]string{"keyA": "bytes", "keyB": "kilobytes"},
|
||||
"",
|
||||
0,
|
||||
},
|
||||
{
|
||||
"Multiple keys with multiple values, different units for each key",
|
||||
[]map[string][]int64{{"keyC": {131072, 1}, "keyD": {128, 252}}},
|
||||
[]map[string][]string{{"keyC": {"bytes", "bytes"}, "keyD": {"kilobytes", "kilobytes"}}},
|
||||
map[string]string{"keyC": "bytes", "keyD": "kilobytes"},
|
||||
"",
|
||||
0,
|
||||
},
|
||||
{
|
||||
"Multiple keys with multiple values, some units missing",
|
||||
[]map[string][]int64{{"key1": {131072, 1}, "A": {128, 252}, "key3": {128}, "key4": {1}}, {"key3": {128}, "key4": {1}}},
|
||||
[]map[string][]string{{"key1": {"", "bytes"}, "A": {"kilobytes", ""}, "key3": {""}, "key4": {"hour"}}, {"key3": {"seconds"}, "key4": {""}}},
|
||||
map[string]string{"key1": "bytes", "A": "kilobytes", "key3": "seconds", "key4": "hour"},
|
||||
"",
|
||||
0,
|
||||
},
|
||||
{
|
||||
"One key with three units in same sample",
|
||||
[]map[string][]int64{{"key": {8, 8, 16}}},
|
||||
[]map[string][]string{{"key": {"bytes", "megabytes", "kilobytes"}}},
|
||||
map[string]string{"key": "bytes"},
|
||||
`(For tag key used unit bytes, also encountered unit\(s\) kilobytes, megabytes)`,
|
||||
1,
|
||||
},
|
||||
{
|
||||
"One key with four units in same sample",
|
||||
[]map[string][]int64{{"key": {8, 8, 16, 32}}},
|
||||
[]map[string][]string{{"key": {"bytes", "kilobytes", "a", "megabytes"}}},
|
||||
map[string]string{"key": "bytes"},
|
||||
`(For tag key used unit bytes, also encountered unit\(s\) a, kilobytes, megabytes)`,
|
||||
1,
|
||||
},
|
||||
{
|
||||
"One key with two units in same sample",
|
||||
[]map[string][]int64{{"key": {8, 8}}},
|
||||
[]map[string][]string{{"key": {"bytes", "seconds"}}},
|
||||
map[string]string{"key": "bytes"},
|
||||
`(For tag key used unit bytes, also encountered unit\(s\) seconds)`,
|
||||
1,
|
||||
},
|
||||
{
|
||||
"One key with different units in different samples",
|
||||
[]map[string][]int64{{"key1": {8}}, {"key1": {8}}, {"key1": {8}}},
|
||||
[]map[string][]string{{"key1": {"bytes"}}, {"key1": {"kilobytes"}}, {"key1": {"megabytes"}}},
|
||||
map[string]string{"key1": "bytes"},
|
||||
`(For tag key1 used unit bytes, also encountered unit\(s\) kilobytes, megabytes)`,
|
||||
1,
|
||||
},
|
||||
{
|
||||
"Key alignment, unit not specified",
|
||||
[]map[string][]int64{{"alignment": {8}}},
|
||||
[]map[string][]string{nil},
|
||||
map[string]string{"alignment": "bytes"},
|
||||
"",
|
||||
0,
|
||||
},
|
||||
{
|
||||
"Key request, unit not specified",
|
||||
[]map[string][]int64{{"request": {8}}, {"request": {8, 8}}},
|
||||
[]map[string][]string{nil, nil},
|
||||
map[string]string{"request": "bytes"},
|
||||
"",
|
||||
0,
|
||||
},
|
||||
{
|
||||
"Check units not over-written for keys with default units",
|
||||
[]map[string][]int64{{
|
||||
"alignment": {8},
|
||||
"request": {8},
|
||||
"bytes": {8},
|
||||
}},
|
||||
[]map[string][]string{{
|
||||
"alignment": {"seconds"},
|
||||
"request": {"minutes"},
|
||||
"bytes": {"hours"},
|
||||
}},
|
||||
map[string]string{
|
||||
"alignment": "seconds",
|
||||
"request": "minutes",
|
||||
"bytes": "hours",
|
||||
},
|
||||
"",
|
||||
0,
|
||||
},
|
||||
}
|
||||
for _, test := range tagFilterTests {
|
||||
t.Run(test.desc, func(*testing.T) {
|
||||
p := profile.Profile{Sample: make([]*profile.Sample, len(test.tagVals))}
|
||||
for i, numLabel := range test.tagVals {
|
||||
s := profile.Sample{
|
||||
NumLabel: numLabel,
|
||||
NumUnit: test.tagUnits[i],
|
||||
}
|
||||
p.Sample[i] = &s
|
||||
}
|
||||
testUI := &proftest.TestUI{T: t, AllowRx: test.allowedRx}
|
||||
units := identifyNumLabelUnits(&p, testUI)
|
||||
if !reflect.DeepEqual(test.wantUnits, units) {
|
||||
t.Errorf("got %v units, want %v", units, test.wantUnits)
|
||||
}
|
||||
if got, want := testUI.NumAllowRxMatches, test.wantIgnoreErrCount; want != got {
|
||||
t.Errorf("got %d errors logged, want %d errors logged", got, want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestNumericTagFilter(t *testing.T) {
|
||||
var tagFilterTests = []struct {
|
||||
desc, value string
|
||||
tags map[string][]int64
|
||||
identifiedUnits map[string]string
|
||||
want bool
|
||||
}{
|
||||
{
|
||||
"Match when unit conversion required",
|
||||
"128kb",
|
||||
map[string][]int64{"key1": {131072}, "key2": {128}},
|
||||
map[string]string{"key1": "bytes", "key2": "kilobytes"},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"Match only when values equal after unit conversion",
|
||||
"512kb",
|
||||
map[string][]int64{"key1": {512}, "key2": {128}},
|
||||
map[string]string{"key1": "bytes", "key2": "kilobytes"},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"Match when values and units initially equal",
|
||||
"10bytes",
|
||||
map[string][]int64{"key1": {10}, "key2": {128}},
|
||||
map[string]string{"key1": "bytes", "key2": "kilobytes"},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"Match range without lower bound, no unit conversion required",
|
||||
":10bytes",
|
||||
map[string][]int64{"key1": {8}},
|
||||
map[string]string{"key1": "bytes"},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"Match range without lower bound, unit conversion required",
|
||||
":10kb",
|
||||
map[string][]int64{"key1": {8}},
|
||||
map[string]string{"key1": "bytes"},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"Match range without upper bound, unit conversion required",
|
||||
"10b:",
|
||||
map[string][]int64{"key1": {8}},
|
||||
map[string]string{"key1": "kilobytes"},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"Match range without upper bound, no unit conversion required",
|
||||
"10b:",
|
||||
map[string][]int64{"key1": {12}},
|
||||
map[string]string{"key1": "bytes"},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"Don't match range without upper bound, no unit conversion required",
|
||||
"10b:",
|
||||
map[string][]int64{"key1": {8}},
|
||||
map[string]string{"key1": "bytes"},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"Multiple keys with different units, don't match range without upper bound",
|
||||
"10kb:",
|
||||
map[string][]int64{"key1": {8}},
|
||||
map[string]string{"key1": "bytes", "key2": "kilobytes"},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"Match range without upper bound, unit conversion required",
|
||||
"10b:",
|
||||
map[string][]int64{"key1": {8}},
|
||||
map[string]string{"key1": "kilobytes"},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"Don't match range without lower bound, no unit conversion required",
|
||||
":10b",
|
||||
map[string][]int64{"key1": {12}},
|
||||
map[string]string{"key1": "bytes"},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"Match specific key, key present, one of two values match",
|
||||
"bytes=5b",
|
||||
map[string][]int64{"bytes": {10, 5}},
|
||||
map[string]string{"bytes": "bytes"},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"Match specific key, key present and value matches",
|
||||
"bytes=1024b",
|
||||
map[string][]int64{"bytes": {1024}},
|
||||
map[string]string{"bytes": "kilobytes"},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"Match specific key, matching key present and value matches, also non-matching key",
|
||||
"bytes=1024b",
|
||||
map[string][]int64{"bytes": {1024}, "key2": {5}},
|
||||
map[string]string{"bytes": "bytes", "key2": "bytes"},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"Match specific key and range of values, value matches",
|
||||
"bytes=512b:1024b",
|
||||
map[string][]int64{"bytes": {780}},
|
||||
map[string]string{"bytes": "bytes"},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"Match specific key and range of values, value too large",
|
||||
"key1=1kb:2kb",
|
||||
map[string][]int64{"key1": {4096}},
|
||||
map[string]string{"key1": "bytes"},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"Match specific key and range of values, value too small",
|
||||
"key1=1kb:2kb",
|
||||
map[string][]int64{"key1": {256}},
|
||||
map[string]string{"key1": "bytes"},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"Match specific key and value, unit conversion required",
|
||||
"bytes=1024b",
|
||||
map[string][]int64{"bytes": {1}},
|
||||
map[string]string{"bytes": "kilobytes"},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"Match specific key and value, key does not appear",
|
||||
"key2=256bytes",
|
||||
map[string][]int64{"key1": {256}},
|
||||
map[string]string{"key1": "bytes"},
|
||||
false,
|
||||
},
|
||||
}
|
||||
for _, test := range tagFilterTests {
|
||||
t.Run(test.desc, func(*testing.T) {
|
||||
wantErrMsg := strings.Join([]string{"(", test.desc, ":Interpreted '", test.value[strings.Index(test.value, "=")+1:], "' as range, not regexp", ")"}, "")
|
||||
filter, err := compileTagFilter(test.desc, test.value, test.identifiedUnits, &proftest.TestUI{T: t,
|
||||
AllowRx: wantErrMsg}, nil)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
s := profile.Sample{
|
||||
NumLabel: test.tags,
|
||||
}
|
||||
if got := filter(&s); got != test.want {
|
||||
t.Fatalf("got %v, want %v", got, test.want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
type testSymbolzMergeFetcher struct{}
|
||||
|
||||
func (testSymbolzMergeFetcher) Fetch(s string, d, t time.Duration) (*profile.Profile, string, error) {
|
||||
var p *profile.Profile
|
||||
switch s {
|
||||
case testSourceURL(8000) + "symbolz":
|
||||
p = symzProfile()
|
||||
case testSourceURL(8001) + "symbolz":
|
||||
p = symzProfile()
|
||||
p.Mapping[0].Start += testOffset
|
||||
p.Mapping[0].Limit += testOffset
|
||||
for i := range p.Location {
|
||||
p.Location[i].Address += testOffset
|
||||
}
|
||||
default:
|
||||
return nil, "", fmt.Errorf("unexpected source: %s", s)
|
||||
}
|
||||
return p, s, nil
|
||||
}
|
||||
|
||||
func TestSymbolzAfterMerge(t *testing.T) {
@ -983,7 +1385,10 @@ func TestSymbolzAfterMerge(t *testing.T) {
defer func() { pprofVariables = baseVars }()

f := baseFlags()
f.args = []string{"symbolz", "http://host2/symbolz"}
f.args = []string{
testSourceURL(8000) + "symbolz",
testSourceURL(8001) + "symbolz",
}

o := setDefaults(nil)
o.Flagset = f
@ -997,7 +1402,7 @@ func TestSymbolzAfterMerge(t *testing.T) {
t.Fatalf("parseFlags returned command %v, want [proto]", cmd)
}

o.Fetch = testFetcher{}
o.Fetch = testSymbolzMergeFetcher{}
o.Sym = testSymbolzSymbolizer{}
p, err := fetchProfiles(src, o)
if err != nil {
@ -1028,10 +1433,10 @@ func (m *mockObjTool) Disasm(file string, start, end uint64) ([]plugin.Inst, err
|
||||
switch start {
|
||||
case 0x1000:
|
||||
return []plugin.Inst{
|
||||
{Addr: 0x1000, Text: "instruction one"},
|
||||
{Addr: 0x1001, Text: "instruction two"},
|
||||
{Addr: 0x1002, Text: "instruction three"},
|
||||
{Addr: 0x1003, Text: "instruction four"},
|
||||
{Addr: 0x1000, Text: "instruction one", File: "file1000.src", Line: 1},
|
||||
{Addr: 0x1001, Text: "instruction two", File: "file1000.src", Line: 1},
|
||||
{Addr: 0x1002, Text: "instruction three", File: "file1000.src", Line: 2},
|
||||
{Addr: 0x1003, Text: "instruction four", File: "file1000.src", Line: 1},
|
||||
}, nil
|
||||
case 0x3000:
|
||||
return []plugin.Inst{
|
||||
@ -1046,7 +1451,7 @@ func (m *mockObjTool) Disasm(file string, start, end uint64) ([]plugin.Inst, err
|
||||
}
|
||||
|
||||
type mockFile struct {
name, buildId string
name, buildID string
base uint64
}

@ -1062,7 +1467,7 @@ func (m *mockFile) Base() uint64 {

// BuildID returns the GNU build ID of the file, or an empty string.
func (m *mockFile) BuildID() string {
return m.buildId
return m.buildID
}

// SourceLine reports the source line information for a given
108
src/cmd/vendor/github.com/google/pprof/internal/driver/fetch.go
generated
vendored
@ -41,39 +41,52 @@ import (
|
||||
// there are some failures. It will return an error if it is unable to
|
||||
// fetch any profiles.
|
||||
func fetchProfiles(s *source, o *plugin.Options) (*profile.Profile, error) {
|
||||
sources := make([]profileSource, 0, len(s.Sources)+len(s.Base))
|
||||
sources := make([]profileSource, 0, len(s.Sources))
|
||||
for _, src := range s.Sources {
|
||||
sources = append(sources, profileSource{
|
||||
addr: src,
|
||||
source: s,
|
||||
scale: 1,
|
||||
})
|
||||
}
|
||||
|
||||
bases := make([]profileSource, 0, len(s.Base))
|
||||
for _, src := range s.Base {
|
||||
sources = append(sources, profileSource{
|
||||
bases = append(bases, profileSource{
|
||||
addr: src,
|
||||
source: s,
|
||||
scale: -1,
|
||||
})
|
||||
}
|
||||
p, msrcs, save, cnt, err := chunkedGrab(sources, o.Fetch, o.Obj, o.UI)
|
||||
|
||||
p, pbase, m, mbase, save, err := grabSourcesAndBases(sources, bases, o.Fetch, o.Obj, o.UI)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if cnt == 0 {
|
||||
return nil, fmt.Errorf("failed to fetch any profiles")
|
||||
|
||||
if pbase != nil {
|
||||
if s.Normalize {
|
||||
err := p.Normalize(pbase)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
pbase.Scale(-1)
|
||||
p, m, err = combineProfiles([]*profile.Profile{p, pbase}, []plugin.MappingSources{m, mbase})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if want, got := len(sources), cnt; want != got {
|
||||
o.UI.PrintErr(fmt.Sprintf("fetched %d profiles out of %d", got, want))
|
||||
}
|
||||
|
||||
// Symbolize the merged profile.
|
||||
if err := o.Sym.Symbolize(s.Symbolize, msrcs, p); err != nil {
|
||||
if err := o.Sym.Symbolize(s.Symbolize, m, p); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
p.RemoveUninteresting()
|
||||
unsourceMappings(p)
|
||||
|
||||
if s.Comment != "" {
|
||||
p.Comments = append(p.Comments, s.Comment)
|
||||
}
|
||||
|
||||
// Save a copy of the merged profile if there is at least one remote source.
|
||||
if save {
|
||||
dir, err := setTmpDir(o.UI)
|
||||
@ -107,6 +120,47 @@ func fetchProfiles(s *source, o *plugin.Options) (*profile.Profile, error) {
|
||||
return p, nil
|
||||
}
|
||||
|
||||
func grabSourcesAndBases(sources, bases []profileSource, fetch plugin.Fetcher, obj plugin.ObjTool, ui plugin.UI) (*profile.Profile, *profile.Profile, plugin.MappingSources, plugin.MappingSources, bool, error) {
|
||||
wg := sync.WaitGroup{}
|
||||
wg.Add(2)
|
||||
var psrc, pbase *profile.Profile
|
||||
var msrc, mbase plugin.MappingSources
|
||||
var savesrc, savebase bool
|
||||
var errsrc, errbase error
|
||||
var countsrc, countbase int
|
||||
go func() {
|
||||
defer wg.Done()
|
||||
psrc, msrc, savesrc, countsrc, errsrc = chunkedGrab(sources, fetch, obj, ui)
|
||||
}()
|
||||
go func() {
|
||||
defer wg.Done()
|
||||
pbase, mbase, savebase, countbase, errbase = chunkedGrab(bases, fetch, obj, ui)
|
||||
}()
|
||||
wg.Wait()
|
||||
save := savesrc || savebase
|
||||
|
||||
if errsrc != nil {
|
||||
return nil, nil, nil, nil, false, fmt.Errorf("problem fetching source profiles: %v", errsrc)
|
||||
}
|
||||
if errbase != nil {
|
||||
return nil, nil, nil, nil, false, fmt.Errorf("problem fetching base profiles: %v,", errbase)
|
||||
}
|
||||
if countsrc == 0 {
|
||||
return nil, nil, nil, nil, false, fmt.Errorf("failed to fetch any source profiles")
|
||||
}
|
||||
if countbase == 0 && len(bases) > 0 {
|
||||
return nil, nil, nil, nil, false, fmt.Errorf("failed to fetch any base profiles")
|
||||
}
|
||||
if want, got := len(sources), countsrc; want != got {
|
||||
ui.PrintErr(fmt.Sprintf("Fetched %d source profiles out of %d", got, want))
|
||||
}
|
||||
if want, got := len(bases), countbase; want != got {
|
||||
ui.PrintErr(fmt.Sprintf("Fetched %d base profiles out of %d", got, want))
|
||||
}
|
||||
|
||||
return psrc, pbase, msrc, mbase, save, nil
|
||||
}
|
||||
|
||||
// chunkedGrab fetches the profiles described in source and merges them into
|
||||
// a single profile. It fetches a chunk of profiles concurrently, with a maximum
|
||||
// chunk size to limit its memory usage.
|
||||
@ -142,6 +196,7 @@ func chunkedGrab(sources []profileSource, fetch plugin.Fetcher, obj plugin.ObjTo
|
||||
count += chunkCount
|
||||
}
|
||||
}
|
||||
|
||||
return p, msrc, save, count, nil
|
||||
}
|
||||
|
||||
@ -152,7 +207,7 @@ func concurrentGrab(sources []profileSource, fetch plugin.Fetcher, obj plugin.Ob
|
||||
for i := range sources {
|
||||
go func(s *profileSource) {
|
||||
defer wg.Done()
|
||||
s.p, s.msrc, s.remote, s.err = grabProfile(s.source, s.addr, s.scale, fetch, obj, ui)
|
||||
s.p, s.msrc, s.remote, s.err = grabProfile(s.source, s.addr, fetch, obj, ui)
|
||||
}(&sources[i])
|
||||
}
|
||||
wg.Wait()
|
||||
@ -207,7 +262,6 @@ func combineProfiles(profiles []*profile.Profile, msrcs []plugin.MappingSources)
|
||||
type profileSource struct {
|
||||
addr string
|
||||
source *source
|
||||
scale float64
|
||||
|
||||
p *profile.Profile
|
||||
msrc plugin.MappingSources
|
||||
@ -227,12 +281,18 @@ func homeEnv() string {
|
||||
}
|
||||
|
||||
// setTmpDir prepares the directory to use to save profiles retrieved
// remotely. It is selected from PPROF_TMPDIR, defaults to $HOME/pprof.
// remotely. It is selected from PPROF_TMPDIR, defaults to $HOME/pprof, and, if
// $HOME is not set, falls back to os.TempDir().
func setTmpDir(ui plugin.UI) (string, error) {
var dirs []string
if profileDir := os.Getenv("PPROF_TMPDIR"); profileDir != "" {
return profileDir, nil
dirs = append(dirs, profileDir)
}
for _, tmpDir := range []string{os.Getenv(homeEnv()) + "/pprof", os.TempDir()} {
if homeDir := os.Getenv(homeEnv()); homeDir != "" {
dirs = append(dirs, filepath.Join(homeDir, "pprof"))
}
dirs = append(dirs, os.TempDir())
for _, tmpDir := range dirs {
if err := os.MkdirAll(tmpDir, 0755); err != nil {
ui.PrintErr("Could not use temp dir ", tmpDir, ": ", err.Error())
continue
@ -242,10 +302,12 @@ func setTmpDir(ui plugin.UI) (string, error) {
return "", fmt.Errorf("failed to identify temp dir")
}

const testSourceAddress = "pproftest.local"

// grabProfile fetches a profile. Returns the profile, sources for the
// profile mappings, a bool indicating if the profile was fetched
// remotely, and an error.
func grabProfile(s *source, source string, scale float64, fetcher plugin.Fetcher, obj plugin.ObjTool, ui plugin.UI) (p *profile.Profile, msrc plugin.MappingSources, remote bool, err error) {
func grabProfile(s *source, source string, fetcher plugin.Fetcher, obj plugin.ObjTool, ui plugin.UI) (p *profile.Profile, msrc plugin.MappingSources, remote bool, err error) {
var src string
duration, timeout := time.Duration(s.Seconds)*time.Second, time.Duration(s.Timeout)*time.Second
if fetcher != nil {
@ -266,9 +328,6 @@ func grabProfile(s *source, source string, scale float64, fetcher plugin.Fetcher
return
}

// Apply local changes to the profile.
p.Scale(scale)

// Update the binary locations from command line and paths.
locateBinaries(p, s, obj, ui)

@ -276,6 +335,11 @@ func grabProfile(s *source, source string, scale float64, fetcher
if src != "" {
msrc = collectMappingSources(p, src)
remote = true
if strings.HasPrefix(src, "http://"+testSourceAddress) {
// Treat test inputs as local to avoid saving
// testcase profiles during driver testing.
remote = false
}
}
return
}
@ -366,9 +430,6 @@ mapping:
|
||||
}
|
||||
}
|
||||
}
|
||||
// Replace executable filename/buildID with the overrides from source.
|
||||
// Assumes the executable is the first Mapping entry.
|
||||
if execName, buildID := s.ExecName, s.BuildID; execName != "" || buildID != "" {
|
||||
if len(p.Mapping) == 0 {
|
||||
// If there are no mappings, add a fake mapping to attempt symbolization.
|
||||
// This is useful for some profiles generated by the golang runtime, which
|
||||
@ -380,6 +441,9 @@ mapping:
|
||||
l.Mapping = m
|
||||
}
|
||||
}
|
||||
// Replace executable filename/buildID with the overrides from source.
|
||||
// Assumes the executable is the first Mapping entry.
|
||||
if execName, buildID := s.ExecName, s.BuildID; execName != "" || buildID != "" {
|
||||
m := p.Mapping[0]
|
||||
if execName != "" {
|
||||
m.File = execName
|
||||
|
245
src/cmd/vendor/github.com/google/pprof/internal/driver/fetch_test.go
generated
vendored
@ -15,8 +15,15 @@
|
||||
package driver

import (
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/tls"
"crypto/x509"
"encoding/pem"
"fmt"
"io/ioutil"
"math/big"
"net/http"
"net/url"
"os"
@ -24,11 +31,14 @@ import (
"reflect"
"regexp"
"runtime"
"strings"
"testing"
"time"

"github.com/google/pprof/internal/binutils"
"github.com/google/pprof/internal/plugin"
"github.com/google/pprof/internal/proftest"
"github.com/google/pprof/internal/symbolizer"
"github.com/google/pprof/profile"
)
|
||||
|
||||
@ -165,6 +175,8 @@ func TestFetch(t *testing.T) {
|
||||
const path = "testdata/"
|
||||
|
||||
// Intercept http.Get calls from HTTPFetcher.
|
||||
savedHTTPGet := httpGet
|
||||
defer func() { httpGet = savedHTTPGet }()
|
||||
httpGet = stubHTTPGet
|
||||
|
||||
type testcase struct {
|
||||
@ -176,7 +188,7 @@ func TestFetch(t *testing.T) {
|
||||
{path + "go.nomappings.crash", "/bin/gotest.exe"},
|
||||
{"http://localhost/profile?file=cppbench.cpu", ""},
|
||||
} {
|
||||
p, _, _, err := grabProfile(&source{ExecName: tc.execName}, tc.source, 0, nil, testObj{}, &proftest.TestUI{T: t})
|
||||
p, _, _, err := grabProfile(&source{ExecName: tc.execName}, tc.source, nil, testObj{}, &proftest.TestUI{T: t})
|
||||
if err != nil {
|
||||
t.Fatalf("%s: %s", tc.source, err)
|
||||
}
|
||||
@ -194,6 +206,117 @@ func TestFetch(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestFetchWithBase(t *testing.T) {
|
||||
baseVars := pprofVariables
|
||||
defer func() { pprofVariables = baseVars }()
|
||||
|
||||
const path = "testdata/"
|
||||
type testcase struct {
|
||||
desc string
|
||||
sources []string
|
||||
bases []string
|
||||
normalize bool
|
||||
expectedSamples [][]int64
|
||||
}
|
||||
|
||||
testcases := []testcase{
|
||||
{
|
||||
"not normalized base is same as source",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.contention"},
|
||||
false,
|
||||
[][]int64{},
|
||||
},
|
||||
{
|
||||
"not normalized single source, multiple base (all profiles same)",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.contention", path + "cppbench.contention"},
|
||||
false,
|
||||
[][]int64{{-2700, -608881724}, {-100, -23992}, {-200, -179943}, {-100, -17778444}, {-100, -75976}, {-300, -63568134}},
|
||||
},
|
||||
{
|
||||
"not normalized, different base and source",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.small.contention"},
|
||||
false,
|
||||
[][]int64{{1700, 608878600}, {100, 23992}, {200, 179943}, {100, 17778444}, {100, 75976}, {300, 63568134}},
|
||||
},
|
||||
{
|
||||
"normalized base is same as source",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.contention"},
|
||||
true,
|
||||
[][]int64{},
|
||||
},
|
||||
{
|
||||
"normalized single source, multiple base (all profiles same)",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.contention", path + "cppbench.contention"},
|
||||
true,
|
||||
[][]int64{},
|
||||
},
|
||||
{
|
||||
"normalized different base and source",
|
||||
[]string{path + "cppbench.contention"},
|
||||
[]string{path + "cppbench.small.contention"},
|
||||
true,
|
||||
[][]int64{{-229, -370}, {28, 0}, {57, 0}, {28, 80}, {28, 0}, {85, 287}},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range testcases {
|
||||
t.Run(tc.desc, func(t *testing.T) {
|
||||
pprofVariables = baseVars.makeCopy()
|
||||
|
||||
base := make([]*string, len(tc.bases))
|
||||
for i, s := range tc.bases {
|
||||
base[i] = &s
|
||||
}
|
||||
|
||||
f := testFlags{
|
||||
stringLists: map[string][]*string{
|
||||
"base": base,
|
||||
},
|
||||
bools: map[string]bool{
|
||||
"normalize": tc.normalize,
|
||||
},
|
||||
}
|
||||
f.args = tc.sources
|
||||
|
||||
o := setDefaults(nil)
|
||||
o.Flagset = f
|
||||
src, _, err := parseFlags(o)
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("%s: %v", tc.desc, err)
|
||||
}
|
||||
|
||||
p, err := fetchProfiles(src, o)
|
||||
pprofVariables = baseVars
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if want, got := len(tc.expectedSamples), len(p.Sample); want != got {
|
||||
t.Fatalf("want %d samples got %d", want, got)
|
||||
}
|
||||
|
||||
if len(p.Sample) > 0 {
|
||||
for i, sample := range p.Sample {
|
||||
if want, got := len(tc.expectedSamples[i]), len(sample.Value); want != got {
|
||||
t.Errorf("want %d values for sample %d, got %d", want, i, got)
|
||||
}
|
||||
for j, value := range sample.Value {
|
||||
if want, got := tc.expectedSamples[i][j], value; want != got {
|
||||
t.Errorf("want value of %d for value %d of sample %d, got %d", want, j, i, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// mappingSources creates MappingSources map with a single item.
|
||||
func mappingSources(key, source string, start uint64) plugin.MappingSources {
|
||||
return plugin.MappingSources{
|
||||
@ -227,3 +350,123 @@ func stubHTTPGet(source string, _ time.Duration) (*http.Response, error) {
|
||||
c := &http.Client{Transport: t}
|
||||
return c.Get("file:///" + file)
|
||||
}
|
||||
|
||||
func TestHttpsInsecure(t *testing.T) {
|
||||
if runtime.GOOS == "nacl" {
|
||||
t.Skip("test assumes tcp available")
|
||||
}
|
||||
|
||||
baseVars := pprofVariables
|
||||
pprofVariables = baseVars.makeCopy()
|
||||
defer func() { pprofVariables = baseVars }()
|
||||
|
||||
tlsConfig := &tls.Config{Certificates: []tls.Certificate{selfSignedCert(t)}}
|
||||
|
||||
l, err := tls.Listen("tcp", "localhost:0", tlsConfig)
|
||||
if err != nil {
|
||||
t.Fatalf("net.Listen: got error %v, want no error", err)
|
||||
}
|
||||
|
||||
donec := make(chan error, 1)
|
||||
go func(donec chan<- error) {
|
||||
donec <- http.Serve(l, nil)
|
||||
}(donec)
|
||||
defer func() {
|
||||
if got, want := <-donec, "use of closed"; !strings.Contains(got.Error(), want) {
|
||||
t.Fatalf("Serve got error %v, want %q", got, want)
|
||||
}
|
||||
}()
|
||||
defer l.Close()
|
||||
|
||||
go func() {
|
||||
deadline := time.Now().Add(5 * time.Second)
|
||||
for time.Now().Before(deadline) {
|
||||
// Simulate a hotspot function. Spin in the inner loop for 100M iterations
|
||||
// to ensure we get most of the samples landed here rather than in the
|
||||
// library calls. We assume Go compiler won't elide the empty loop.
|
||||
for i := 0; i < 1e8; i++ {
|
||||
}
|
||||
runtime.Gosched()
|
||||
}
|
||||
}()
|
||||
|
||||
outputTempFile, err := ioutil.TempFile("", "profile_output")
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to create tempfile: %v", err)
|
||||
}
|
||||
defer os.Remove(outputTempFile.Name())
|
||||
defer outputTempFile.Close()
|
||||
|
||||
address := "https+insecure://" + l.Addr().String() + "/debug/pprof/profile"
|
||||
s := &source{
|
||||
Sources: []string{address},
|
||||
Seconds: 10,
|
||||
Timeout: 10,
|
||||
Symbolize: "remote",
|
||||
}
|
||||
o := &plugin.Options{
|
||||
Obj: &binutils.Binutils{},
|
||||
UI: &proftest.TestUI{T: t, AllowRx: "Saved profile in"},
|
||||
}
|
||||
o.Sym = &symbolizer.Symbolizer{Obj: o.Obj, UI: o.UI}
|
||||
p, err := fetchProfiles(s, o)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(p.SampleType) == 0 {
|
||||
t.Fatalf("fetchProfiles(%s) got empty profile: len(p.SampleType)==0", address)
|
||||
}
|
||||
if len(p.Function) == 0 {
|
||||
t.Fatalf("fetchProfiles(%s) got non-symbolized profile: len(p.Function)==0", address)
|
||||
}
|
||||
if err := checkProfileHasFunction(p, "TestHttpsInsecure"); !badSigprofOS[runtime.GOOS] && err != nil {
|
||||
t.Fatalf("fetchProfiles(%s) %v", address, err)
|
||||
}
|
||||
}
|
||||
|
||||
// Some operating systems don't trigger the profiling signal right.
|
||||
// See https://github.com/golang/go/issues/13841.
|
||||
var badSigprofOS = map[string]bool{
|
||||
"darwin": true,
|
||||
"netbsd": true,
|
||||
"plan9": true,
|
||||
}
|
||||
|
||||
func checkProfileHasFunction(p *profile.Profile, fname string) error {
|
||||
for _, f := range p.Function {
|
||||
if strings.Contains(f.Name, fname) {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
return fmt.Errorf("got %s, want function %q", p.String(), fname)
|
||||
}
|
||||
|
||||
func selfSignedCert(t *testing.T) tls.Certificate {
|
||||
privKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to generate private key: %v", err)
|
||||
}
|
||||
b, err := x509.MarshalECPrivateKey(privKey)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to marshal private key: %v", err)
|
||||
}
|
||||
bk := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: b})
|
||||
|
||||
tmpl := x509.Certificate{
|
||||
SerialNumber: big.NewInt(1),
|
||||
NotBefore: time.Now(),
|
||||
NotAfter: time.Now().Add(10 * time.Minute),
|
||||
}
|
||||
|
||||
b, err = x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, privKey.Public(), privKey)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create cert: %v", err)
|
||||
}
|
||||
bc := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: b})
|
||||
|
||||
cert, err := tls.X509KeyPair(bc, bk)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create TLS key pair: %v", err)
|
||||
}
|
||||
return cert
|
||||
}
|
||||
|
3
src/cmd/vendor/github.com/google/pprof/internal/driver/interactive.go
generated
vendored
@ -123,7 +123,8 @@ var generateReportWrapper = generateReport // For testing purposes.
|
||||
// greetings prints a brief welcome and some overall profile
|
||||
// information before accepting interactive commands.
|
||||
func greetings(p *profile.Profile, ui plugin.UI) {
|
||||
ropt, err := reportOptions(p, pprofVariables)
|
||||
numLabelUnits := identifyNumLabelUnits(p, ui)
|
||||
ropt, err := reportOptions(p, numLabelUnits, pprofVariables)
|
||||
if err == nil {
|
||||
ui.Print(strings.Join(report.ProfileLabels(report.New(p, ropt)), "\n"))
|
||||
}
|
||||
|
24
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/cppbench.contention
generated
vendored
Normal file
@ -0,0 +1,24 @@
|
||||
--- contentionz 1 ---
|
||||
cycles/second = 3201000000
|
||||
sampling period = 100
|
||||
ms since reset = 16502830
|
||||
discarded samples = 0
|
||||
19490304 27 @ 0xbccc97 0xc61202 0x42ed5f 0x42edc1 0x42e15a 0x5261af 0x526edf 0x5280ab 0x79e80a 0x7a251b 0x7a296d 0xa456e4 0x7fcdc2ff214e
|
||||
768 1 @ 0xbccc97 0xa42dc7 0xa456e4 0x7fcdc2ff214e
|
||||
5760 2 @ 0xbccc97 0xb82b73 0xb82bcb 0xb87eab 0xb8814c 0x4e969d 0x4faa17 0x4fc5f6 0x4fd028 0x4fd230 0x79e80a 0x7a251b 0x7a296d 0xa456e4 0x7fcdc2ff214e
|
||||
569088 1 @ 0xbccc97 0xb82b73 0xb82bcb 0xb87f08 0xb8814c 0x42ed5f 0x42edc1 0x42e15a 0x5261af 0x526edf 0x5280ab 0x79e80a 0x7a251b 0x7a296d 0xa456e4 0x7fcdc2ff214e
|
||||
2432 1 @ 0xbccc97 0xb82b73 0xb82bcb 0xb87eab 0xb8814c 0x7aa74c 0x7ab844 0x7ab914 0x79e9e9 0x79e326 0x4d299e 0x4d4b7b 0x4b7be8 0x4b7ff1 0x4d2dae 0x79e80a
|
||||
2034816 3 @ 0xbccc97 0xb82f0f 0xb83003 0xb87d50 0xc635f0 0x42ecc3 0x42e14c 0x5261af 0x526edf 0x5280ab 0x79e80a 0x7a251b 0x7a296d 0xa456e4 0x7fcdc2ff214e
|
||||
--- Memory map: ---
|
||||
00400000-00fcb000: cppbench_server_main
|
||||
7fcdc231e000-7fcdc2321000: /libnss_cache-2.15.so
|
||||
7fcdc2522000-7fcdc252e000: /libnss_files-2.15.so
|
||||
7fcdc272f000-7fcdc28dd000: /libc-2.15.so
|
||||
7fcdc2ae7000-7fcdc2be2000: /libm-2.15.so
|
||||
7fcdc2de3000-7fcdc2dea000: /librt-2.15.so
|
||||
7fcdc2feb000-7fcdc3003000: /libpthread-2.15.so
|
||||
7fcdc3208000-7fcdc320a000: /libdl-2.15.so
|
||||
7fcdc340c000-7fcdc3415000: /libcrypt-2.15.so
|
||||
7fcdc3645000-7fcdc3669000: /ld-2.15.so
|
||||
7fff86bff000-7fff86c00000: [vdso]
|
||||
ffffffffff600000-ffffffffff601000: [vsyscall]
|
19
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/cppbench.small.contention
generated
vendored
Normal file
@ -0,0 +1,19 @@
|
||||
--- contentionz 1 ---
|
||||
cycles/second = 3201000000
|
||||
sampling period = 100
|
||||
ms since reset = 16502830
|
||||
discarded samples = 0
|
||||
100 10 @ 0xbccc97 0xc61202 0x42ed5f 0x42edc1 0x42e15a 0x5261af 0x526edf 0x5280ab 0x79e80a 0x7a251b 0x7a296d 0xa456e4 0x7fcdc2ff214e
|
||||
--- Memory map: ---
|
||||
00400000-00fcb000: cppbench_server_main
|
||||
7fcdc231e000-7fcdc2321000: /libnss_cache-2.15.so
|
||||
7fcdc2522000-7fcdc252e000: /libnss_files-2.15.so
|
||||
7fcdc272f000-7fcdc28dd000: /libc-2.15.so
|
||||
7fcdc2ae7000-7fcdc2be2000: /libm-2.15.so
|
||||
7fcdc2de3000-7fcdc2dea000: /librt-2.15.so
|
||||
7fcdc2feb000-7fcdc3003000: /libpthread-2.15.so
|
||||
7fcdc3208000-7fcdc320a000: /libdl-2.15.so
|
||||
7fcdc340c000-7fcdc3415000: /libcrypt-2.15.so
|
||||
7fcdc3645000-7fcdc3669000: /ld-2.15.so
|
||||
7fff86bff000-7fff86c00000: [vdso]
|
||||
ffffffffff600000-ffffffffff601000: [vsyscall]
|
@ -1,9 +1,9 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Build ID: buildid-contention" [shape=box fontsize=16 label="Build ID: buildid-contention\lComment #1\lComment #2\lType: delay\lShowing nodes accounting for 149.50ms, 100% of 149.50ms total\l"] }
|
||||
N1 [label="file3000.src\n32.77ms (21.92%)\nof 149.50ms (100%)" fontsize=20 shape=box tooltip="testdata/file3000.src (149.50ms)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N2 [label="file1000.src\n51.20ms (34.25%)" fontsize=23 shape=box tooltip="testdata/file1000.src (51.20ms)" color="#b23100" fillcolor="#eddbd5"]
|
||||
N3 [label="file2000.src\n65.54ms (43.84%)\nof 75.78ms (50.68%)" fontsize=24 shape=box tooltip="testdata/file2000.src (75.78ms)" color="#b22000" fillcolor="#edd9d5"]
|
||||
N1 [label="file3000.src\n32.77ms (21.92%)\nof 149.50ms (100%)" id="node1" fontsize=20 shape=box tooltip="testdata/file3000.src (149.50ms)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N2 [label="file1000.src\n51.20ms (34.25%)" id="node2" fontsize=23 shape=box tooltip="testdata/file1000.src (51.20ms)" color="#b23100" fillcolor="#eddbd5"]
|
||||
N3 [label="file2000.src\n65.54ms (43.84%)\nof 75.78ms (50.68%)" id="node3" fontsize=24 shape=box tooltip="testdata/file2000.src (75.78ms)" color="#b22000" fillcolor="#edd9d5"]
|
||||
N1 -> N3 [label=" 75.78ms" weight=51 penwidth=3 color="#b22000" tooltip="testdata/file3000.src -> testdata/file2000.src (75.78ms)" labeltooltip="testdata/file3000.src -> testdata/file2000.src (75.78ms)"]
|
||||
N1 -> N2 [label=" 40.96ms" weight=28 penwidth=2 color="#b23900" tooltip="testdata/file3000.src -> testdata/file1000.src (40.96ms)" labeltooltip="testdata/file3000.src -> testdata/file1000.src (40.96ms)"]
|
||||
N3 -> N2 [label=" 10.24ms" weight=7 color="#b29775" tooltip="testdata/file2000.src -> testdata/file1000.src (10.24ms)" labeltooltip="testdata/file2000.src -> testdata/file1000.src (10.24ms)"]
|
||||
|
@ -1,9 +1,9 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Build ID: buildid-contention" [shape=box fontsize=16 label="Build ID: buildid-contention\lComment #1\lComment #2\lType: delay\lShowing nodes accounting for 40.96ms, 27.40% of 149.50ms total\l"] }
|
||||
N1 [label="0000000000001000\nline1000\nfile1000.src:1\n40.96ms (27.40%)" fontsize=24 shape=box tooltip="0000000000001000 line1000 testdata/file1000.src:1 (40.96ms)" color="#b23900" fillcolor="#edddd5"]
|
||||
N2 [label="0000000000003001\nline3000\nfile3000.src:5\n0 of 40.96ms (27.40%)" fontsize=8 shape=box tooltip="0000000000003001 line3000 testdata/file3000.src:5 (40.96ms)" color="#b23900" fillcolor="#edddd5"]
|
||||
N3 [label="0000000000003001\nline3001\nfile3000.src:3\n0 of 40.96ms (27.40%)" fontsize=8 shape=box tooltip="0000000000003001 line3001 testdata/file3000.src:3 (40.96ms)" color="#b23900" fillcolor="#edddd5"]
|
||||
subgraph cluster_L { "Build ID: buildid-contention" [shape=box fontsize=16 label="Build ID: buildid-contention\lComment #1\lComment #2\lType: delay\lActive filters:\l focus=[X1]000\l ignore=[X3]002\lShowing nodes accounting for 40.96ms, 27.40% of 149.50ms total\l"] }
|
||||
N1 [label="0000000000001000\nline1000\nfile1000.src:1\n40.96ms (27.40%)" id="node1" fontsize=24 shape=box tooltip="0000000000001000 line1000 testdata/file1000.src:1 (40.96ms)" color="#b23900" fillcolor="#edddd5"]
|
||||
N2 [label="0000000000003001\nline3000\nfile3000.src:5\n0 of 40.96ms (27.40%)" id="node2" fontsize=8 shape=box tooltip="0000000000003001 line3000 testdata/file3000.src:5 (40.96ms)" color="#b23900" fillcolor="#edddd5"]
|
||||
N3 [label="0000000000003001\nline3001\nfile3000.src:3\n0 of 40.96ms (27.40%)" id="node3" fontsize=8 shape=box tooltip="0000000000003001 line3001 testdata/file3000.src:3 (40.96ms)" color="#b23900" fillcolor="#edddd5"]
|
||||
N2 -> N3 [label=" 40.96ms\n (inline)" weight=28 penwidth=2 color="#b23900" tooltip="0000000000003001 line3000 testdata/file3000.src:5 -> 0000000000003001 line3001 testdata/file3000.src:3 (40.96ms)" labeltooltip="0000000000003001 line3000 testdata/file3000.src:5 -> 0000000000003001 line3001 testdata/file3000.src:3 (40.96ms)"]
|
||||
N3 -> N1 [label=" 40.96ms" weight=28 penwidth=2 color="#b23900" tooltip="0000000000003001 line3001 testdata/file3000.src:3 -> 0000000000001000 line1000 testdata/file1000.src:1 (40.96ms)" labeltooltip="0000000000003001 line3001 testdata/file3000.src:3 -> 0000000000001000 line1000 testdata/file1000.src:1 (40.96ms)"]
|
||||
}
|
||||
|
99
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.call_tree.callgrind
generated
vendored
Normal file
@ -0,0 +1,99 @@
|
||||
positions: instr line
|
||||
events: cpu(ms)
|
||||
|
||||
ob=(1) /path/to/testbinary
|
||||
fl=(1) testdata/file1000.src
|
||||
fn=(1) line1000
|
||||
0x1000 1 1000
|
||||
* 1 100
|
||||
|
||||
ob=(1)
|
||||
fl=(2) testdata/file2000.src
|
||||
fn=(2) line2001
|
||||
+4096 9 10
|
||||
|
||||
ob=(1)
|
||||
fl=(3) testdata/file3000.src
|
||||
fn=(3) line3002
|
||||
+4096 2 10
|
||||
cfl=(2)
|
||||
cfn=(4) line2000 [1/2]
|
||||
calls=0 * 4
|
||||
* * 1000
|
||||
|
||||
ob=(1)
|
||||
fl=(2)
|
||||
fn=(5) line2000
|
||||
-4096 4 0
|
||||
cfl=(2)
|
||||
cfn=(6) line2001 [2/2]
|
||||
calls=0 -4096 9
|
||||
* * 1000
|
||||
* 4 0
|
||||
cfl=(2)
|
||||
cfn=(7) line2001 [1/2]
|
||||
calls=0 * 9
|
||||
* * 10
|
||||
|
||||
ob=(1)
|
||||
fl=(2)
|
||||
fn=(2)
|
||||
* 9 0
|
||||
cfl=(1)
|
||||
cfn=(8) line1000 [1/2]
|
||||
calls=0 -4096 1
|
||||
* * 1000
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(9) line3000
|
||||
+4096 6 0
|
||||
cfl=(3)
|
||||
cfn=(10) line3001 [1/2]
|
||||
calls=0 +4096 5
|
||||
* * 1010
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(11) line3001
|
||||
* 5 0
|
||||
cfl=(3)
|
||||
cfn=(12) line3002 [1/2]
|
||||
calls=0 * 2
|
||||
* * 1010
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(9)
|
||||
+1 9 0
|
||||
cfl=(3)
|
||||
cfn=(13) line3001 [2/2]
|
||||
calls=0 +1 8
|
||||
* * 100
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(11)
|
||||
* 8 0
|
||||
cfl=(1)
|
||||
cfn=(14) line1000 [2/2]
|
||||
calls=0 -8193 1
|
||||
* * 100
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(9)
|
||||
+1 9 0
|
||||
cfl=(3)
|
||||
cfn=(15) line3002 [2/2]
|
||||
calls=0 +1 5
|
||||
* * 10
|
||||
|
||||
ob=(1)
|
||||
fl=(3)
|
||||
fn=(3)
|
||||
* 5 0
|
||||
cfl=(2)
|
||||
cfn=(16) line2000 [2/2]
|
||||
calls=0 -4098 4
|
||||
* * 10
|
1
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.comments
generated
vendored
@ -0,0 +1 @@
|
||||
some-comment
|
8
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.cum.lines.text.focus.hide
generated
vendored
Normal file
8
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.cum.lines.text.focus.hide
generated
vendored
Normal file
@ -0,0 +1,8 @@
|
||||
Active filters:
|
||||
focus=[12]00
|
||||
hide=line[X3]0
|
||||
Showing nodes accounting for 1.11s, 99.11% of 1.12s total
|
||||
flat flat% sum% cum cum%
|
||||
1.10s 98.21% 98.21% 1.10s 98.21% line1000 testdata/file1000.src:1
|
||||
0 0% 98.21% 1.01s 90.18% line2000 testdata/file2000.src:4
|
||||
0.01s 0.89% 99.11% 1.01s 90.18% line2001 testdata/file2000.src:9 (inline)
|
@ -1,3 +1,5 @@
|
||||
Active filters:
|
||||
hide=line[X3]0
|
||||
Showing nodes accounting for 1.11s, 99.11% of 1.12s total
|
||||
flat flat% sum% cum cum%
|
||||
1.10s 98.21% 98.21% 1.10s 98.21% line1000 testdata/file1000.src:1
|
||||
|
@ -1,3 +1,5 @@
|
||||
Active filters:
|
||||
show=[12]00
|
||||
Showing nodes accounting for 1.11s, 99.11% of 1.12s total
|
||||
flat flat% sum% cum cum%
|
||||
1.10s 98.21% 98.21% 1.10s 98.21% line1000 testdata/file1000.src:1
|
||||
|
@ -1,3 +1,5 @@
|
||||
Active filters:
|
||||
hide=mangled[X3]0
|
||||
Showing nodes accounting for 1s, 100% of 1s total
|
||||
flat flat% sum% cum cum%
|
||||
1s 100% 100% 1s 100% mangled1000 testdata/file1000.src:1
|
||||
|
@ -2,9 +2,9 @@ Total: 1.12s
|
||||
ROUTINE ======================== line1000
|
||||
1.10s 1.10s (flat, cum) 98.21% of Total
|
||||
1.10s 1.10s 1000: instruction one ;line1000 file1000.src:1
|
||||
. . 1001: instruction two
|
||||
. . 1002: instruction three
|
||||
. . 1003: instruction four
|
||||
. . 1001: instruction two ;file1000.src:1
|
||||
. . 1002: instruction three ;file1000.src:2
|
||||
. . 1003: instruction four ;file1000.src:1
|
||||
ROUTINE ======================== line3000
|
||||
10ms 1.12s (flat, cum) 100% of Total
|
||||
10ms 1.01s 3000: instruction one ;line3000 file3000.src:6
|
||||
|
@ -2,6 +2,7 @@
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<title>Pprof listing</title>
|
||||
<style type="text/css">
|
||||
body {
|
||||
@ -14,17 +15,11 @@ h1 {
|
||||
.legend {
|
||||
font-size: 1.25em;
|
||||
}
|
||||
.line {
|
||||
.line, .nop, .unimportant {
|
||||
color: #aaaaaa;
|
||||
}
|
||||
.nop {
|
||||
color: #aaaaaa;
|
||||
}
|
||||
.unimportant {
|
||||
color: #cccccc;
|
||||
}
|
||||
.disasmloc {
|
||||
color: #000000;
|
||||
.inlinesrc {
|
||||
color: #000066;
|
||||
}
|
||||
.deadsrc {
|
||||
cursor: pointer;
|
||||
@ -69,16 +64,18 @@ Type: cpu<br>
|
||||
Duration: 10s, Total samples = 1.12s (11.20%)<br>Total: 1.12s</div><h1>line1000</h1>testdata/file1000.src
|
||||
<pre onClick="pprof_toggle_asm(event)">
|
||||
Total: 1.10s 1.10s (flat, cum) 98.21%
|
||||
<span class=line> 1</span> <span class=deadsrc> 1.10s 1.10s line1 </span><span class=asm> 1.10s 1.10s 1000: instruction one <span class=disasmloc>file1000.src:1</span>
|
||||
. . 1001: instruction two <span class=disasmloc></span>
|
||||
. . 1002: instruction three <span class=disasmloc></span>
|
||||
. . 1003: instruction four <span class=disasmloc></span>
|
||||
<span class=line> 1</span> <span class=deadsrc> 1.10s 1.10s line1 </span><span class=asm> 1.10s 1.10s 1000: instruction one <span class=unimportant>file1000.src:1</span>
|
||||
. . 1001: instruction two <span class=unimportant>file1000.src:1</span>
|
||||
⋮
|
||||
. . 1003: instruction four <span class=unimportant>file1000.src:1</span>
|
||||
</span>
|
||||
<span class=line> 2</span> <span class=deadsrc> . . line2 </span><span class=asm> . . 1002: instruction three <span class=unimportant>file1000.src:2</span>
|
||||
</span>
|
||||
<span class=line> 2</span> <span class=nop> . . line2 </span>
|
||||
<span class=line> 3</span> <span class=nop> . . line3 </span>
|
||||
<span class=line> 4</span> <span class=nop> . . line4 </span>
|
||||
<span class=line> 5</span> <span class=nop> . . line5 </span>
|
||||
<span class=line> 6</span> <span class=nop> . . line6 </span>
|
||||
<span class=line> 7</span> <span class=nop> . . line7 </span>
|
||||
</pre>
|
||||
<h1>line3000</h1>testdata/file3000.src
|
||||
<pre onClick="pprof_toggle_asm(event)">
|
||||
@ -88,14 +85,14 @@ Duration: 10s, Total samples = 1.12s (11.20%)<br>Total: 1.12s</div><h1>line1000<
|
||||
<span class=line> 3</span> <span class=nop> . . line3 </span>
|
||||
<span class=line> 4</span> <span class=nop> . . line4 </span>
|
||||
<span class=line> 5</span> <span class=nop> . . line5 </span>
|
||||
<span class=line> 6</span> <span class=deadsrc> 10ms 1.01s line6 </span><span class=asm> 10ms 1.01s 3000: instruction one <span class=disasmloc>file3000.src:6</span>
|
||||
<span class=line> 6</span> <span class=deadsrc> 10ms 1.01s line6 </span><span class=asm> 10ms 1.01s 3000: instruction one <span class=unimportant>file3000.src:6</span>
|
||||
</span>
|
||||
<span class=line> 7</span> <span class=nop> . . line7 </span>
|
||||
<span class=line> 8</span> <span class=nop> . . line8 </span>
|
||||
<span class=line> 9</span> <span class=deadsrc> . 110ms line9 </span><span class=asm> . 100ms 3001: instruction two <span class=disasmloc>file3000.src:9</span>
|
||||
. 10ms 3002: instruction three <span class=disasmloc>file3000.src:9</span>
|
||||
. . 3003: instruction four <span class=disasmloc></span>
|
||||
. . 3004: instruction five <span class=disasmloc></span>
|
||||
<span class=line> 9</span> <span class=deadsrc> . 110ms line9 </span><span class=asm> . 100ms 3001: instruction two <span class=unimportant>file3000.src:9</span>
|
||||
. 10ms 3002: instruction three <span class=unimportant>file3000.src:9</span>
|
||||
. . 3003: instruction four <span class=unimportant></span>
|
||||
. . 3004: instruction five <span class=unimportant></span>
|
||||
</span>
|
||||
<span class=line> 10</span> <span class=nop> . . line0 </span>
|
||||
<span class=line> 11</span> <span class=nop> . . line1 </span>
|
||||
|
21
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.flat.functions.call_tree.dot
generated
vendored
Normal file
@ -0,0 +1,21 @@
|
||||
digraph "testbinary" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "File: testbinary" [shape=box fontsize=16 label="File: testbinary\lType: cpu\lDuration: 10s, Total samples = 1.12s (11.20%)\lShowing nodes accounting for 1.11s, 99.11% of 1.12s total\lDropped 3 nodes (cum <= 0.06s)\l" tooltip="testbinary"] }
|
||||
N1 [label="line1000\n1s (89.29%)" id="node1" fontsize=24 shape=box tooltip="line1000 (1s)" color="#b20500" fillcolor="#edd6d5"]
|
||||
N1_0 [label = "key1:tag1\nkey2:tag1" id="N1_0" fontsize=8 shape=box3d tooltip="1s"]
|
||||
N1 -> N1_0 [label=" 1s" weight=100 tooltip="1s" labeltooltip="1s"]
|
||||
N2 [label="line3000\n0 of 1.12s (100%)" id="node2" fontsize=8 shape=box tooltip="line3000 (1.12s)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N3 [label="line3001\n0 of 1.11s (99.11%)" id="node3" fontsize=8 shape=box tooltip="line3001 (1.11s)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N4 [label="line1000\n0.10s (8.93%)" id="node4" fontsize=14 shape=box tooltip="line1000 (0.10s)" color="#b28b62" fillcolor="#ede8e2"]
|
||||
N4_0 [label = "key1:tag2\nkey3:tag2" id="N4_0" fontsize=8 shape=box3d tooltip="0.10s"]
|
||||
N4 -> N4_0 [label=" 0.10s" weight=100 tooltip="0.10s" labeltooltip="0.10s"]
|
||||
N5 [label="line3002\n0.01s (0.89%)\nof 1.01s (90.18%)" id="node5" fontsize=10 shape=box tooltip="line3002 (1.01s)" color="#b20500" fillcolor="#edd6d5"]
|
||||
N6 [label="line2000\n0 of 1s (89.29%)" id="node6" fontsize=8 shape=box tooltip="line2000 (1s)" color="#b20500" fillcolor="#edd6d5"]
|
||||
N7 [label="line2001\n0 of 1s (89.29%)" id="node7" fontsize=8 shape=box tooltip="line2001 (1s)" color="#b20500" fillcolor="#edd6d5"]
|
||||
N2 -> N3 [label=" 1.11s\n (inline)" weight=100 penwidth=5 color="#b20000" tooltip="line3000 -> line3001 (1.11s)" labeltooltip="line3000 -> line3001 (1.11s)"]
|
||||
N3 -> N5 [label=" 1.01s\n (inline)" weight=91 penwidth=5 color="#b20500" tooltip="line3001 -> line3002 (1.01s)" labeltooltip="line3001 -> line3002 (1.01s)"]
|
||||
N6 -> N7 [label=" 1s\n (inline)" weight=90 penwidth=5 color="#b20500" tooltip="line2000 -> line2001 (1s)" labeltooltip="line2000 -> line2001 (1s)"]
|
||||
N7 -> N1 [label=" 1s" weight=90 penwidth=5 color="#b20500" tooltip="line2001 -> line1000 (1s)" labeltooltip="line2001 -> line1000 (1s)"]
|
||||
N5 -> N6 [label=" 1s" weight=90 penwidth=5 color="#b20500" tooltip="line3002 -> line2000 (1s)" labeltooltip="line3002 -> line2000 (1s)"]
|
||||
N3 -> N4 [label=" 0.10s" weight=9 color="#b28b62" tooltip="line3001 -> line1000 (0.10s)" labeltooltip="line3001 -> line1000 (0.10s)"]
|
||||
}
|
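These .dot goldens are plain Graphviz sources, so they can be rendered directly if you want to compare the expected graph visually. A small sketch that shells out to Graphviz (assumes `dot` is installed and that you run it from the testdata directory; this is a convenience sketch, not part of the test suite):

```go
package main

import (
	"log"
	"os/exec"
)

// Render one of the .dot goldens above to SVG with Graphviz's dot tool.
// Assumes Graphviz is installed and the golden file is in the current
// directory.
func main() {
	cmd := exec.Command("dot", "-Tsvg",
		"pprof.cpu.flat.functions.call_tree.dot",
		"-o", "call_tree.svg")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("dot failed: %v\n%s", err, out)
	}
}
```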
@ -1,20 +1,20 @@
|
||||
digraph "testbinary" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "File: testbinary" [shape=box fontsize=16 label="File: testbinary\lType: cpu\lDuration: 10s, Total samples = 1.12s (11.20%)\lShowing nodes accounting for 1.12s, 100% of 1.12s total\l"] }
|
||||
N1 [label="line1000\nfile1000.src\n1.10s (98.21%)" fontsize=24 shape=box tooltip="line1000 testdata/file1000.src (1.10s)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N1_0 [label = "key1:tag1\nkey2:tag1" fontsize=8 shape=box3d tooltip="1s"]
|
||||
subgraph cluster_L { "File: testbinary" [shape=box fontsize=16 label="File: testbinary\lType: cpu\lDuration: 10s, Total samples = 1.12s (11.20%)\lShowing nodes accounting for 1.12s, 100% of 1.12s total\l" tooltip="testbinary"] }
|
||||
N1 [label="line1000\n1.10s (98.21%)" id="node1" fontsize=24 shape=box tooltip="line1000 (1.10s)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N1_0 [label = "key1:tag1\nkey2:tag1" id="N1_0" fontsize=8 shape=box3d tooltip="1s"]
|
||||
N1 -> N1_0 [label=" 1s" weight=100 tooltip="1s" labeltooltip="1s"]
|
||||
N1_1 [label = "key1:tag2\nkey3:tag2" fontsize=8 shape=box3d tooltip="0.10s"]
|
||||
N1_1 [label = "key1:tag2\nkey3:tag2" id="N1_1" fontsize=8 shape=box3d tooltip="0.10s"]
|
||||
N1 -> N1_1 [label=" 0.10s" weight=100 tooltip="0.10s" labeltooltip="0.10s"]
|
||||
N2 [label="line3000\nfile3000.src\n0 of 1.12s (100%)" fontsize=8 shape=box tooltip="line3000 testdata/file3000.src (1.12s)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N3 [label="line3001\nfile3000.src\n0 of 1.11s (99.11%)" fontsize=8 shape=box tooltip="line3001 testdata/file3000.src (1.11s)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N4 [label="line3002\nfile3000.src\n0.01s (0.89%)\nof 1.02s (91.07%)" fontsize=10 shape=box tooltip="line3002 testdata/file3000.src (1.02s)" color="#b20400" fillcolor="#edd6d5"]
|
||||
N5 [label="line2001\nfile2000.src\n0.01s (0.89%)\nof 1.01s (90.18%)" fontsize=10 shape=box tooltip="line2001 testdata/file2000.src (1.01s)" color="#b20500" fillcolor="#edd6d5"]
|
||||
N6 [label="line2000\nfile2000.src\n0 of 1.01s (90.18%)" fontsize=8 shape=box tooltip="line2000 testdata/file2000.src (1.01s)" color="#b20500" fillcolor="#edd6d5"]
|
||||
N2 -> N3 [label=" 1.11s\n (inline)" weight=100 penwidth=5 color="#b20000" tooltip="line3000 testdata/file3000.src -> line3001 testdata/file3000.src (1.11s)" labeltooltip="line3000 testdata/file3000.src -> line3001 testdata/file3000.src (1.11s)"]
|
||||
N6 -> N5 [label=" 1.01s\n (inline)" weight=91 penwidth=5 color="#b20500" tooltip="line2000 testdata/file2000.src -> line2001 testdata/file2000.src (1.01s)" labeltooltip="line2000 testdata/file2000.src -> line2001 testdata/file2000.src (1.01s)"]
|
||||
N3 -> N4 [label=" 1.01s\n (inline)" weight=91 penwidth=5 color="#b20500" tooltip="line3001 testdata/file3000.src -> line3002 testdata/file3000.src (1.01s)" labeltooltip="line3001 testdata/file3000.src -> line3002 testdata/file3000.src (1.01s)"]
|
||||
N4 -> N6 [label=" 1.01s" weight=91 penwidth=5 color="#b20500" tooltip="line3002 testdata/file3000.src -> line2000 testdata/file2000.src (1.01s)" labeltooltip="line3002 testdata/file3000.src -> line2000 testdata/file2000.src (1.01s)"]
|
||||
N5 -> N1 [label=" 1s" weight=90 penwidth=5 color="#b20500" tooltip="line2001 testdata/file2000.src -> line1000 testdata/file1000.src (1s)" labeltooltip="line2001 testdata/file2000.src -> line1000 testdata/file1000.src (1s)"]
|
||||
N3 -> N1 [label=" 0.10s" weight=9 color="#b28b62" tooltip="line3001 testdata/file3000.src -> line1000 testdata/file1000.src (0.10s)" labeltooltip="line3001 testdata/file3000.src -> line1000 testdata/file1000.src (0.10s)"]
|
||||
N2 [label="line3000\n0 of 1.12s (100%)" id="node2" fontsize=8 shape=box tooltip="line3000 (1.12s)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N3 [label="line3001\n0 of 1.11s (99.11%)" id="node3" fontsize=8 shape=box tooltip="line3001 (1.11s)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N4 [label="line3002\n0.01s (0.89%)\nof 1.02s (91.07%)" id="node4" fontsize=10 shape=box tooltip="line3002 (1.02s)" color="#b20400" fillcolor="#edd6d5"]
|
||||
N5 [label="line2001\n0.01s (0.89%)\nof 1.01s (90.18%)" id="node5" fontsize=10 shape=box tooltip="line2001 (1.01s)" color="#b20500" fillcolor="#edd6d5"]
|
||||
N6 [label="line2000\n0 of 1.01s (90.18%)" id="node6" fontsize=8 shape=box tooltip="line2000 (1.01s)" color="#b20500" fillcolor="#edd6d5"]
|
||||
N2 -> N3 [label=" 1.11s\n (inline)" weight=100 penwidth=5 color="#b20000" tooltip="line3000 -> line3001 (1.11s)" labeltooltip="line3000 -> line3001 (1.11s)"]
|
||||
N6 -> N5 [label=" 1.01s\n (inline)" weight=91 penwidth=5 color="#b20500" tooltip="line2000 -> line2001 (1.01s)" labeltooltip="line2000 -> line2001 (1.01s)"]
|
||||
N3 -> N4 [label=" 1.01s\n (inline)" weight=91 penwidth=5 color="#b20500" tooltip="line3001 -> line3002 (1.01s)" labeltooltip="line3001 -> line3002 (1.01s)"]
|
||||
N4 -> N6 [label=" 1.01s" weight=91 penwidth=5 color="#b20500" tooltip="line3002 -> line2000 (1.01s)" labeltooltip="line3002 -> line2000 (1.01s)"]
|
||||
N5 -> N1 [label=" 1s" weight=90 penwidth=5 color="#b20500" tooltip="line2001 -> line1000 (1s)" labeltooltip="line2001 -> line1000 (1s)"]
|
||||
N3 -> N1 [label=" 0.10s" weight=9 color="#b28b62" tooltip="line3001 -> line1000 (0.10s)" labeltooltip="line3001 -> line1000 (0.10s)"]
|
||||
}
|
||||
|
@ -1,8 +1,8 @@
|
||||
Showing nodes accounting for 1.12s, 100% of 1.12s total
|
||||
flat flat% sum% cum cum%
|
||||
1.10s 98.21% 98.21% 1.10s 98.21% line1000 testdata/file1000.src
|
||||
0.01s 0.89% 99.11% 1.01s 90.18% line2001 testdata/file2000.src (inline)
|
||||
0.01s 0.89% 100% 1.02s 91.07% line3002 testdata/file3000.src (inline)
|
||||
0 0% 100% 1.01s 90.18% line2000 testdata/file2000.src
|
||||
0 0% 100% 1.12s 100% line3000 testdata/file3000.src
|
||||
0 0% 100% 1.11s 99.11% line3001 testdata/file3000.src (inline)
|
||||
1.10s 98.21% 98.21% 1.10s 98.21% line1000
|
||||
0.01s 0.89% 99.11% 1.01s 90.18% line2001 (inline)
|
||||
0.01s 0.89% 100% 1.02s 91.07% line3002 (inline)
|
||||
0 0% 100% 1.01s 90.18% line2000
|
||||
0 0% 100% 1.12s 100% line3000
|
||||
0 0% 100% 1.11s 99.11% line3001 (inline)
|
||||
|
14
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.peek
generated
vendored
@ -2,12 +2,12 @@ Showing nodes accounting for 1.12s, 100% of 1.12s total
|
||||
----------------------------------------------------------+-------------
|
||||
flat flat% sum% cum cum% calls calls% + context
|
||||
----------------------------------------------------------+-------------
|
||||
1.01s 100% | line2000 testdata/file2000.src (inline)
|
||||
0.01s 0.89% 0.89% 1.01s 90.18% | line2001 testdata/file2000.src
|
||||
1s 99.01% | line1000 testdata/file1000.src
|
||||
1.01s 100% | line2000 (inline)
|
||||
0.01s 0.89% 0.89% 1.01s 90.18% | line2001
|
||||
1s 99.01% | line1000
|
||||
----------------------------------------------------------+-------------
|
||||
1.11s 100% | line3000 testdata/file3000.src (inline)
|
||||
0 0% 0.89% 1.11s 99.11% | line3001 testdata/file3000.src
|
||||
1.01s 90.99% | line3002 testdata/file3000.src (inline)
|
||||
0.10s 9.01% | line1000 testdata/file1000.src
|
||||
1.11s 100% | line3000 (inline)
|
||||
0 0% 0.89% 1.11s 99.11% | line3001
|
||||
1.01s 90.99% | line3002 (inline)
|
||||
0.10s 9.01% | line1000
|
||||
----------------------------------------------------------+-------------
|
||||
|
20
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.tags
generated
vendored
@ -1,13 +1,13 @@
|
||||
key1: Total 1120
|
||||
1000 (89.29%): tag1
|
||||
100 ( 8.93%): tag2
|
||||
10 ( 0.89%): tag3
|
||||
10 ( 0.89%): tag4
|
||||
key1: Total 1.1s
|
||||
1.0s (89.29%): tag1
|
||||
100.0ms ( 8.93%): tag2
|
||||
10.0ms ( 0.89%): tag3
|
||||
10.0ms ( 0.89%): tag4
|
||||
|
||||
key2: Total 1020
|
||||
1010 (99.02%): tag1
|
||||
10 ( 0.98%): tag2
|
||||
key2: Total 1.0s
|
||||
1.0s (99.02%): tag1
|
||||
10.0ms ( 0.98%): tag2
|
||||
|
||||
key3: Total 100
|
||||
100 ( 100%): tag2
|
||||
key3: Total 100.0ms
|
||||
100.0ms ( 100%): tag2
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
key1: Total 100
|
||||
100 ( 100%): tag2
|
||||
key1: Total 100.0ms
|
||||
100.0ms ( 100%): tag2
|
||||
|
||||
key3: Total 100
|
||||
100 ( 100%): tag2
|
||||
key3: Total 100.0ms
|
||||
100.0ms ( 100%): tag2
|
||||
|
||||
|
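The golden files above show the tags report now printing totals in time units (`Total 1.1s`, `100.0ms`) rather than raw sample counts (`Total 1120`, `100`). A minimal sketch of that kind of unit formatting, assuming one sample unit equals one millisecond (an assumption inferred from the values above, not pprof's actual implementation):

```go
package main

import (
	"fmt"
	"time"
)

// formatSampleDuration renders a raw sample count as a human-readable
// duration with one decimal place, assuming each unit is 1ms (so the
// 1120 in the old golden maps to the 1.1s in the new one).
func formatSampleDuration(samples int64) string {
	d := time.Duration(samples) * time.Millisecond
	if d >= time.Second {
		return fmt.Sprintf("%.1fs", d.Seconds())
	}
	return fmt.Sprintf("%.1fms", float64(d)/float64(time.Millisecond))
}

func main() {
	fmt.Println(formatSampleDuration(1120)) // 1.1s
	fmt.Println(formatSampleDuration(100))  // 100.0ms
	fmt.Println(formatSampleDuration(10))   // 10.0ms
}
```

With that assumption the outputs match the expected strings in the goldens above (1.1s, 100.0ms, 10.0ms).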
32
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.cpu.traces
generated
vendored
@ -4,29 +4,29 @@ Duration: 10s, Total samples = 1.12s (11.20%)
|
||||
-----------+-------------------------------------------------------
|
||||
key1: tag1
|
||||
key2: tag1
|
||||
1s line1000 testdata/file1000.src
|
||||
line2001 testdata/file2000.src
|
||||
line2000 testdata/file2000.src
|
||||
line3002 testdata/file3000.src
|
||||
line3001 testdata/file3000.src
|
||||
line3000 testdata/file3000.src
|
||||
1s line1000
|
||||
line2001
|
||||
line2000
|
||||
line3002
|
||||
line3001
|
||||
line3000
|
||||
-----------+-------------------------------------------------------
|
||||
key1: tag2
|
||||
key3: tag2
|
||||
100ms line1000 testdata/file1000.src
|
||||
line3001 testdata/file3000.src
|
||||
line3000 testdata/file3000.src
|
||||
100ms line1000
|
||||
line3001
|
||||
line3000
|
||||
-----------+-------------------------------------------------------
|
||||
key1: tag3
|
||||
key2: tag2
|
||||
10ms line2001 testdata/file2000.src
|
||||
line2000 testdata/file2000.src
|
||||
line3002 testdata/file3000.src
|
||||
line3000 testdata/file3000.src
|
||||
10ms line2001
|
||||
line2000
|
||||
line3002
|
||||
line3000
|
||||
-----------+-------------------------------------------------------
|
||||
key1: tag4
|
||||
key2: tag1
|
||||
10ms line3002 testdata/file3000.src
|
||||
line3001 testdata/file3000.src
|
||||
line3000 testdata/file3000.src
|
||||
10ms line3002
|
||||
line3001
|
||||
line3000
|
||||
-----------+-------------------------------------------------------
|
||||
|
@ -1,3 +1,5 @@
|
||||
Active filters:
|
||||
focus=[24]00
|
||||
Showing nodes accounting for 62.50MB, 63.37% of 98.63MB total
|
||||
Dropped 2 nodes (cum <= 4.93MB)
|
||||
----------------------------------------------------------+-------------
|
||||
|
@ -1,19 +1,21 @@
|
||||
Active filters:
|
||||
focus=[24]00
|
||||
Showing nodes accounting for 62.50MB, 98.46% of 63.48MB total
|
||||
Dropped 2 nodes (cum <= 3.17MB)
|
||||
----------------------------------------------------------+-------------
|
||||
flat flat% sum% cum cum% calls calls% + context
|
||||
----------------------------------------------------------+-------------
|
||||
63.48MB 100% | line3002 testdata/file3000.src
|
||||
0 0% 0% 63.48MB 100% | line2000 testdata/file2000.src
|
||||
63.48MB 100% | line2001 testdata/file2000.src (inline)
|
||||
63.48MB 100% | line3002
|
||||
0 0% 0% 63.48MB 100% | line2000
|
||||
63.48MB 100% | line2001 (inline)
|
||||
----------------------------------------------------------+-------------
|
||||
63.48MB 100% | line2000 testdata/file2000.src (inline)
|
||||
62.50MB 98.46% 98.46% 63.48MB 100% | line2001 testdata/file2000.src
|
||||
63.48MB 100% | line2000 (inline)
|
||||
62.50MB 98.46% 98.46% 63.48MB 100% | line2001
|
||||
----------------------------------------------------------+-------------
|
||||
0 0% 98.46% 63.48MB 100% | line3000 testdata/file3000.src
|
||||
63.48MB 100% | line3002 testdata/file3000.src (inline)
|
||||
0 0% 98.46% 63.48MB 100% | line3000
|
||||
63.48MB 100% | line3002 (inline)
|
||||
----------------------------------------------------------+-------------
|
||||
63.48MB 100% | line3000 testdata/file3000.src (inline)
|
||||
0 0% 98.46% 63.48MB 100% | line3002 testdata/file3000.src
|
||||
63.48MB 100% | line2000 testdata/file2000.src
|
||||
63.48MB 100% | line3000 (inline)
|
||||
0 0% 98.46% 63.48MB 100% | line3002
|
||||
63.48MB 100% | line2000
|
||||
----------------------------------------------------------+-------------
|
||||
|
8
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.flat.files.text.focus
generated
vendored
Normal file
@ -0,0 +1,8 @@
|
||||
Active filters:
|
||||
focus=[12]00
|
||||
taghide=[X3]00
|
||||
Showing nodes accounting for 67.38MB, 68.32% of 98.63MB total
|
||||
flat flat% sum% cum cum%
|
||||
62.50MB 63.37% 63.37% 63.48MB 64.36% testdata/file2000.src
|
||||
4.88MB 4.95% 68.32% 4.88MB 4.95% testdata/file1000.src
|
||||
0 0% 68.32% 67.38MB 68.32% testdata/file3000.src
|
@ -1,8 +1,8 @@
|
||||
Showing nodes accounting for 150, 100% of 150 total
|
||||
flat flat% sum% cum cum%
|
||||
80 53.33% 53.33% 130 86.67% line3002 testdata/file3000.src (inline)
|
||||
40 26.67% 80.00% 50 33.33% line2001 testdata/file2000.src (inline)
|
||||
30 20.00% 100% 30 20.00% line1000 testdata/file1000.src
|
||||
0 0% 100% 50 33.33% line2000 testdata/file2000.src
|
||||
0 0% 100% 150 100% line3000 testdata/file3000.src
|
||||
0 0% 100% 110 73.33% line3001 testdata/file3000.src (inline)
|
||||
80 53.33% 53.33% 130 86.67% line3002 (inline)
|
||||
40 26.67% 80.00% 50 33.33% line2001 (inline)
|
||||
30 20.00% 100% 30 20.00% line1000
|
||||
0 0% 100% 50 33.33% line2000
|
||||
0 0% 100% 150 100% line3000
|
||||
0 0% 100% 110 73.33% line3001 (inline)
|
||||
|
@ -1,13 +1,13 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: inuse_space\lShowing nodes accounting for 62.50MB, 63.37% of 98.63MB total\l"] }
|
||||
N1 [label="line2001\nfile2000.src\n62.50MB (63.37%)" fontsize=24 shape=box tooltip="line2001 testdata/file2000.src (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
NN1_0 [label = "1.56MB" fontsize=8 shape=box3d tooltip="62.50MB"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: inuse_space\lActive filters:\l tagfocus=1mb:2gb\lShowing nodes accounting for 62.50MB, 63.37% of 98.63MB total\l"] }
|
||||
N1 [label="line2001\n62.50MB (63.37%)" id="node1" fontsize=24 shape=box tooltip="line2001 (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
NN1_0 [label = "1.56MB" id="NN1_0" fontsize=8 shape=box3d tooltip="62.50MB"]
|
||||
N1 -> NN1_0 [label=" 62.50MB" weight=100 tooltip="62.50MB" labeltooltip="62.50MB"]
|
||||
N2 [label="line3000\nfile3000.src\n0 of 62.50MB (63.37%)" fontsize=8 shape=box tooltip="line3000 testdata/file3000.src (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N3 [label="line2000\nfile2000.src\n0 of 62.50MB (63.37%)" fontsize=8 shape=box tooltip="line2000 testdata/file2000.src (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N4 [label="line3002\nfile3000.src\n0 of 62.50MB (63.37%)" fontsize=8 shape=box tooltip="line3002 testdata/file3000.src (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N3 -> N1 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line2000 testdata/file2000.src -> line2001 testdata/file2000.src (62.50MB)" labeltooltip="line2000 testdata/file2000.src -> line2001 testdata/file2000.src (62.50MB)"]
|
||||
N2 -> N4 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line3000 testdata/file3000.src -> line3002 testdata/file3000.src (62.50MB)" labeltooltip="line3000 testdata/file3000.src -> line3002 testdata/file3000.src (62.50MB)"]
|
||||
N4 -> N3 [label=" 62.50MB" weight=64 penwidth=4 color="#b21600" tooltip="line3002 testdata/file3000.src -> line2000 testdata/file2000.src (62.50MB)" labeltooltip="line3002 testdata/file3000.src -> line2000 testdata/file2000.src (62.50MB)"]
|
||||
N2 [label="line3000\n0 of 62.50MB (63.37%)" id="node2" fontsize=8 shape=box tooltip="line3000 (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N3 [label="line2000\n0 of 62.50MB (63.37%)" id="node3" fontsize=8 shape=box tooltip="line2000 (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N4 [label="line3002\n0 of 62.50MB (63.37%)" id="node4" fontsize=8 shape=box tooltip="line3002 (62.50MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N3 -> N1 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line2000 -> line2001 (62.50MB)" labeltooltip="line2000 -> line2001 (62.50MB)"]
|
||||
N2 -> N4 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line3000 -> line3002 (62.50MB)" labeltooltip="line3000 -> line3002 (62.50MB)"]
|
||||
N4 -> N3 [label=" 62.50MB" weight=64 penwidth=4 color="#b21600" tooltip="line3002 -> line2000 (62.50MB)" labeltooltip="line3002 -> line2000 (62.50MB)"]
|
||||
}
|
||||
|
@ -1,16 +1,16 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: inuse_space\lShowing nodes accounting for 36.13MB, 36.63% of 98.63MB total\lDropped 2 nodes (cum <= 4.93MB)\l"] }
|
||||
N1 [label="line3002\nfile3000.src\n31.25MB (31.68%)\nof 32.23MB (32.67%)" fontsize=24 shape=box tooltip="line3002 testdata/file3000.src (32.23MB)" color="#b23200" fillcolor="#eddcd5"]
|
||||
NN1_0 [label = "400kB" fontsize=8 shape=box3d tooltip="31.25MB"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: inuse_space\lActive filters:\l tagfocus=30kb:\l tagignore=1mb:2mb\lShowing nodes accounting for 36.13MB, 36.63% of 98.63MB total\lDropped 2 nodes (cum <= 4.93MB)\l"] }
|
||||
N1 [label="line3002\n31.25MB (31.68%)\nof 32.23MB (32.67%)" id="node1" fontsize=24 shape=box tooltip="line3002 (32.23MB)" color="#b23200" fillcolor="#eddcd5"]
|
||||
NN1_0 [label = "400kB" id="NN1_0" fontsize=8 shape=box3d tooltip="31.25MB"]
|
||||
N1 -> NN1_0 [label=" 31.25MB" weight=100 tooltip="31.25MB" labeltooltip="31.25MB"]
|
||||
N2 [label="line3000\nfile3000.src\n0 of 36.13MB (36.63%)" fontsize=8 shape=box tooltip="line3000 testdata/file3000.src (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
|
||||
N3 [label="line3001\nfile3000.src\n0 of 36.13MB (36.63%)" fontsize=8 shape=box tooltip="line3001 testdata/file3000.src (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
|
||||
N4 [label="line1000\nfile1000.src\n4.88MB (4.95%)" fontsize=15 shape=box tooltip="line1000 testdata/file1000.src (4.88MB)" color="#b2a086" fillcolor="#edeae7"]
|
||||
NN4_0 [label = "200kB" fontsize=8 shape=box3d tooltip="3.91MB"]
|
||||
N2 [label="line3000\n0 of 36.13MB (36.63%)" id="node2" fontsize=8 shape=box tooltip="line3000 (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
|
||||
N3 [label="line3001\n0 of 36.13MB (36.63%)" id="node3" fontsize=8 shape=box tooltip="line3001 (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
|
||||
N4 [label="line1000\n4.88MB (4.95%)" id="node4" fontsize=15 shape=box tooltip="line1000 (4.88MB)" color="#b2a086" fillcolor="#edeae7"]
|
||||
NN4_0 [label = "200kB" id="NN4_0" fontsize=8 shape=box3d tooltip="3.91MB"]
|
||||
N4 -> NN4_0 [label=" 3.91MB" weight=100 tooltip="3.91MB" labeltooltip="3.91MB"]
|
||||
N2 -> N3 [label=" 36.13MB\n (inline)" weight=37 penwidth=2 color="#b22e00" tooltip="line3000 testdata/file3000.src -> line3001 testdata/file3000.src (36.13MB)" labeltooltip="line3000 testdata/file3000.src -> line3001 testdata/file3000.src (36.13MB)"]
|
||||
N3 -> N1 [label=" 32.23MB\n (inline)" weight=33 penwidth=2 color="#b23200" tooltip="line3001 testdata/file3000.src -> line3002 testdata/file3000.src (32.23MB)" labeltooltip="line3001 testdata/file3000.src -> line3002 testdata/file3000.src (32.23MB)"]
|
||||
N3 -> N4 [label=" 3.91MB" weight=4 color="#b2a58f" tooltip="line3001 testdata/file3000.src -> line1000 testdata/file1000.src (3.91MB)" labeltooltip="line3001 testdata/file3000.src -> line1000 testdata/file1000.src (3.91MB)"]
|
||||
N1 -> N4 [label=" 0.98MB" color="#b2b0a9" tooltip="line3002 testdata/file3000.src ... line1000 testdata/file1000.src (0.98MB)" labeltooltip="line3002 testdata/file3000.src ... line1000 testdata/file1000.src (0.98MB)" style="dotted" minlen=2]
|
||||
N2 -> N3 [label=" 36.13MB\n (inline)" weight=37 penwidth=2 color="#b22e00" tooltip="line3000 -> line3001 (36.13MB)" labeltooltip="line3000 -> line3001 (36.13MB)"]
|
||||
N3 -> N1 [label=" 32.23MB\n (inline)" weight=33 penwidth=2 color="#b23200" tooltip="line3001 -> line3002 (32.23MB)" labeltooltip="line3001 -> line3002 (32.23MB)"]
|
||||
N3 -> N4 [label=" 3.91MB" weight=4 color="#b2a58f" tooltip="line3001 -> line1000 (3.91MB)" labeltooltip="line3001 -> line1000 (3.91MB)"]
|
||||
N1 -> N4 [label=" 0.98MB" color="#b2b0a9" tooltip="line3002 ... line1000 (0.98MB)" labeltooltip="line3002 ... line1000 (0.98MB)" style="dotted" minlen=2]
|
||||
}
|
||||
|
@ -1,16 +1,16 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: inuse_space\lShowing nodes accounting for 67.38MB, 68.32% of 98.63MB total\l"] }
|
||||
N1 [label="line3000\nfile3000.src:4\n0 of 67.38MB (68.32%)" fontsize=8 shape=box tooltip="line3000 testdata/file3000.src:4 (67.38MB)" color="#b21300" fillcolor="#edd7d5"]
|
||||
N2 [label="line2001\nfile2000.src:2\n62.50MB (63.37%)\nof 63.48MB (64.36%)" fontsize=24 shape=box tooltip="line2001 testdata/file2000.src:2 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
NN2_0 [label = "1.56MB" fontsize=8 shape=box3d tooltip="62.50MB"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: inuse_space\lActive filters:\l focus=[12]00\lShowing nodes accounting for 67.38MB, 68.32% of 98.63MB total\l"] }
|
||||
N1 [label="line3000\nfile3000.src:4\n0 of 67.38MB (68.32%)" id="node1" fontsize=8 shape=box tooltip="line3000 testdata/file3000.src:4 (67.38MB)" color="#b21300" fillcolor="#edd7d5"]
|
||||
N2 [label="line2001\nfile2000.src:2\n62.50MB (63.37%)\nof 63.48MB (64.36%)" id="node2" fontsize=24 shape=box tooltip="line2001 testdata/file2000.src:2 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
NN2_0 [label = "1.56MB" id="NN2_0" fontsize=8 shape=box3d tooltip="62.50MB"]
|
||||
N2 -> NN2_0 [label=" 62.50MB" weight=100 tooltip="62.50MB" labeltooltip="62.50MB"]
|
||||
N3 [label="line1000\nfile1000.src:1\n4.88MB (4.95%)" fontsize=13 shape=box tooltip="line1000 testdata/file1000.src:1 (4.88MB)" color="#b2a086" fillcolor="#edeae7"]
|
||||
NN3_0 [label = "200kB" fontsize=8 shape=box3d tooltip="3.91MB"]
|
||||
N3 [label="line1000\nfile1000.src:1\n4.88MB (4.95%)" id="node3" fontsize=13 shape=box tooltip="line1000 testdata/file1000.src:1 (4.88MB)" color="#b2a086" fillcolor="#edeae7"]
|
||||
NN3_0 [label = "200kB" id="NN3_0" fontsize=8 shape=box3d tooltip="3.91MB"]
|
||||
N3 -> NN3_0 [label=" 3.91MB" weight=100 tooltip="3.91MB" labeltooltip="3.91MB"]
|
||||
N4 [label="line3002\nfile3000.src:3\n0 of 63.48MB (64.36%)" fontsize=8 shape=box tooltip="line3002 testdata/file3000.src:3 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N5 [label="line3001\nfile3000.src:2\n0 of 4.88MB (4.95%)" fontsize=8 shape=box tooltip="line3001 testdata/file3000.src:2 (4.88MB)" color="#b2a086" fillcolor="#edeae7"]
|
||||
N6 [label="line2000\nfile2000.src:3\n0 of 63.48MB (64.36%)" fontsize=8 shape=box tooltip="line2000 testdata/file2000.src:3 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N4 [label="line3002\nfile3000.src:3\n0 of 63.48MB (64.36%)" id="node4" fontsize=8 shape=box tooltip="line3002 testdata/file3000.src:3 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N5 [label="line3001\nfile3000.src:2\n0 of 4.88MB (4.95%)" id="node5" fontsize=8 shape=box tooltip="line3001 testdata/file3000.src:2 (4.88MB)" color="#b2a086" fillcolor="#edeae7"]
|
||||
N6 [label="line2000\nfile2000.src:3\n0 of 63.48MB (64.36%)" id="node6" fontsize=8 shape=box tooltip="line2000 testdata/file2000.src:3 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N6 -> N2 [label=" 63.48MB\n (inline)" weight=65 penwidth=4 color="#b21600" tooltip="line2000 testdata/file2000.src:3 -> line2001 testdata/file2000.src:2 (63.48MB)" labeltooltip="line2000 testdata/file2000.src:3 -> line2001 testdata/file2000.src:2 (63.48MB)"]
|
||||
N4 -> N6 [label=" 63.48MB" weight=65 penwidth=4 color="#b21600" tooltip="line3002 testdata/file3000.src:3 -> line2000 testdata/file2000.src:3 (63.48MB)" labeltooltip="line3002 testdata/file3000.src:3 -> line2000 testdata/file2000.src:3 (63.48MB)"]
|
||||
N1 -> N4 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line3000 testdata/file3000.src:4 -> line3002 testdata/file3000.src:3 (62.50MB)" labeltooltip="line3000 testdata/file3000.src:4 -> line3002 testdata/file3000.src:3 (62.50MB)"]
|
||||
|
10
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.tags
generated
vendored
@ -1,6 +1,6 @@
|
||||
bytes: Total 150
|
||||
80 (53.33%): 400kB
|
||||
40 (26.67%): 1.56MB
|
||||
20 (13.33%): 200kB
|
||||
10 ( 6.67%): 100kB
|
||||
bytes: Total 98.6MB
|
||||
62.5MB (63.37%): 1.56MB
|
||||
31.2MB (31.68%): 400kB
|
||||
3.9MB ( 3.96%): 200kB
|
||||
1000.0kB ( 0.99%): 100kB
|
||||
|
||||
|
10
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap.tags.unit
generated
vendored
@ -1,6 +1,6 @@
|
||||
bytes: Total 150
|
||||
80 (53.33%): 409600B
|
||||
40 (26.67%): 1638400B
|
||||
20 (13.33%): 204800B
|
||||
10 ( 6.67%): 102400B
|
||||
bytes: Total 103424000.0B
|
||||
65536000.0B (63.37%): 1638400B
|
||||
32768000.0B (31.68%): 409600B
|
||||
4096000.0B ( 3.96%): 204800B
|
||||
1024000.0B ( 0.99%): 102400B
|
||||
|
||||
|
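The two goldens above exercise byte formatting for tag totals: auto-scaled units in `pprof.heap.tags` (`98.6MB`, `1000.0kB`) and a single fixed unit in `pprof.heap.tags.unit` (`103424000.0B`). A rough sketch of auto-scaling with binary multiples that reproduces the scaled strings (illustrative only; pprof's own formatting code is not reproduced here):

```go
package main

import "fmt"

// formatBytes scales a byte count to kB/MB using binary multiples,
// mirroring the style of the golden output above (62.5MB, 1000.0kB).
func formatBytes(b float64) string {
	switch {
	case b >= 1<<20:
		return fmt.Sprintf("%.1fMB", b/(1<<20))
	case b >= 1<<10:
		return fmt.Sprintf("%.1fkB", b/(1<<10))
	default:
		return fmt.Sprintf("%.0fB", b)
	}
}

func main() {
	fmt.Println(formatBytes(103424000)) // 98.6MB  (total)
	fmt.Println(formatBytes(65536000))  // 62.5MB
	fmt.Println(formatBytes(1024000))   // 1000.0kB
}
```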
@ -1,8 +1,8 @@
|
||||
Showing nodes accounting for 150, 100% of 150 total
|
||||
flat flat% sum% cum cum%
|
||||
80 53.33% 53.33% 130 86.67% line3002 testdata/file3000.src (inline)
|
||||
40 26.67% 80.00% 50 33.33% line2001 testdata/file2000.src (inline)
|
||||
30 20.00% 100% 30 20.00% line1000 testdata/file1000.src
|
||||
0 0% 100% 50 33.33% line2000 testdata/file2000.src
|
||||
0 0% 100% 150 100% line3000 testdata/file3000.src
|
||||
0 0% 100% 110 73.33% line3001 testdata/file3000.src (inline)
|
||||
80 53.33% 53.33% 130 86.67% line3002 (inline)
|
||||
40 26.67% 80.00% 50 33.33% line2001 (inline)
|
||||
30 20.00% 100% 30 20.00% line1000
|
||||
0 0% 100% 50 33.33% line2000
|
||||
0 0% 100% 150 100% line3000
|
||||
0 0% 100% 110 73.33% line3001 (inline)
|
||||
|
14
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap_alloc.flat.alloc_space.dot
generated
vendored
Normal file
@ -0,0 +1,14 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: alloc_space\lActive filters:\l tagshow=[2]00\lShowing nodes accounting for 93.75MB, 95.05% of 98.63MB total\lDropped 1 node (cum <= 4.93MB)\l"] }
|
||||
N1 [label="line3002\n31.25MB (31.68%)\nof 94.73MB (96.04%)" id="node1" fontsize=20 shape=box tooltip="line3002 (94.73MB)" color="#b20200" fillcolor="#edd5d5"]
|
||||
N2 [label="line3000\n0 of 98.63MB (100%)" id="node2" fontsize=8 shape=box tooltip="line3000 (98.63MB)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N3 [label="line2001\n62.50MB (63.37%)\nof 63.48MB (64.36%)" id="node3" fontsize=24 shape=box tooltip="line2001 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N4 [label="line2000\n0 of 63.48MB (64.36%)" id="node4" fontsize=8 shape=box tooltip="line2000 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N5 [label="line3001\n0 of 36.13MB (36.63%)" id="node5" fontsize=8 shape=box tooltip="line3001 (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
|
||||
N4 -> N3 [label=" 63.48MB\n (inline)" weight=65 penwidth=4 color="#b21600" tooltip="line2000 -> line2001 (63.48MB)" labeltooltip="line2000 -> line2001 (63.48MB)"]
|
||||
N1 -> N4 [label=" 63.48MB" weight=65 penwidth=4 color="#b21600" tooltip="line3002 -> line2000 (63.48MB)" labeltooltip="line3002 -> line2000 (63.48MB)"]
|
||||
N2 -> N1 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line3000 -> line3002 (62.50MB)" labeltooltip="line3000 -> line3002 (62.50MB)"]
|
||||
N2 -> N5 [label=" 36.13MB\n (inline)" weight=37 penwidth=2 color="#b22e00" tooltip="line3000 -> line3001 (36.13MB)" labeltooltip="line3000 -> line3001 (36.13MB)"]
|
||||
N5 -> N1 [label=" 32.23MB\n (inline)" weight=33 penwidth=2 color="#b23200" tooltip="line3001 -> line3002 (32.23MB)" labeltooltip="line3001 -> line3002 (32.23MB)"]
|
||||
}
|
@ -1,18 +1,18 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: alloc_space\lShowing nodes accounting for 93.75MB, 95.05% of 98.63MB total\lDropped 1 node (cum <= 4.93MB)\l"] }
|
||||
N1 [label="line3002\nfile3000.src\n31.25MB (31.68%)\nof 94.73MB (96.04%)" fontsize=20 shape=box tooltip="line3002 testdata/file3000.src (94.73MB)" color="#b20200" fillcolor="#edd5d5"]
|
||||
NN1_0 [label = "400kB" fontsize=8 shape=box3d tooltip="31.25MB"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: alloc_space\lActive filters:\l focus=[234]00\lShowing nodes accounting for 93.75MB, 95.05% of 98.63MB total\lDropped 1 node (cum <= 4.93MB)\l"] }
|
||||
N1 [label="line3002\n31.25MB (31.68%)\nof 94.73MB (96.04%)" id="node1" fontsize=20 shape=box tooltip="line3002 (94.73MB)" color="#b20200" fillcolor="#edd5d5"]
|
||||
NN1_0 [label = "400kB" id="NN1_0" fontsize=8 shape=box3d tooltip="31.25MB"]
|
||||
N1 -> NN1_0 [label=" 31.25MB" weight=100 tooltip="31.25MB" labeltooltip="31.25MB"]
|
||||
N2 [label="line3000\nfile3000.src\n0 of 98.63MB (100%)" fontsize=8 shape=box tooltip="line3000 testdata/file3000.src (98.63MB)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N3 [label="line2001\nfile2000.src\n62.50MB (63.37%)\nof 63.48MB (64.36%)" fontsize=24 shape=box tooltip="line2001 testdata/file2000.src (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
NN3_0 [label = "1.56MB" fontsize=8 shape=box3d tooltip="62.50MB"]
|
||||
N2 [label="line3000\n0 of 98.63MB (100%)" id="node2" fontsize=8 shape=box tooltip="line3000 (98.63MB)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N3 [label="line2001\n62.50MB (63.37%)\nof 63.48MB (64.36%)" id="node3" fontsize=24 shape=box tooltip="line2001 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
NN3_0 [label = "1.56MB" id="NN3_0" fontsize=8 shape=box3d tooltip="62.50MB"]
|
||||
N3 -> NN3_0 [label=" 62.50MB" weight=100 tooltip="62.50MB" labeltooltip="62.50MB"]
|
||||
N4 [label="line2000\nfile2000.src\n0 of 63.48MB (64.36%)" fontsize=8 shape=box tooltip="line2000 testdata/file2000.src (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N5 [label="line3001\nfile3000.src\n0 of 36.13MB (36.63%)" fontsize=8 shape=box tooltip="line3001 testdata/file3000.src (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
|
||||
N4 -> N3 [label=" 63.48MB\n (inline)" weight=65 penwidth=4 color="#b21600" tooltip="line2000 testdata/file2000.src -> line2001 testdata/file2000.src (63.48MB)" labeltooltip="line2000 testdata/file2000.src -> line2001 testdata/file2000.src (63.48MB)"]
|
||||
N1 -> N4 [label=" 63.48MB" weight=65 penwidth=4 color="#b21600" tooltip="line3002 testdata/file3000.src -> line2000 testdata/file2000.src (63.48MB)" labeltooltip="line3002 testdata/file3000.src -> line2000 testdata/file2000.src (63.48MB)" minlen=2]
|
||||
N2 -> N1 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line3000 testdata/file3000.src -> line3002 testdata/file3000.src (62.50MB)" labeltooltip="line3000 testdata/file3000.src -> line3002 testdata/file3000.src (62.50MB)"]
|
||||
N2 -> N5 [label=" 36.13MB\n (inline)" weight=37 penwidth=2 color="#b22e00" tooltip="line3000 testdata/file3000.src -> line3001 testdata/file3000.src (36.13MB)" labeltooltip="line3000 testdata/file3000.src -> line3001 testdata/file3000.src (36.13MB)"]
|
||||
N5 -> N1 [label=" 32.23MB\n (inline)" weight=33 penwidth=2 color="#b23200" tooltip="line3001 testdata/file3000.src -> line3002 testdata/file3000.src (32.23MB)" labeltooltip="line3001 testdata/file3000.src -> line3002 testdata/file3000.src (32.23MB)"]
|
||||
N4 [label="line2000\n0 of 63.48MB (64.36%)" id="node4" fontsize=8 shape=box tooltip="line2000 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N5 [label="line3001\n0 of 36.13MB (36.63%)" id="node5" fontsize=8 shape=box tooltip="line3001 (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
|
||||
N4 -> N3 [label=" 63.48MB\n (inline)" weight=65 penwidth=4 color="#b21600" tooltip="line2000 -> line2001 (63.48MB)" labeltooltip="line2000 -> line2001 (63.48MB)"]
|
||||
N1 -> N4 [label=" 63.48MB" weight=65 penwidth=4 color="#b21600" tooltip="line3002 -> line2000 (63.48MB)" labeltooltip="line3002 -> line2000 (63.48MB)" minlen=2]
|
||||
N2 -> N1 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line3000 -> line3002 (62.50MB)" labeltooltip="line3000 -> line3002 (62.50MB)"]
|
||||
N2 -> N5 [label=" 36.13MB\n (inline)" weight=37 penwidth=2 color="#b22e00" tooltip="line3000 -> line3001 (36.13MB)" labeltooltip="line3000 -> line3001 (36.13MB)"]
|
||||
N5 -> N1 [label=" 32.23MB\n (inline)" weight=33 penwidth=2 color="#b23200" tooltip="line3001 -> line3002 (32.23MB)" labeltooltip="line3001 -> line3002 (32.23MB)"]
|
||||
}
|
||||
|
@ -1,11 +1,11 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: alloc_space\lShowing nodes accounting for 93.75MB, 95.05% of 98.63MB total\lDropped 1 node (cum <= 4.93MB)\l"] }
|
||||
N1 [label="line3000\nfile3000.src\n62.50MB (63.37%)\nof 98.63MB (100%)" fontsize=24 shape=box tooltip="line3000 testdata/file3000.src (98.63MB)" color="#b20000" fillcolor="#edd5d5"]
|
||||
NN1_0 [label = "1.56MB" fontsize=8 shape=box3d tooltip="62.50MB"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: alloc_space\lActive filters:\l hide=line.*1?23?\lShowing nodes accounting for 93.75MB, 95.05% of 98.63MB total\lDropped 1 node (cum <= 4.93MB)\l"] }
|
||||
N1 [label="line3000\n62.50MB (63.37%)\nof 98.63MB (100%)" id="node1" fontsize=24 shape=box tooltip="line3000 (98.63MB)" color="#b20000" fillcolor="#edd5d5"]
|
||||
NN1_0 [label = "1.56MB" id="NN1_0" fontsize=8 shape=box3d tooltip="62.50MB"]
|
||||
N1 -> NN1_0 [label=" 62.50MB" weight=100 tooltip="62.50MB" labeltooltip="62.50MB"]
|
||||
N2 [label="line3001\nfile3000.src\n31.25MB (31.68%)\nof 36.13MB (36.63%)" fontsize=20 shape=box tooltip="line3001 testdata/file3000.src (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
|
||||
NN2_0 [label = "400kB" fontsize=8 shape=box3d tooltip="31.25MB"]
|
||||
N2 [label="line3001\n31.25MB (31.68%)\nof 36.13MB (36.63%)" id="node2" fontsize=20 shape=box tooltip="line3001 (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
|
||||
NN2_0 [label = "400kB" id="NN2_0" fontsize=8 shape=box3d tooltip="31.25MB"]
|
||||
N2 -> NN2_0 [label=" 31.25MB" weight=100 tooltip="31.25MB" labeltooltip="31.25MB"]
|
||||
N1 -> N2 [label=" 36.13MB\n (inline)" weight=37 penwidth=2 color="#b22e00" tooltip="line3000 testdata/file3000.src -> line3001 testdata/file3000.src (36.13MB)" labeltooltip="line3000 testdata/file3000.src -> line3001 testdata/file3000.src (36.13MB)" minlen=2]
|
||||
N1 -> N2 [label=" 36.13MB\n (inline)" weight=37 penwidth=2 color="#b22e00" tooltip="line3000 -> line3001 (36.13MB)" labeltooltip="line3000 -> line3001 (36.13MB)" minlen=2]
|
||||
}
|
||||
|
8
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap_request.tags.focus
generated
vendored
Normal file
@ -0,0 +1,8 @@
|
||||
bytes: Total 93.8MB
|
||||
62.5MB (66.67%): 1.56MB
|
||||
31.2MB (33.33%): 400kB
|
||||
|
||||
request: Total 93.8MB
|
||||
62.5MB (66.67%): 1.56MB
|
||||
31.2MB (33.33%): 400kB
|
||||
|
30
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap_sizetags.dot
generated
vendored
Normal file
@ -0,0 +1,30 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Build ID: buildid" [shape=box fontsize=16 label="Build ID: buildid\lcomment\lType: inuse_space\lShowing nodes accounting for 93.75MB, 95.05% of 98.63MB total\lDropped 1 node (cum <= 4.93MB)\l"] }
|
||||
N1 [label="line3002\n31.25MB (31.68%)\nof 94.73MB (96.04%)" id="node1" fontsize=20 shape=box tooltip="line3002 (94.73MB)" color="#b20200" fillcolor="#edd5d5"]
|
||||
NN1_0 [label = "16B..64B" id="NN1_0" fontsize=8 shape=box3d tooltip="93.75MB"]
|
||||
N1 -> NN1_0 [label=" 93.75MB" weight=100 tooltip="93.75MB" labeltooltip="93.75MB"]
|
||||
NN1_1 [label = "2B..8B" id="NN1_1" fontsize=8 shape=box3d tooltip="93.75MB"]
|
||||
N1 -> NN1_1 [label=" 93.75MB" weight=100 tooltip="93.75MB" labeltooltip="93.75MB"]
|
||||
NN1_2 [label = "256B..1.56MB" id="NN1_2" fontsize=8 shape=box3d tooltip="62.50MB"]
|
||||
N1 -> NN1_2 [label=" 62.50MB" weight=100 tooltip="62.50MB" labeltooltip="62.50MB"]
|
||||
NN1_3 [label = "128B" id="NN1_3" fontsize=8 shape=box3d tooltip="31.25MB"]
|
||||
N1 -> NN1_3 [label=" 31.25MB" weight=100 tooltip="31.25MB" labeltooltip="31.25MB"]
|
||||
N2 [label="line3000\n0 of 98.63MB (100%)" id="node2" fontsize=8 shape=box tooltip="line3000 (98.63MB)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N3 [label="line2001\n62.50MB (63.37%)\nof 63.48MB (64.36%)" id="node3" fontsize=24 shape=box tooltip="line2001 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
NN3_0 [label = "16B..64B" id="NN3_0" fontsize=8 shape=box3d tooltip="190.43MB"]
|
||||
N3 -> NN3_0 [label=" 190.43MB" weight=100 tooltip="190.43MB" labeltooltip="190.43MB" style="dotted"]
|
||||
NN3_1 [label = "2B..8B" id="NN3_1" fontsize=8 shape=box3d tooltip="190.43MB"]
|
||||
N3 -> NN3_1 [label=" 190.43MB" weight=100 tooltip="190.43MB" labeltooltip="190.43MB" style="dotted"]
|
||||
NN3_2 [label = "256B..1.56MB" id="NN3_2" fontsize=8 shape=box3d tooltip="125.98MB"]
|
||||
N3 -> NN3_2 [label=" 125.98MB" weight=100 tooltip="125.98MB" labeltooltip="125.98MB" style="dotted"]
|
||||
NN3_3 [label = "128B" id="NN3_3" fontsize=8 shape=box3d tooltip="63.48MB"]
|
||||
N3 -> NN3_3 [label=" 63.48MB" weight=100 tooltip="63.48MB" labeltooltip="63.48MB" style="dotted"]
|
||||
N4 [label="line2000\n0 of 63.48MB (64.36%)" id="node4" fontsize=8 shape=box tooltip="line2000 (63.48MB)" color="#b21600" fillcolor="#edd8d5"]
|
||||
N5 [label="line3001\n0 of 36.13MB (36.63%)" id="node5" fontsize=8 shape=box tooltip="line3001 (36.13MB)" color="#b22e00" fillcolor="#eddbd5"]
|
||||
N4 -> N3 [label=" 63.48MB\n (inline)" weight=65 penwidth=4 color="#b21600" tooltip="line2000 -> line2001 (63.48MB)" labeltooltip="line2000 -> line2001 (63.48MB)"]
|
||||
N1 -> N4 [label=" 63.48MB" weight=65 penwidth=4 color="#b21600" tooltip="line3002 -> line2000 (63.48MB)" labeltooltip="line3002 -> line2000 (63.48MB)" minlen=2]
|
||||
N2 -> N1 [label=" 62.50MB\n (inline)" weight=64 penwidth=4 color="#b21600" tooltip="line3000 -> line3002 (62.50MB)" labeltooltip="line3000 -> line3002 (62.50MB)"]
|
||||
N2 -> N5 [label=" 36.13MB\n (inline)" weight=37 penwidth=2 color="#b22e00" tooltip="line3000 -> line3001 (36.13MB)" labeltooltip="line3000 -> line3001 (36.13MB)"]
|
||||
N5 -> N1 [label=" 32.23MB\n (inline)" weight=33 penwidth=2 color="#b23200" tooltip="line3001 -> line3002 (32.23MB)" labeltooltip="line3001 -> line3002 (32.23MB)"]
|
||||
}
|
32
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.heap_tags.traces
generated
vendored
Normal file
@ -0,0 +1,32 @@
|
||||
Build ID: buildid
|
||||
comment
|
||||
Type: inuse_space
|
||||
-----------+-------------------------------------------------------
|
||||
key1: tag
|
||||
bytes: 100kB
|
||||
request: 100kB
|
||||
1000kB line1000
|
||||
line2001
|
||||
line2000
|
||||
line3002
|
||||
line3001
|
||||
line3000
|
||||
-----------+-------------------------------------------------------
|
||||
bytes: 200kB
|
||||
3.91MB line1000
|
||||
line3001
|
||||
line3000
|
||||
-----------+-------------------------------------------------------
|
||||
key1: tag
|
||||
bytes: 1.56MB
|
||||
request: 1.56MB
|
||||
62.50MB line2001
|
||||
line2000
|
||||
line3002
|
||||
line3000
|
||||
-----------+-------------------------------------------------------
|
||||
bytes: 400kB
|
||||
31.25MB line3002
|
||||
line3001
|
||||
line3000
|
||||
-----------+-------------------------------------------------------
|
8
src/cmd/vendor/github.com/google/pprof/internal/driver/testdata/pprof.unknown.flat.functions.call_tree.text
generated
vendored
Normal file
@ -0,0 +1,8 @@
|
||||
Showing nodes accounting for 1.12s, 100% of 1.12s total
|
||||
Showing top 5 nodes out of 6
|
||||
flat flat% sum% cum cum%
|
||||
1.10s 98.21% 98.21% 1.10s 98.21% line1000
|
||||
0.01s 0.89% 99.11% 1.01s 90.18% line2001 (inline)
|
||||
0.01s 0.89% 100% 1.02s 91.07% line3002 (inline)
|
||||
0 0% 100% 1.01s 90.18% line2000
|
||||
0 0% 100% 1.12s 100% line3000
|
@ -1,8 +0,0 @@
|
||||
Showing nodes accounting for 1.12s, 100% of 1.12s total
|
||||
flat flat% sum% cum cum%
|
||||
1.10s 98.21% 98.21% 1.10s 98.21% line1000 testdata/file1000.src
|
||||
0.01s 0.89% 99.11% 1.01s 90.18% line2001 testdata/file2000.src (inline)
|
||||
0.01s 0.89% 100% 1.02s 91.07% line3002 testdata/file3000.src (inline)
|
||||
0 0% 100% 1.01s 90.18% line2000 testdata/file2000.src
|
||||
0 0% 100% 1.12s 100% line3000 testdata/file3000.src
|
||||
0 0% 100% 1.11s 99.11% line3001 testdata/file3000.src (inline)
|
965
src/cmd/vendor/github.com/google/pprof/internal/driver/webhtml.go
generated
vendored
Normal file
@ -0,0 +1,965 @@
|
||||
// Copyright 2017 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import "html/template"
|
||||
|
||||
// addTemplates adds a set of template definitions to templates.
|
||||
func addTemplates(templates *template.Template) {
|
||||
template.Must(templates.Parse(`
|
||||
{{define "css"}}
|
||||
<style type="text/css">
|
||||
html {
|
||||
height: 100%;
|
||||
min-height: 100%;
|
||||
margin: 0px;
|
||||
}
|
||||
body {
|
||||
margin: 0px;
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
min-height: 100%;
|
||||
overflow: hidden;
|
||||
}
|
||||
#graphcontainer {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
height: 100%;
|
||||
min-height: 100%;
|
||||
width: 100%;
|
||||
min-width: 100%;
|
||||
margin: 0px;
|
||||
}
|
||||
#graph {
|
||||
flex: 1 1 auto;
|
||||
overflow: hidden;
|
||||
}
|
||||
svg {
|
||||
width: 100%;
|
||||
height: auto;
|
||||
}
|
||||
button {
|
||||
margin-top: 5px;
|
||||
margin-bottom: 5px;
|
||||
}
|
||||
#detailtext {
|
||||
display: none;
|
||||
position: fixed;
|
||||
top: 20px;
|
||||
right: 10px;
|
||||
background-color: #ffffff;
|
||||
min-width: 160px;
|
||||
border: 1px solid #888;
|
||||
box-shadow: 4px 4px 4px 0px rgba(0,0,0,0.2);
|
||||
z-index: 1;
|
||||
}
|
||||
#closedetails {
|
||||
float: right;
|
||||
margin: 2px;
|
||||
}
|
||||
#home {
|
||||
font-size: 14pt;
|
||||
padding-left: 0.5em;
|
||||
padding-right: 0.5em;
|
||||
float: right;
|
||||
}
|
||||
.menubar {
|
||||
display: inline-block;
|
||||
background-color: #f8f8f8;
|
||||
border: 1px solid #ccc;
|
||||
width: 100%;
|
||||
}
|
||||
.menu-header {
|
||||
position: relative;
|
||||
display: inline-block;
|
||||
padding: 2px 2px;
|
||||
font-size: 14pt;
|
||||
}
|
||||
.menu {
|
||||
display: none;
|
||||
position: absolute;
|
||||
background-color: #f8f8f8;
|
||||
border: 1px solid #888;
|
||||
box-shadow: 4px 4px 4px 0px rgba(0,0,0,0.2);
|
||||
z-index: 1;
|
||||
margin-top: 2px;
|
||||
left: 0px;
|
||||
min-width: 5em;
|
||||
}
|
||||
.menu-header, .menu {
|
||||
cursor: default;
|
||||
user-select: none;
|
||||
-moz-user-select: none;
|
||||
-ms-user-select: none;
|
||||
-webkit-user-select: none;
|
||||
}
|
||||
.menu hr {
|
||||
background-color: #fff;
|
||||
margin-top: 0px;
|
||||
margin-bottom: 0px;
|
||||
}
|
||||
.menu a, .menu button {
|
||||
display: block;
|
||||
width: 100%;
|
||||
margin: 0px;
|
||||
padding: 2px 0px 2px 0px;
|
||||
text-align: left;
|
||||
text-decoration: none;
|
||||
color: #000;
|
||||
background-color: #f8f8f8;
|
||||
font-size: 12pt;
|
||||
border: none;
|
||||
}
|
||||
.menu-header:hover {
|
||||
background-color: #ccc;
|
||||
}
|
||||
.menu a:hover, .menu button:hover {
|
||||
background-color: #ccc;
|
||||
}
|
||||
.menu a.disabled {
|
||||
color: gray;
|
||||
pointer-events: none;
|
||||
}
|
||||
#searchbox {
|
||||
margin-left: 10pt;
|
||||
}
|
||||
#bodycontainer {
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
max-height: 100%;
|
||||
overflow: scroll;
|
||||
padding-top: 5px;
|
||||
}
|
||||
#toptable {
|
||||
border-spacing: 0px;
|
||||
width: 100%;
|
||||
padding-bottom: 1em;
|
||||
}
|
||||
#toptable tr th {
|
||||
border-bottom: 1px solid black;
|
||||
text-align: right;
|
||||
padding-left: 1em;
|
||||
padding-top: 0.2em;
|
||||
padding-bottom: 0.2em;
|
||||
}
|
||||
#toptable tr td {
|
||||
padding-left: 1em;
|
||||
font: monospace;
|
||||
text-align: right;
|
||||
white-space: nowrap;
|
||||
cursor: default;
|
||||
}
|
||||
#toptable tr th:nth-child(6),
|
||||
#toptable tr th:nth-child(7),
|
||||
#toptable tr td:nth-child(6),
|
||||
#toptable tr td:nth-child(7) {
|
||||
text-align: left;
|
||||
}
|
||||
#toptable tr td:nth-child(6) {
|
||||
max-width: 30em; // Truncate very long names
|
||||
overflow: hidden;
|
||||
}
|
||||
#flathdr1, #flathdr2, #cumhdr1, #cumhdr2, #namehdr {
|
||||
cursor: ns-resize;
|
||||
}
|
||||
.hilite {
|
||||
background-color: #ccf;
|
||||
}
|
||||
</style>
|
||||
{{end}}
|
||||
|
||||
{{define "header"}}
|
||||
<div id="detailtext">
|
||||
<button id="closedetails">Close</button>
|
||||
{{range .Legend}}<div>{{.}}</div>{{end}}
|
||||
</div>
|
||||
|
||||
<div class="menubar">
|
||||
|
||||
<div class="menu-header">
|
||||
View
|
||||
<div class="menu">
|
||||
<a title="{{.Help.top}}" href="/top" id="topbtn">Top</a>
|
||||
<a title="{{.Help.graph}}" href="/" id="graphbtn">Graph</a>
|
||||
<a title="{{.Help.peek}}" href="/peek" id="peek">Peek</a>
|
||||
<a title="{{.Help.list}}" href="/source" id="list">Source</a>
|
||||
<a title="{{.Help.disasm}}" href="/disasm" id="disasm">Disassemble</a>
|
||||
<hr>
|
||||
<button title="{{.Help.details}}" id="details">Details</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="menu-header">
|
||||
Refine
|
||||
<div class="menu">
|
||||
<a title="{{.Help.focus}}" href="{{.BaseURL}}" id="focus">Focus</a>
|
||||
<a title="{{.Help.ignore}}" href="{{.BaseURL}}" id="ignore">Ignore</a>
|
||||
<a title="{{.Help.hide}}" href="{{.BaseURL}}" id="hide">Hide</a>
|
||||
<a title="{{.Help.show}}" href="{{.BaseURL}}" id="show">Show</a>
|
||||
<hr>
|
||||
<a title="{{.Help.reset}}" href="{{.BaseURL}}">Reset</a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<input id="searchbox" type="text" placeholder="Search regexp" autocomplete="off" autocapitalize="none" size=40>
|
||||
|
||||
<span id="home">{{.Title}}</span>
|
||||
|
||||
</div> <!-- menubar -->
|
||||
|
||||
<div id="errors">{{range .Errors}}<div>{{.}}</div>{{end}}</div>
|
||||
{{end}}
|
||||
|
||||
{{define "graph" -}}
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<title>{{.Title}}</title>
|
||||
{{template "css" .}}
|
||||
</head>
|
||||
<body>
|
||||
|
||||
{{template "header" .}}
|
||||
<div id="graphcontainer">
|
||||
<div id="graph">
|
||||
{{.HTMLBody}}
|
||||
</div>
|
||||
|
||||
</div>
|
||||
{{template "script" .}}
|
||||
<script>viewer({{.BaseURL}}, {{.Nodes}})</script>
|
||||
</body>
|
||||
</html>
|
||||
{{end}}
|
||||
|
||||
{{define "script"}}
|
||||
<script>
|
||||
// Make svg pannable and zoomable.
|
||||
// Call clickHandler(t) if a click event is caught by the pan event handlers.
|
||||
function initPanAndZoom(svg, clickHandler) {
|
||||
'use strict';
|
||||
|
||||
// Current mouse/touch handling mode
|
||||
const IDLE = 0
|
||||
const MOUSEPAN = 1
|
||||
const TOUCHPAN = 2
|
||||
const TOUCHZOOM = 3
|
||||
let mode = IDLE
|
||||
|
||||
// State needed to implement zooming.
|
||||
let currentScale = 1.0
|
||||
const initWidth = svg.viewBox.baseVal.width
|
||||
const initHeight = svg.viewBox.baseVal.height
|
||||
|
||||
// State needed to implement panning.
|
||||
let panLastX = 0 // Last event X coordinate
|
||||
let panLastY = 0 // Last event Y coordinate
|
||||
let moved = false // Have we seen significant movement
|
||||
let touchid = null // Current touch identifier
|
||||
|
||||
// State needed for pinch zooming
|
||||
let touchid2 = null // Second id for pinch zooming
|
||||
let initGap = 1.0 // Starting gap between two touches
|
||||
let initScale = 1.0 // currentScale when pinch zoom started
|
||||
let centerPoint = null // Center point for scaling
|
||||
|
||||
// Convert event coordinates to svg coordinates.
|
||||
function toSvg(x, y) {
|
||||
const p = svg.createSVGPoint()
|
||||
p.x = x
|
||||
p.y = y
|
||||
let m = svg.getCTM()
|
||||
if (m == null) m = svg.getScreenCTM() // Firefox workaround.
|
||||
return p.matrixTransform(m.inverse())
|
||||
}
|
||||
|
||||
// Change the scaling for the svg to s, keeping the point denoted
|
||||
// by u (in svg coordinates]) fixed at the same screen location.
|
||||
function rescale(s, u) {
|
||||
// Limit to a good range.
|
||||
if (s < 0.2) s = 0.2
|
||||
if (s > 10.0) s = 10.0
|
||||
|
||||
currentScale = s
|
||||
|
||||
// svg.viewBox defines the visible portion of the user coordinate
|
||||
// system. So to magnify by s, divide the visible portion by s,
|
||||
// which will then be stretched to fit the viewport.
|
||||
const vb = svg.viewBox
|
||||
const w1 = vb.baseVal.width
|
||||
const w2 = initWidth / s
|
||||
const h1 = vb.baseVal.height
|
||||
const h2 = initHeight / s
|
||||
vb.baseVal.width = w2
|
||||
vb.baseVal.height = h2
|
||||
|
||||
// We also want to adjust vb.baseVal.x so that u.x remains at same
|
||||
// screen X coordinate. In other words, want to change it from x1 to x2
|
||||
// so that:
|
||||
// (u.x - x1) / w1 = (u.x - x2) / w2
|
||||
// Simplifying that, we get
|
||||
// (u.x - x1) * (w2 / w1) = u.x - x2
|
||||
// x2 = u.x - (u.x - x1) * (w2 / w1)
|
||||
vb.baseVal.x = u.x - (u.x - vb.baseVal.x) * (w2 / w1)
|
||||
vb.baseVal.y = u.y - (u.y - vb.baseVal.y) * (h2 / h1)
|
||||
}
|
||||
|
||||
function handleWheel(e) {
|
||||
if (e.deltaY == 0) return
|
||||
// Change scale factor by 1.1 or 1/1.1
|
||||
rescale(currentScale * (e.deltaY < 0 ? 1.1 : (1/1.1)),
|
||||
toSvg(e.offsetX, e.offsetY))
|
||||
}
|
||||
|
||||
function setMode(m) {
|
||||
mode = m
|
||||
touchid = null
|
||||
touchid2 = null
|
||||
}
|
||||
|
||||
function panStart(x, y) {
|
||||
moved = false
|
||||
panLastX = x
|
||||
panLastY = y
|
||||
}
|
||||
|
||||
function panMove(x, y) {
|
||||
let dx = x - panLastX
|
||||
let dy = y - panLastY
|
||||
if (Math.abs(dx) <= 2 && Math.abs(dy) <= 2) return // Ignore tiny moves
|
||||
|
||||
moved = true
|
||||
panLastX = x
|
||||
panLastY = y
|
||||
|
||||
// Firefox workaround: get dimensions from parentNode.
|
||||
const swidth = svg.clientWidth || svg.parentNode.clientWidth
|
||||
const sheight = svg.clientHeight || svg.parentNode.clientHeight
|
||||
|
||||
// Convert deltas from screen space to svg space.
|
||||
dx *= (svg.viewBox.baseVal.width / swidth)
|
||||
dy *= (svg.viewBox.baseVal.height / sheight)
|
||||
|
||||
svg.viewBox.baseVal.x -= dx
|
||||
svg.viewBox.baseVal.y -= dy
|
||||
}
|
||||
|
||||
function handleScanStart(e) {
|
||||
if (e.button != 0) return // Do not catch right-clicks etc.
|
||||
setMode(MOUSEPAN)
|
||||
panStart(e.clientX, e.clientY)
|
||||
e.preventDefault()
|
||||
svg.addEventListener("mousemove", handleScanMove)
|
||||
}
|
||||
|
||||
function handleScanMove(e) {
|
||||
if (e.buttons == 0) {
|
||||
// Missed an end event, perhaps because mouse moved outside window.
|
||||
setMode(IDLE)
|
||||
svg.removeEventListener("mousemove", handleScanMove)
|
||||
return
|
||||
}
|
||||
if (mode == MOUSEPAN) panMove(e.clientX, e.clientY)
|
||||
}
|
||||
|
||||
function handleScanEnd(e) {
|
||||
if (mode == MOUSEPAN) panMove(e.clientX, e.clientY)
|
||||
setMode(IDLE)
|
||||
svg.removeEventListener("mousemove", handleScanMove)
|
||||
if (!moved) clickHandler(e.target)
|
||||
}
|
||||
|
||||
// Find touch object with specified identifier.
|
||||
function findTouch(tlist, id) {
|
||||
for (const t of tlist) {
|
||||
if (t.identifier == id) return t
|
||||
}
|
||||
return null
|
||||
}
|
||||
|
||||
// Return distance between two touch points
|
||||
function touchGap(t1, t2) {
|
||||
const dx = t1.clientX - t2.clientX
|
||||
const dy = t1.clientY - t2.clientY
|
||||
return Math.hypot(dx, dy)
|
||||
}
|
||||
|
||||
function handleTouchStart(e) {
|
||||
if (mode == IDLE && e.changedTouches.length == 1) {
|
||||
// Start touch based panning
|
||||
const t = e.changedTouches[0]
|
||||
setMode(TOUCHPAN)
|
||||
touchid = t.identifier
|
||||
panStart(t.clientX, t.clientY)
|
||||
e.preventDefault()
|
||||
} else if (mode == TOUCHPAN && e.touches.length == 2) {
|
||||
// Start pinch zooming
|
||||
setMode(TOUCHZOOM)
|
||||
const t1 = e.touches[0]
|
||||
const t2 = e.touches[1]
|
||||
touchid = t1.identifier
|
||||
touchid2 = t2.identifier
|
||||
initScale = currentScale
|
||||
initGap = touchGap(t1, t2)
|
||||
centerPoint = toSvg((t1.clientX + t2.clientX) / 2,
|
||||
(t1.clientY + t2.clientY) / 2)
|
||||
e.preventDefault()
|
||||
}
|
||||
}
|
||||
|
||||
function handleTouchMove(e) {
|
||||
if (mode == TOUCHPAN) {
|
||||
const t = findTouch(e.changedTouches, touchid)
|
||||
if (t == null) return
|
||||
if (e.touches.length != 1) {
|
||||
setMode(IDLE)
|
||||
return
|
||||
}
|
||||
panMove(t.clientX, t.clientY)
|
||||
e.preventDefault()
|
||||
} else if (mode == TOUCHZOOM) {
|
||||
// Get two touches; new gap; rescale to ratio.
|
||||
const t1 = findTouch(e.touches, touchid)
|
||||
const t2 = findTouch(e.touches, touchid2)
|
||||
if (t1 == null || t2 == null) return
|
||||
const gap = touchGap(t1, t2)
|
||||
rescale(initScale * gap / initGap, centerPoint)
|
||||
e.preventDefault()
|
||||
}
|
||||
}
|
||||
|
||||
function handleTouchEnd(e) {
|
||||
if (mode == TOUCHPAN) {
|
||||
const t = findTouch(e.changedTouches, touchid)
|
||||
if (t == null) return
|
||||
panMove(t.clientX, t.clientY)
|
||||
setMode(IDLE)
|
||||
e.preventDefault()
|
||||
if (!moved) clickHandler(t.target)
|
||||
} else if (mode == TOUCHZOOM) {
|
||||
setMode(IDLE)
|
||||
e.preventDefault()
|
||||
}
|
||||
}
|
||||
|
||||
svg.addEventListener("mousedown", handleScanStart)
|
||||
svg.addEventListener("mouseup", handleScanEnd)
|
||||
svg.addEventListener("touchstart", handleTouchStart)
|
||||
svg.addEventListener("touchmove", handleTouchMove)
|
||||
svg.addEventListener("touchend", handleTouchEnd)
|
||||
svg.addEventListener("wheel", handleWheel, true)
|
||||
}
|
||||
|
||||
function initMenus() {
|
||||
'use strict';
|
||||
|
||||
let activeMenu = null;
|
||||
let activeMenuHdr = null;
|
||||
|
||||
function cancelActiveMenu() {
|
||||
if (activeMenu == null) return;
|
||||
activeMenu.style.display = "none";
|
||||
activeMenu = null;
|
||||
activeMenuHdr = null;
|
||||
}
|
||||
|
||||
// Set click handlers on every menu header.
|
||||
for (const menu of document.getElementsByClassName("menu")) {
|
||||
const hdr = menu.parentElement;
|
||||
if (hdr == null) return;
|
||||
function showMenu(e) {
|
||||
// menu is a child of hdr, so this event can fire for clicks
|
||||
// inside menu. Ignore such clicks.
|
||||
if (e.target != hdr) return;
|
||||
activeMenu = menu;
|
||||
activeMenuHdr = hdr;
|
||||
menu.style.display = "block";
|
||||
}
|
||||
hdr.addEventListener("mousedown", showMenu);
|
||||
hdr.addEventListener("touchstart", showMenu);
|
||||
}
|
||||
|
||||
// If there is an active menu and a down event outside, retract the menu.
|
||||
for (const t of ["mousedown", "touchstart"]) {
|
||||
document.addEventListener(t, (e) => {
|
||||
// Note: to avoid unnecessary flicker, if the down event is inside
|
||||
// the active menu header, do not retract the menu.
|
||||
if (activeMenuHdr != e.target.closest(".menu-header")) {
|
||||
cancelActiveMenu();
|
||||
}
|
||||
}, { passive: true, capture: true });
|
||||
}
|
||||
|
||||
// If there is an active menu and an up event inside, retract the menu.
|
||||
document.addEventListener("mouseup", (e) => {
|
||||
if (activeMenu == e.target.closest(".menu")) {
|
||||
cancelActiveMenu();
|
||||
}
|
||||
}, { passive: true, capture: true });
|
||||
}
|
||||
|
||||
function viewer(baseUrl, nodes) {
|
||||
'use strict';
|
||||
|
||||
// Elements
|
||||
const search = document.getElementById("searchbox")
|
||||
const graph0 = document.getElementById("graph0")
|
||||
const svg = (graph0 == null ? null : graph0.parentElement)
|
||||
const toptable = document.getElementById("toptable")
|
||||
|
||||
let regexpActive = false
|
||||
let selected = new Map()
|
||||
let origFill = new Map()
|
||||
let searchAlarm = null
|
||||
let buttonsEnabled = true
|
||||
|
||||
function handleDetails() {
|
||||
const detailsText = document.getElementById("detailtext")
|
||||
if (detailsText != null) detailsText.style.display = "block"
|
||||
}
|
||||
|
||||
function handleCloseDetails() {
|
||||
const detailsText = document.getElementById("detailtext")
|
||||
if (detailsText != null) detailsText.style.display = "none"
|
||||
}
|
||||
|
||||
function handleKey(e) {
|
||||
if (e.keyCode != 13) return
|
||||
window.location.href =
|
||||
updateUrl(new URL({{.BaseURL}}, window.location.href), "f")
|
||||
e.preventDefault()
|
||||
}
|
||||
|
||||
function handleSearch() {
|
||||
// Delay expensive processing so a flurry of key strokes is handled once.
|
||||
if (searchAlarm != null) {
|
||||
clearTimeout(searchAlarm)
|
||||
}
|
||||
searchAlarm = setTimeout(selectMatching, 300)
|
||||
|
||||
regexpActive = true
|
||||
updateButtons()
|
||||
}
|
||||
|
||||
function selectMatching() {
|
||||
searchAlarm = null
|
||||
let re = null
|
||||
if (search.value != "") {
|
||||
try {
|
||||
re = new RegExp(search.value)
|
||||
} catch (e) {
|
||||
// TODO: Display error state in search box
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
function match(text) {
|
||||
return re != null && re.test(text)
|
||||
}
|
||||
|
||||
// drop currently selected items that do not match re.
|
||||
selected.forEach(function(v, n) {
|
||||
if (!match(nodes[n])) {
|
||||
unselect(n, document.getElementById("node" + n))
|
||||
}
|
||||
})
|
||||
|
||||
// add matching items that are not currently selected.
|
||||
for (let n = 0; n < nodes.length; n++) {
|
||||
if (!selected.has(n) && match(nodes[n])) {
|
||||
select(n, document.getElementById("node" + n))
|
||||
}
|
||||
}
|
||||
|
||||
updateButtons()
|
||||
}
|
||||
|
||||
function toggleSvgSelect(elem) {
|
||||
// Walk up to immediate child of graph0
|
||||
while (elem != null && elem.parentElement != graph0) {
|
||||
elem = elem.parentElement
|
||||
}
|
||||
if (!elem) return
|
||||
|
||||
// Disable regexp mode.
|
||||
regexpActive = false
|
||||
|
||||
const n = nodeId(elem)
|
||||
if (n < 0) return
|
||||
if (selected.has(n)) {
|
||||
unselect(n, elem)
|
||||
} else {
|
||||
select(n, elem)
|
||||
}
|
||||
updateButtons()
|
||||
}
|
||||
|
||||
function unselect(n, elem) {
|
||||
if (elem == null) return
|
||||
selected.delete(n)
|
||||
setBackground(elem, false)
|
||||
}
|
||||
|
||||
function select(n, elem) {
|
||||
if (elem == null) return
|
||||
selected.set(n, true)
|
||||
setBackground(elem, true)
|
||||
}
|
||||
|
||||
function nodeId(elem) {
|
||||
const id = elem.id
|
||||
if (!id) return -1
|
||||
if (!id.startsWith("node")) return -1
|
||||
const n = parseInt(id.slice(4), 10)
|
||||
if (isNaN(n)) return -1
|
||||
if (n < 0 || n >= nodes.length) return -1
|
||||
return n
|
||||
}
|
||||
|
||||
function setBackground(elem, set) {
|
||||
// Handle table row highlighting.
|
||||
if (elem.nodeName == "TR") {
|
||||
elem.classList.toggle("hilite", set)
|
||||
return
|
||||
}
|
||||
|
||||
// Handle svg element highlighting.
|
||||
const p = findPolygon(elem)
|
||||
if (p != null) {
|
||||
if (set) {
|
||||
origFill.set(p, p.style.fill)
|
||||
p.style.fill = "#ccccff"
|
||||
} else if (origFill.has(p)) {
|
||||
p.style.fill = origFill.get(p)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
function findPolygon(elem) {
|
||||
if (elem.localName == "polygon") return elem
|
||||
for (const c of elem.children) {
|
||||
const p = findPolygon(c)
|
||||
if (p != null) return p
|
||||
}
|
||||
return null
|
||||
}
|
||||
|
||||
// convert a string to a regexp that matches that string.
|
||||
function quotemeta(str) {
|
||||
return str.replace(/([\\\.?+*\[\](){}|^$])/g, '\\$1')
|
||||
}
|
||||
|
||||
// Update id's href to reflect current selection whenever it is
|
||||
// liable to be followed.
|
||||
function makeLinkDynamic(id) {
|
||||
const elem = document.getElementById(id)
|
||||
if (elem == null) return
|
||||
|
||||
// Most links copy current selection into the "f" parameter,
|
||||
// but Refine menu links are different.
|
||||
let param = "f"
|
||||
if (id == "ignore") param = "i"
|
||||
if (id == "hide") param = "h"
|
||||
if (id == "show") param = "s"
|
||||
|
||||
// We update on mouseenter so middle-click/right-click work properly.
|
||||
elem.addEventListener("mouseenter", updater)
|
||||
elem.addEventListener("touchstart", updater)
|
||||
|
||||
function updater() {
|
||||
elem.href = updateUrl(new URL(elem.href), param)
|
||||
}
|
||||
}
|
||||
|
||||
// Update URL to reflect current selection.
|
||||
function updateUrl(url, param) {
|
||||
url.hash = ""
|
||||
|
||||
// The selection can be in one of two modes: regexp-based or
|
||||
// list-based. Construct regular expression depending on mode.
|
||||
let re = regexpActive
|
||||
? search.value
|
||||
: Array.from(selected.keys()).map(key => quotemeta(nodes[key])).join("|")
|
||||
|
||||
// Copy params from this page's URL.
|
||||
const params = url.searchParams
|
||||
for (const p of new URLSearchParams(window.location.search)) {
|
||||
params.set(p[0], p[1])
|
||||
}
|
||||
|
||||
if (re != "") {
|
||||
// For focus/show, forget old parameter. For others, add to re.
|
||||
if (param != "f" && param != "s" && params.has(param)) {
|
||||
const old = params.get(param)
|
||||
if (old != "") {
|
||||
re += "|" + old
|
||||
}
|
||||
}
|
||||
params.set(param, re)
|
||||
} else {
|
||||
params.delete(param)
|
||||
}
|
||||
|
||||
return url.toString()
|
||||
}
|
||||
|
||||
function handleTopClick(e) {
|
||||
// Walk back until we find TR and then get the Name column (index 5)
|
||||
let elem = e.target
|
||||
while (elem != null && elem.nodeName != "TR") {
|
||||
elem = elem.parentElement
|
||||
}
|
||||
if (elem == null || elem.children.length < 6) return
|
||||
|
||||
e.preventDefault()
|
||||
const tr = elem
|
||||
const td = elem.children[5]
|
||||
if (td.nodeName != "TD") return
|
||||
const name = td.innerText
|
||||
const index = nodes.indexOf(name)
|
||||
if (index < 0) return
|
||||
|
||||
// Disable regexp mode.
|
||||
regexpActive = false
|
||||
|
||||
if (selected.has(index)) {
|
||||
unselect(index, elem)
|
||||
} else {
|
||||
select(index, elem)
|
||||
}
|
||||
updateButtons()
|
||||
}
|
||||
|
||||
function updateButtons() {
|
||||
const enable = (search.value != "" || selected.size != 0)
|
||||
if (buttonsEnabled == enable) return
|
||||
buttonsEnabled = enable
|
||||
for (const id of ["focus", "ignore", "hide", "show"]) {
|
||||
const link = document.getElementById(id)
|
||||
if (link != null) {
|
||||
link.classList.toggle("disabled", !enable)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Initialize button states
|
||||
updateButtons()
|
||||
|
||||
// Setup event handlers
|
||||
initMenus()
|
||||
if (svg != null) {
|
||||
initPanAndZoom(svg, toggleSvgSelect)
|
||||
}
|
||||
if (toptable != null) {
|
||||
toptable.addEventListener("mousedown", handleTopClick)
|
||||
toptable.addEventListener("touchstart", handleTopClick)
|
||||
}
|
||||
|
||||
const ids = ["topbtn", "graphbtn", "peek", "list", "disasm",
|
||||
"focus", "ignore", "hide", "show"]
|
||||
ids.forEach(makeLinkDynamic)
|
||||
|
||||
// Bind action to button with specified id.
|
||||
function addAction(id, action) {
|
||||
const btn = document.getElementById(id)
|
||||
if (btn != null) {
|
||||
btn.addEventListener("click", action)
|
||||
btn.addEventListener("touchstart", action)
|
||||
}
|
||||
}
|
||||
|
||||
addAction("details", handleDetails)
|
||||
addAction("closedetails", handleCloseDetails)
|
||||
|
||||
search.addEventListener("input", handleSearch)
|
||||
search.addEventListener("keydown", handleKey)
|
||||
|
||||
// Give initial focus to main container so it can be scrolled using keys.
|
||||
const main = document.getElementById("bodycontainer")
|
||||
if (main) {
|
||||
main.focus()
|
||||
}
|
||||
}
|
||||
</script>
|
||||
{{end}}
|
||||
|
||||
{{define "top" -}}
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<title>{{.Title}}</title>
|
||||
{{template "css" .}}
|
||||
<style type="text/css">
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
|
||||
{{template "header" .}}
|
||||
|
||||
<div id="bodycontainer">
|
||||
<table id="toptable">
|
||||
<tr>
|
||||
<th id="flathdr1">Flat
|
||||
<th id="flathdr2">Flat%
|
||||
<th>Sum%
|
||||
<th id="cumhdr1">Cum
|
||||
<th id="cumhdr2">Cum%
|
||||
<th id="namehdr">Name
|
||||
<th>Inlined?</tr>
|
||||
<tbody id="rows">
|
||||
</tbody>
|
||||
</table>
|
||||
</div>
|
||||
|
||||
{{template "script" .}}
|
||||
<script>
|
||||
function makeTopTable(total, entries) {
|
||||
const rows = document.getElementById("rows")
|
||||
if (rows == null) return
|
||||
|
||||
// Store initial index in each entry so we have stable node ids for selection.
|
||||
for (let i = 0; i < entries.length; i++) {
|
||||
entries[i].Id = "node" + i
|
||||
}
|
||||
|
||||
// Which column are we currently sorted by and in what order?
|
||||
let currentColumn = ""
|
||||
let descending = false
|
||||
sortBy("Flat")
|
||||
|
||||
function sortBy(column) {
|
||||
// Update sort criteria
|
||||
if (column == currentColumn) {
|
||||
descending = !descending // Reverse order
|
||||
} else {
|
||||
currentColumn = column
|
||||
descending = (column != "Name")
|
||||
}
|
||||
|
||||
// Sort according to current criteria.
|
||||
function cmp(a, b) {
|
||||
const av = a[currentColumn]
|
||||
const bv = b[currentColumn]
|
||||
if (av < bv) return -1
|
||||
if (av > bv) return +1
|
||||
return 0
|
||||
}
|
||||
entries.sort(cmp)
|
||||
if (descending) entries.reverse()
|
||||
|
||||
function addCell(tr, val) {
|
||||
const td = document.createElement('td')
|
||||
td.textContent = val
|
||||
tr.appendChild(td)
|
||||
}
|
||||
|
||||
function percent(v) {
|
||||
return (v * 100.0 / total).toFixed(2) + "%"
|
||||
}
|
||||
|
||||
// Generate rows
|
||||
const fragment = document.createDocumentFragment()
|
||||
let sum = 0
|
||||
for (const row of entries) {
|
||||
const tr = document.createElement('tr')
|
||||
tr.id = row.Id
|
||||
sum += row.Flat
|
||||
addCell(tr, row.FlatFormat)
|
||||
addCell(tr, percent(row.Flat))
|
||||
addCell(tr, percent(sum))
|
||||
addCell(tr, row.CumFormat)
|
||||
addCell(tr, percent(row.Cum))
|
||||
addCell(tr, row.Name)
|
||||
addCell(tr, row.InlineLabel)
|
||||
fragment.appendChild(tr)
|
||||
}
|
||||
|
||||
rows.textContent = '' // Remove old rows
|
||||
rows.appendChild(fragment)
|
||||
}
|
||||
|
||||
// Make different column headers trigger sorting.
|
||||
function bindSort(id, column) {
|
||||
const hdr = document.getElementById(id)
|
||||
if (hdr == null) return
|
||||
const fn = function() { sortBy(column) }
|
||||
hdr.addEventListener("click", fn)
|
||||
hdr.addEventListener("touch", fn)
|
||||
}
|
||||
bindSort("flathdr1", "Flat")
|
||||
bindSort("flathdr2", "Flat")
|
||||
bindSort("cumhdr1", "Cum")
|
||||
bindSort("cumhdr2", "Cum")
|
||||
bindSort("namehdr", "Name")
|
||||
}
|
||||
|
||||
viewer({{.BaseURL}}, {{.Nodes}})
|
||||
makeTopTable({{.Total}}, {{.Top}})
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
{{end}}
|
||||
|
||||
{{define "sourcelisting" -}}
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<title>{{.Title}}</title>
|
||||
{{template "css" .}}
|
||||
{{template "weblistcss" .}}
|
||||
{{template "weblistjs" .}}
|
||||
</head>
|
||||
<body>
|
||||
|
||||
{{template "header" .}}
|
||||
|
||||
<div id="bodycontainer">
|
||||
{{.HTMLBody}}
|
||||
</div>
|
||||
|
||||
{{template "script" .}}
|
||||
<script>viewer({{.BaseURL}}, null)</script>
|
||||
</body>
|
||||
</html>
|
||||
{{end}}
|
||||
|
||||
{{define "plaintext" -}}
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<title>{{.Title}}</title>
|
||||
{{template "css" .}}
|
||||
</head>
|
||||
<body>
|
||||
|
||||
{{template "header" .}}
|
||||
|
||||
<div id="bodycontainer">
|
||||
<pre>
|
||||
{{.TextBody}}
|
||||
</pre>
|
||||
</div>
|
||||
|
||||
{{template "script" .}}
|
||||
<script>viewer({{.BaseURL}}, null)</script>
|
||||
</body>
|
||||
</html>
|
||||
{{end}}
|
||||
`))
|
||||
}
|
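The pan-and-zoom script above magnifies the SVG by shrinking its viewBox and then shifting the origin so the point under the cursor keeps its screen position. A minimal sketch of that arithmetic, with made-up numbers and assuming the view starts at scale 1 (so the current width equals the initial width), is:

    package main

    import "fmt"

    // Sketch of the viewBox math used by rescale():
    // magnifying by s divides the visible width/height by s, and the origin
    // moves so the chosen point u stays fixed: x2 = u.x - (u.x - x1) * (w2 / w1).
    func main() {
        initW, initH := 800.0, 600.0 // hypothetical initial viewBox size
        x1, y1 := 0.0, 0.0           // current viewBox origin
        ux, uy := 400.0, 300.0       // point to keep fixed (e.g. mouse position)

        s := 2.0 // zoom in 2x
        w2, h2 := initW/s, initH/s
        x2 := ux - (ux-x1)*(w2/initW)
        y2 := uy - (uy-y1)*(h2/initH)
        fmt.Printf("new viewBox: %v %v %v %v\n", x2, y2, w2, h2)
    }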
393
src/cmd/vendor/github.com/google/pprof/internal/driver/webui.go
generated
vendored
Normal file
@ -0,0 +1,393 @@
|
||||
// Copyright 2017 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"html/template"
|
||||
"net"
|
||||
"net/http"
|
||||
gourl "net/url"
|
||||
"os"
|
||||
"os/exec"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/google/pprof/internal/graph"
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
"github.com/google/pprof/internal/report"
|
||||
"github.com/google/pprof/profile"
|
||||
)
|
||||
|
||||
// webInterface holds the state needed for serving a browser based interface.
|
||||
type webInterface struct {
|
||||
prof *profile.Profile
|
||||
options *plugin.Options
|
||||
help map[string]string
|
||||
templates *template.Template
|
||||
}
|
||||
|
||||
func makeWebInterface(p *profile.Profile, opt *plugin.Options) *webInterface {
|
||||
templates := template.New("templategroup")
|
||||
addTemplates(templates)
|
||||
report.AddSourceTemplates(templates)
|
||||
return &webInterface{
|
||||
prof: p,
|
||||
options: opt,
|
||||
help: make(map[string]string),
|
||||
templates: templates,
|
||||
}
|
||||
}
|
||||
|
||||
// maxEntries is the maximum number of entries to print for text interfaces.
|
||||
const maxEntries = 50
|
||||
|
||||
// errorCatcher is a UI that captures errors for reporting to the browser.
|
||||
type errorCatcher struct {
|
||||
plugin.UI
|
||||
errors []string
|
||||
}
|
||||
|
||||
func (ec *errorCatcher) PrintErr(args ...interface{}) {
|
||||
ec.errors = append(ec.errors, strings.TrimSuffix(fmt.Sprintln(args...), "\n"))
|
||||
ec.UI.PrintErr(args...)
|
||||
}
|
||||
|
||||
// webArgs contains arguments passed to templates in webhtml.go.
|
||||
type webArgs struct {
|
||||
BaseURL string
|
||||
Title string
|
||||
Errors []string
|
||||
Total int64
|
||||
Legend []string
|
||||
Help map[string]string
|
||||
Nodes []string
|
||||
HTMLBody template.HTML
|
||||
TextBody string
|
||||
Top []report.TextItem
|
||||
}
|
||||
|
||||
func serveWebInterface(hostport string, p *profile.Profile, o *plugin.Options) error {
|
||||
host, portStr, err := net.SplitHostPort(hostport)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not split http address: %v", err)
|
||||
}
|
||||
port, err := strconv.Atoi(portStr)
|
||||
if err != nil {
|
||||
return fmt.Errorf("invalid port number: %v", err)
|
||||
}
|
||||
if host == "" {
|
||||
host = "localhost"
|
||||
}
|
||||
|
||||
interactiveMode = true
|
||||
ui := makeWebInterface(p, o)
|
||||
for n, c := range pprofCommands {
|
||||
ui.help[n] = c.description
|
||||
}
|
||||
for n, v := range pprofVariables {
|
||||
ui.help[n] = v.help
|
||||
}
|
||||
ui.help["details"] = "Show information about the profile and this view"
|
||||
ui.help["graph"] = "Display profile as a directed graph"
|
||||
ui.help["reset"] = "Show the entire profile"
|
||||
|
||||
server := o.HTTPServer
|
||||
if server == nil {
|
||||
server = defaultWebServer
|
||||
}
|
||||
args := &plugin.HTTPServerArgs{
|
||||
Hostport: net.JoinHostPort(host, portStr),
|
||||
Host: host,
|
||||
Port: port,
|
||||
Handlers: map[string]http.Handler{
|
||||
"/": http.HandlerFunc(ui.dot),
|
||||
"/top": http.HandlerFunc(ui.top),
|
||||
"/disasm": http.HandlerFunc(ui.disasm),
|
||||
"/source": http.HandlerFunc(ui.source),
|
||||
"/peek": http.HandlerFunc(ui.peek),
|
||||
},
|
||||
}
|
||||
|
||||
go openBrowser("http://"+args.Hostport, o)
|
||||
return server(args)
|
||||
}
|
||||
|
||||
func defaultWebServer(args *plugin.HTTPServerArgs) error {
|
||||
ln, err := net.Listen("tcp", args.Hostport)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
isLocal := isLocalhost(args.Host)
|
||||
handler := http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
|
||||
if isLocal {
|
||||
// Only allow local clients
|
||||
host, _, err := net.SplitHostPort(req.RemoteAddr)
|
||||
if err != nil || !isLocalhost(host) {
|
||||
http.Error(w, "permission denied", http.StatusForbidden)
|
||||
return
|
||||
}
|
||||
}
|
||||
h := args.Handlers[req.URL.Path]
|
||||
if h == nil {
|
||||
// Fall back to default behavior
|
||||
h = http.DefaultServeMux
|
||||
}
|
||||
h.ServeHTTP(w, req)
|
||||
})
|
||||
s := &http.Server{Handler: handler}
|
||||
return s.Serve(ln)
|
||||
}
|
||||
|
||||
func isLocalhost(host string) bool {
|
||||
for _, v := range []string{"localhost", "127.0.0.1", "[::1]", "::1"} {
|
||||
if host == v {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func openBrowser(url string, o *plugin.Options) {
|
||||
// Construct URL.
|
||||
u, _ := gourl.Parse(url)
|
||||
q := u.Query()
|
||||
for _, p := range []struct{ param, key string }{
|
||||
{"f", "focus"},
|
||||
{"s", "show"},
|
||||
{"i", "ignore"},
|
||||
{"h", "hide"},
|
||||
} {
|
||||
if v := pprofVariables[p.key].value; v != "" {
|
||||
q.Set(p.param, v)
|
||||
}
|
||||
}
|
||||
u.RawQuery = q.Encode()
|
||||
|
||||
// Give server a little time to get ready.
|
||||
time.Sleep(time.Millisecond * 500)
|
||||
|
||||
for _, b := range browsers() {
|
||||
args := strings.Split(b, " ")
|
||||
if len(args) == 0 {
|
||||
continue
|
||||
}
|
||||
viewer := exec.Command(args[0], append(args[1:], u.String())...)
|
||||
viewer.Stderr = os.Stderr
|
||||
if err := viewer.Start(); err == nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
// No visualizer succeeded, so just print URL.
|
||||
o.UI.PrintErr(u.String())
|
||||
}
|
||||
|
||||
func varsFromURL(u *gourl.URL) variables {
|
||||
vars := pprofVariables.makeCopy()
|
||||
vars["focus"].value = u.Query().Get("f")
|
||||
vars["show"].value = u.Query().Get("s")
|
||||
vars["ignore"].value = u.Query().Get("i")
|
||||
vars["hide"].value = u.Query().Get("h")
|
||||
return vars
|
||||
}
|
||||
|
||||
// makeReport generates a report for the specified command.
|
||||
func (ui *webInterface) makeReport(w http.ResponseWriter, req *http.Request,
|
||||
cmd []string, vars ...string) (*report.Report, []string) {
|
||||
v := varsFromURL(req.URL)
|
||||
for i := 0; i+1 < len(vars); i += 2 {
|
||||
v[vars[i]].value = vars[i+1]
|
||||
}
|
||||
catcher := &errorCatcher{UI: ui.options.UI}
|
||||
options := *ui.options
|
||||
options.UI = catcher
|
||||
_, rpt, err := generateRawReport(ui.prof, cmd, v, &options)
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
ui.options.UI.PrintErr(err)
|
||||
return nil, nil
|
||||
}
|
||||
return rpt, catcher.errors
|
||||
}
|
||||
|
||||
// render generates html using the named template based on the contents of data.
|
||||
func (ui *webInterface) render(w http.ResponseWriter, baseURL, tmpl string,
|
||||
rpt *report.Report, errList, legend []string, data webArgs) {
|
||||
file := getFromLegend(legend, "File: ", "unknown")
|
||||
profile := getFromLegend(legend, "Type: ", "unknown")
|
||||
data.BaseURL = baseURL
|
||||
data.Title = file + " " + profile
|
||||
data.Errors = errList
|
||||
data.Total = rpt.Total()
|
||||
data.Legend = legend
|
||||
data.Help = ui.help
|
||||
html := &bytes.Buffer{}
|
||||
if err := ui.templates.ExecuteTemplate(html, tmpl, data); err != nil {
|
||||
http.Error(w, "internal template error", http.StatusInternalServerError)
|
||||
ui.options.UI.PrintErr(err)
|
||||
return
|
||||
}
|
||||
w.Header().Set("Content-Type", "text/html")
|
||||
w.Write(html.Bytes())
|
||||
}
|
||||
|
||||
// dot generates a web page containing an svg diagram.
|
||||
func (ui *webInterface) dot(w http.ResponseWriter, req *http.Request) {
|
||||
rpt, errList := ui.makeReport(w, req, []string{"svg"})
|
||||
if rpt == nil {
|
||||
return // error already reported
|
||||
}
|
||||
|
||||
// Generate dot graph.
|
||||
g, config := report.GetDOT(rpt)
|
||||
legend := config.Labels
|
||||
config.Labels = nil
|
||||
dot := &bytes.Buffer{}
|
||||
graph.ComposeDot(dot, g, &graph.DotAttributes{}, config)
|
||||
|
||||
// Convert to svg.
|
||||
svg, err := dotToSvg(dot.Bytes())
|
||||
if err != nil {
|
||||
http.Error(w, "Could not execute dot; may need to install graphviz.",
|
||||
http.StatusNotImplemented)
|
||||
ui.options.UI.PrintErr("Failed to execute dot. Is Graphviz installed?\n", err)
|
||||
return
|
||||
}
|
||||
|
||||
// Get all node names into an array.
|
||||
nodes := []string{""} // dot starts with node numbered 1
|
||||
for _, n := range g.Nodes {
|
||||
nodes = append(nodes, n.Info.Name)
|
||||
}
|
||||
|
||||
ui.render(w, "/", "graph", rpt, errList, legend, webArgs{
|
||||
HTMLBody: template.HTML(string(svg)),
|
||||
Nodes: nodes,
|
||||
})
|
||||
}
|
||||
|
||||
func dotToSvg(dot []byte) ([]byte, error) {
|
||||
cmd := exec.Command("dot", "-Tsvg")
|
||||
out := &bytes.Buffer{}
|
||||
cmd.Stdin, cmd.Stdout, cmd.Stderr = bytes.NewBuffer(dot), out, os.Stderr
|
||||
if err := cmd.Run(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Fix dot bug related to unquoted ampersands.
|
||||
svg := bytes.Replace(out.Bytes(), []byte("&;"), []byte("&amp;;"), -1)
|
||||
|
||||
// Cleanup for embedding by dropping stuff before the <svg> start.
|
||||
if pos := bytes.Index(svg, []byte("<svg")); pos >= 0 {
|
||||
svg = svg[pos:]
|
||||
}
|
||||
return svg, nil
|
||||
}
|
||||
|
||||
func (ui *webInterface) top(w http.ResponseWriter, req *http.Request) {
|
||||
rpt, errList := ui.makeReport(w, req, []string{"top"}, "nodecount", "500")
|
||||
if rpt == nil {
|
||||
return // error already reported
|
||||
}
|
||||
top, legend := report.TextItems(rpt)
|
||||
var nodes []string
|
||||
for _, item := range top {
|
||||
nodes = append(nodes, item.Name)
|
||||
}
|
||||
|
||||
ui.render(w, "/top", "top", rpt, errList, legend, webArgs{
|
||||
Top: top,
|
||||
Nodes: nodes,
|
||||
})
|
||||
}
|
||||
|
||||
// disasm generates a web page containing disassembly.
|
||||
func (ui *webInterface) disasm(w http.ResponseWriter, req *http.Request) {
|
||||
args := []string{"disasm", req.URL.Query().Get("f")}
|
||||
rpt, errList := ui.makeReport(w, req, args)
|
||||
if rpt == nil {
|
||||
return // error already reported
|
||||
}
|
||||
|
||||
out := &bytes.Buffer{}
|
||||
if err := report.PrintAssembly(out, rpt, ui.options.Obj, maxEntries); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
ui.options.UI.PrintErr(err)
|
||||
return
|
||||
}
|
||||
|
||||
legend := report.ProfileLabels(rpt)
|
||||
ui.render(w, "/disasm", "plaintext", rpt, errList, legend, webArgs{
|
||||
TextBody: out.String(),
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
// source generates a web page containing source code annotated with profile
|
||||
// data.
|
||||
func (ui *webInterface) source(w http.ResponseWriter, req *http.Request) {
|
||||
args := []string{"weblist", req.URL.Query().Get("f")}
|
||||
rpt, errList := ui.makeReport(w, req, args)
|
||||
if rpt == nil {
|
||||
return // error already reported
|
||||
}
|
||||
|
||||
// Generate source listing.
|
||||
var body bytes.Buffer
|
||||
if err := report.PrintWebList(&body, rpt, ui.options.Obj, maxEntries); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
ui.options.UI.PrintErr(err)
|
||||
return
|
||||
}
|
||||
|
||||
legend := report.ProfileLabels(rpt)
|
||||
ui.render(w, "/source", "sourcelisting", rpt, errList, legend, webArgs{
|
||||
HTMLBody: template.HTML(body.String()),
|
||||
})
|
||||
}
|
||||
|
||||
// peek generates a web page listing callers/callees.
|
||||
func (ui *webInterface) peek(w http.ResponseWriter, req *http.Request) {
|
||||
args := []string{"peek", req.URL.Query().Get("f")}
|
||||
rpt, errList := ui.makeReport(w, req, args, "lines", "t")
|
||||
if rpt == nil {
|
||||
return // error already reported
|
||||
}
|
||||
|
||||
out := &bytes.Buffer{}
|
||||
if err := report.Generate(out, rpt, ui.options.Obj); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
ui.options.UI.PrintErr(err)
|
||||
return
|
||||
}
|
||||
|
||||
legend := report.ProfileLabels(rpt)
|
||||
ui.render(w, "/peek", "plaintext", rpt, errList, legend, webArgs{
|
||||
TextBody: out.String(),
|
||||
})
|
||||
}
|
||||
|
||||
// getFromLegend returns the suffix of an entry in legend that starts
|
||||
// with param. It returns def if no such entry is found.
|
||||
func getFromLegend(legend []string, param, def string) string {
|
||||
for _, s := range legend {
|
||||
if strings.HasPrefix(s, param) {
|
||||
return s[len(param):]
|
||||
}
|
||||
}
|
||||
return def
|
||||
}
|
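The handlers above encode view refinements as short query parameters: focus, show, ignore and hide travel as f, s, i and h, the same mapping openBrowser and varsFromURL use. A small sketch of building such a URL by hand, with a hypothetical host and regexps:

    package main

    import (
        "fmt"
        "net/url"
    )

    // Builds a web-UI URL carrying a focus ("f") and hide ("h") refinement.
    func main() {
        base, err := url.Parse("http://localhost:8080/") // hypothetical -http address
        if err != nil {
            panic(err)
        }
        q := base.Query()
        q.Set("f", "runtime\\..*") // focus on functions matching a regexp
        q.Set("h", "gc")           // hide nodes matching "gc"
        base.RawQuery = q.Encode()
        fmt.Println(base.String())
    }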
232
src/cmd/vendor/github.com/google/pprof/internal/driver/webui_test.go
generated
vendored
Normal file
@ -0,0 +1,232 @@
|
||||
// Copyright 2017 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package driver
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"net/url"
|
||||
"os/exec"
|
||||
"regexp"
|
||||
"sync"
|
||||
"testing"
|
||||
|
||||
"github.com/google/pprof/internal/plugin"
|
||||
"github.com/google/pprof/profile"
|
||||
"runtime"
|
||||
)
|
||||
|
||||
func TestWebInterface(t *testing.T) {
|
||||
if runtime.GOOS == "nacl" {
|
||||
t.Skip("test assumes tcp available")
|
||||
}
|
||||
|
||||
prof := makeFakeProfile()
|
||||
|
||||
// Custom http server creator
|
||||
var server *httptest.Server
|
||||
serverCreated := make(chan bool)
|
||||
creator := func(a *plugin.HTTPServerArgs) error {
|
||||
server = httptest.NewServer(http.HandlerFunc(
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
if h := a.Handlers[r.URL.Path]; h != nil {
|
||||
h.ServeHTTP(w, r)
|
||||
}
|
||||
}))
|
||||
serverCreated <- true
|
||||
return nil
|
||||
}
|
||||
|
||||
// Start server and wait for it to be initialized
|
||||
go serveWebInterface("unused:1234", prof, &plugin.Options{
|
||||
Obj: fakeObjTool{},
|
||||
UI: &stdUI{},
|
||||
HTTPServer: creator,
|
||||
})
|
||||
<-serverCreated
|
||||
defer server.Close()
|
||||
|
||||
haveDot := false
|
||||
if _, err := exec.LookPath("dot"); err == nil {
|
||||
haveDot = true
|
||||
}
|
||||
|
||||
type testCase struct {
|
||||
path string
|
||||
want []string
|
||||
needDot bool
|
||||
}
|
||||
testcases := []testCase{
|
||||
{"/", []string{"F1", "F2", "F3", "testbin", "cpu"}, true},
|
||||
{"/top", []string{`"Name":"F2","InlineLabel":"","Flat":200,"Cum":300,"FlatFormat":"200ms","CumFormat":"300ms"}`}, false},
|
||||
{"/source?f=" + url.QueryEscape("F[12]"),
|
||||
[]string{"F1", "F2", "300ms +line1"}, false},
|
||||
{"/peek?f=" + url.QueryEscape("F[12]"),
|
||||
[]string{"300ms.*F1", "200ms.*300ms.*F2"}, false},
|
||||
{"/disasm?f=" + url.QueryEscape("F[12]"),
|
||||
[]string{"f1:asm", "f2:asm"}, false},
|
||||
}
|
||||
for _, c := range testcases {
|
||||
if c.needDot && !haveDot {
|
||||
t.Log("skipping", c.path, "since dot (graphviz) does not seem to be installed")
|
||||
continue
|
||||
}
|
||||
|
||||
res, err := http.Get(server.URL + c.path)
|
||||
if err != nil {
|
||||
t.Error("could not fetch", c.path, err)
|
||||
continue
|
||||
}
|
||||
data, err := ioutil.ReadAll(res.Body)
|
||||
if err != nil {
|
||||
t.Error("could not read response", c.path, err)
|
||||
continue
|
||||
}
|
||||
result := string(data)
|
||||
for _, w := range c.want {
|
||||
if match, _ := regexp.MatchString(w, result); !match {
|
||||
t.Errorf("response for %s does not match "+
|
||||
"expected pattern '%s'; "+
|
||||
"actual result:\n%s", c.path, w, result)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Also fetch all the test case URLs in parallel to test thread
|
||||
// safety when run under the race detector.
|
||||
var wg sync.WaitGroup
|
||||
for _, c := range testcases {
|
||||
if c.needDot && !haveDot {
|
||||
continue
|
||||
}
|
||||
path := server.URL + c.path
|
||||
for count := 0; count < 2; count++ {
|
||||
wg.Add(1)
|
||||
go func() {
|
||||
http.Get(path)
|
||||
wg.Done()
|
||||
}()
|
||||
}
|
||||
}
|
||||
wg.Wait()
|
||||
}
|
||||
|
||||
// Implement fake object file support.
|
||||
|
||||
const addrBase = 0x1000
|
||||
const fakeSource = "testdata/file1000.src"
|
||||
|
||||
type fakeObj struct{}
|
||||
|
||||
func (f fakeObj) Close() error { return nil }
|
||||
func (f fakeObj) Name() string { return "testbin" }
|
||||
func (f fakeObj) Base() uint64 { return 0 }
|
||||
func (f fakeObj) BuildID() string { return "" }
|
||||
func (f fakeObj) SourceLine(addr uint64) ([]plugin.Frame, error) {
|
||||
return nil, fmt.Errorf("SourceLine unimplemented")
|
||||
}
|
||||
func (f fakeObj) Symbols(r *regexp.Regexp, addr uint64) ([]*plugin.Sym, error) {
|
||||
return []*plugin.Sym{
|
||||
{[]string{"F1"}, fakeSource, addrBase, addrBase + 10},
|
||||
{[]string{"F2"}, fakeSource, addrBase + 10, addrBase + 20},
|
||||
{[]string{"F3"}, fakeSource, addrBase + 20, addrBase + 30},
|
||||
}, nil
|
||||
}
|
||||
|
||||
type fakeObjTool struct{}
|
||||
|
||||
func (obj fakeObjTool) Open(file string, start, limit, offset uint64) (plugin.ObjFile, error) {
|
||||
return fakeObj{}, nil
|
||||
}
|
||||
|
||||
func (obj fakeObjTool) Disasm(file string, start, end uint64) ([]plugin.Inst, error) {
|
||||
return []plugin.Inst{
|
||||
{Addr: addrBase + 0, Text: "f1:asm", Function: "F1"},
|
||||
{Addr: addrBase + 10, Text: "f2:asm", Function: "F2"},
|
||||
{Addr: addrBase + 20, Text: "d3:asm", Function: "F3"},
|
||||
}, nil
|
||||
}
|
||||
|
||||
func makeFakeProfile() *profile.Profile {
|
||||
// Three functions: F1, F2, F3 with three lines, 11, 22, 33.
|
||||
funcs := []*profile.Function{
|
||||
{ID: 1, Name: "F1", Filename: fakeSource, StartLine: 3},
|
||||
{ID: 2, Name: "F2", Filename: fakeSource, StartLine: 5},
|
||||
{ID: 3, Name: "F3", Filename: fakeSource, StartLine: 7},
|
||||
}
|
||||
lines := []profile.Line{
|
||||
{Function: funcs[0], Line: 11},
|
||||
{Function: funcs[1], Line: 22},
|
||||
{Function: funcs[2], Line: 33},
|
||||
}
|
||||
mapping := []*profile.Mapping{
|
||||
{
|
||||
ID: 1,
|
||||
Start: addrBase,
|
||||
Limit: addrBase + 10,
|
||||
Offset: 0,
|
||||
File: "testbin",
|
||||
HasFunctions: true,
|
||||
HasFilenames: true,
|
||||
HasLineNumbers: true,
|
||||
},
|
||||
}
|
||||
|
||||
// Three interesting addresses: base+{10,20,30}
|
||||
locs := []*profile.Location{
|
||||
{ID: 1, Address: addrBase + 10, Line: lines[0:1], Mapping: mapping[0]},
|
||||
{ID: 2, Address: addrBase + 20, Line: lines[1:2], Mapping: mapping[0]},
|
||||
{ID: 3, Address: addrBase + 30, Line: lines[2:3], Mapping: mapping[0]},
|
||||
}
|
||||
|
||||
// Two stack traces.
|
||||
return &profile.Profile{
|
||||
PeriodType: &profile.ValueType{Type: "cpu", Unit: "milliseconds"},
|
||||
Period: 1,
|
||||
DurationNanos: 10e9,
|
||||
SampleType: []*profile.ValueType{
|
||||
{Type: "cpu", Unit: "milliseconds"},
|
||||
},
|
||||
Sample: []*profile.Sample{
|
||||
{
|
||||
Location: []*profile.Location{locs[2], locs[1], locs[0]},
|
||||
Value: []int64{100},
|
||||
},
|
||||
{
|
||||
Location: []*profile.Location{locs[1], locs[0]},
|
||||
Value: []int64{200},
|
||||
},
|
||||
},
|
||||
Location: locs,
|
||||
Function: funcs,
|
||||
Mapping: mapping,
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsLocalHost(t *testing.T) {
|
||||
for _, s := range []string{"localhost:10000", "[::1]:10000", "127.0.0.1:10000"} {
|
||||
host, _, err := net.SplitHostPort(s)
|
||||
if err != nil {
|
||||
t.Error("unexpected error when splitting", s)
|
||||
continue
|
||||
}
|
||||
if !isLocalhost(host) {
|
||||
t.Errorf("host %s from %s not considered local", host, s)
|
||||
}
|
||||
}
|
||||
}
|
25
src/cmd/vendor/github.com/google/pprof/internal/elfexec/elfexec.go
generated
vendored
@ -131,7 +131,7 @@ func GetBuildID(binary io.ReaderAt) ([]byte, error) {
|
||||
if buildID == nil {
|
||||
buildID = note.Desc
|
||||
} else {
|
||||
return nil, fmt.Errorf("multiple build ids found, don't know which to use!")
|
||||
return nil, fmt.Errorf("multiple build ids found, don't know which to use")
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -240,17 +240,22 @@ func GetBase(fh *elf.FileHeader, loadSegment *elf.ProgHeader, stextOffset *uint6
|
||||
}
|
||||
return start, nil
|
||||
case elf.ET_DYN:
|
||||
if offset != 0 {
|
||||
if loadSegment == nil || loadSegment.Vaddr == 0 {
|
||||
// The process mapping information, start = start of virtual address range,
|
||||
// and offset = offset in the executable file of the start address, tells us
|
||||
// that a runtime virtual address x maps to a file offset
|
||||
// fx = x - start + offset.
|
||||
if loadSegment == nil {
|
||||
return start - offset, nil
|
||||
}
|
||||
return 0, fmt.Errorf("Don't know how to handle mapping. Offset=%x, vaddr=%x",
|
||||
offset, loadSegment.Vaddr)
|
||||
}
|
||||
if loadSegment == nil {
|
||||
return start, nil
|
||||
}
|
||||
return start - loadSegment.Vaddr, nil
|
||||
// The program header, if not nil, indicates the offset in the file where
|
||||
// the executable segment is located (loadSegment.Off), and the base virtual
|
||||
// address where the first byte of the segment is loaded
|
||||
// (loadSegment.Vaddr). A file offset fx maps to a virtual (symbol) address
|
||||
// sx = fx - loadSegment.Off + loadSegment.Vaddr.
|
||||
//
|
||||
// Thus, a runtime virtual address x maps to a symbol address
|
||||
// sx = x - start + offset - loadSegment.Off + loadSegment.Vaddr.
|
||||
return start - offset + loadSegment.Off - loadSegment.Vaddr, nil
|
||||
}
|
||||
return 0, fmt.Errorf("Don't know how to handle FileHeader.Type %v", fh.Type)
|
||||
}
|
||||
|
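The comments in the ET_DYN case above derive base = start - offset + loadSegment.Off - loadSegment.Vaddr, so that a runtime address x maps to the symbol address x - base. A small worked sketch of that arithmetic, using made-up mapping and program-header values:

    package main

    import "fmt"

    // Illustrates the ET_DYN base computation described in GetBase:
    // fx = x - start + offset, sx = fx - loadSegment.Off + loadSegment.Vaddr,
    // hence base = start - offset + loadSegment.Off - loadSegment.Vaddr.
    func main() {
        start := uint64(0x7f0000200000) // hypothetical start of the mapped range
        offset := uint64(0x200000)      // hypothetical file offset of the mapping
        segOff := uint64(0x1000)        // hypothetical loadSegment.Off
        segVaddr := uint64(0x2000)      // hypothetical loadSegment.Vaddr

        base := start - offset + segOff - segVaddr
        x := uint64(0x7f0000203210) // a runtime address inside the mapping
        fmt.Printf("base=%#x symbol address=%#x\n", base, x-base)
    }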
3
src/cmd/vendor/github.com/google/pprof/internal/elfexec/elfexec_test.go
generated
vendored
@ -62,8 +62,9 @@ func TestGetBase(t *testing.T) {
|
||||
{"exec chromeos kernel 4", fhExec, kernelHeader, uint64p(0xffffffff81200198), 0x198, 0x100000, 0, 0x7ee00000, false},
|
||||
{"exec chromeos kernel unremapped", fhExec, kernelHeader, uint64p(0xffffffff810001c8), 0xffffffff834001c8, 0xffffffffc0000000, 0xffffffff834001c8, 0x2400000, false},
|
||||
{"dyn", fhDyn, nil, nil, 0x200000, 0x300000, 0, 0x200000, false},
|
||||
{"dyn offset", fhDyn, lsOffset, nil, 0x0, 0x300000, 0, 0xFFFFFFFFFFC00000, false},
|
||||
{"dyn map", fhDyn, lsOffset, nil, 0x0, 0x300000, 0, 0xFFFFFFFFFFE00000, false},
|
||||
{"dyn nomap", fhDyn, nil, nil, 0x0, 0x0, 0, 0, false},
|
||||
{"dyn map+offset", fhDyn, lsOffset, nil, 0x900000, 0xa00000, 0x200000, 0x500000, false},
|
||||
{"rel", fhRel, nil, nil, 0x2000000, 0x3000000, 0, 0x2000000, false},
|
||||
{"rel nomap", fhRel, nil, nil, 0x0, ^uint64(0), 0, 0, false},
|
||||
{"rel offset", fhRel, nil, nil, 0x100000, 0x200000, 0x1, 0, true},
|
||||
|
51
src/cmd/vendor/github.com/google/pprof/internal/graph/dotgraph.go
generated
vendored
@ -43,14 +43,16 @@ type DotNodeAttributes struct {
|
||||
// constructed and how it should look.
|
||||
type DotConfig struct {
|
||||
Title string // The title of the DOT graph
|
||||
LegendURL string // The URL to link to from the legend.
|
||||
Labels []string // The labels for the DOT's legend
|
||||
|
||||
FormatValue func(int64) string // A formatting function for values
|
||||
FormatTag func(int64, string) string // A formatting function for numeric tags
|
||||
Total int64 // The total weight of the graph, used to compute percentages
|
||||
}
|
||||
|
||||
// Compose creates and writes a in the DOT format to the writer, using
|
||||
const maxNodelets = 4 // Number of nodelets for labels (both numeric and non)
|
||||
|
||||
// ComposeDot creates and writes a in the DOT format to the writer, using
|
||||
// the configurations given.
|
||||
func ComposeDot(w io.Writer, g *Graph, a *DotAttributes, c *DotConfig) {
|
||||
builder := &builder{w, a, c}
|
||||
@ -120,11 +122,19 @@ func (b *builder) finish() {
|
||||
// addLegend generates a legend in DOT format.
|
||||
func (b *builder) addLegend() {
|
||||
labels := b.config.Labels
|
||||
var title string
|
||||
if len(labels) > 0 {
|
||||
title = labels[0]
|
||||
if len(labels) == 0 {
|
||||
return
|
||||
}
|
||||
fmt.Fprintf(b, `subgraph cluster_L { "%s" [shape=box fontsize=16 label="%s\l"] }`+"\n", title, strings.Join(labels, `\l`))
|
||||
title := labels[0]
|
||||
fmt.Fprintf(b, `subgraph cluster_L { "%s" [shape=box fontsize=16`, title)
|
||||
fmt.Fprintf(b, ` label="%s\l"`, strings.Join(labels, `\l`))
|
||||
if b.config.LegendURL != "" {
|
||||
fmt.Fprintf(b, ` URL="%s" target="_blank"`, b.config.LegendURL)
|
||||
}
|
||||
if b.config.Title != "" {
|
||||
fmt.Fprintf(b, ` tooltip="%s"`, b.config.Title)
|
||||
}
|
||||
fmt.Fprintf(b, "] }\n")
|
||||
}
|
||||
|
||||
// addNode generates a graph node in DOT format.
|
||||
@ -176,8 +186,8 @@ func (b *builder) addNode(node *Node, nodeID int, maxFlat float64) {
|
||||
}
|
||||
|
||||
// Create DOT attribute for node.
|
||||
attr := fmt.Sprintf(`label="%s" fontsize=%d shape=%s tooltip="%s (%s)" color="%s" fillcolor="%s"`,
|
||||
label, fontSize, shape, node.Info.PrintableName(), cumValue,
|
||||
attr := fmt.Sprintf(`label="%s" id="node%d" fontsize=%d shape=%s tooltip="%s (%s)" color="%s" fillcolor="%s"`,
|
||||
label, nodeID, fontSize, shape, node.Info.PrintableName(), cumValue,
|
||||
dotColor(float64(node.CumValue())/float64(abs64(b.config.Total)), false),
|
||||
dotColor(float64(node.CumValue())/float64(abs64(b.config.Total)), true))
|
||||
|
||||
@ -204,13 +214,11 @@ func (b *builder) addNode(node *Node, nodeID int, maxFlat float64) {
|
||||
|
||||
// addNodelets generates the DOT boxes for the node tags if they exist.
|
||||
func (b *builder) addNodelets(node *Node, nodeID int) bool {
|
||||
const maxNodelets = 4 // Number of nodelets for alphanumeric labels
|
||||
const maxNumNodelets = 4 // Number of nodelets for numeric labels
|
||||
var nodelets string
|
||||
|
||||
// Populate two Tag slices, one for LabelTags and one for NumericTags.
|
||||
var ts []*Tag
|
||||
lnts := make(map[string][]*Tag, 0)
|
||||
lnts := make(map[string][]*Tag)
|
||||
for _, t := range node.LabelTags {
|
||||
ts = append(ts, t)
|
||||
}
|
||||
@ -239,15 +247,15 @@ func (b *builder) addNodelets(node *Node, nodeID int) bool {
|
||||
continue
|
||||
}
|
||||
weight := b.config.FormatValue(w)
|
||||
nodelets += fmt.Sprintf(`N%d_%d [label = "%s" fontsize=8 shape=box3d tooltip="%s"]`+"\n", nodeID, i, t.Name, weight)
|
||||
nodelets += fmt.Sprintf(`N%d_%d [label = "%s" id="N%d_%d" fontsize=8 shape=box3d tooltip="%s"]`+"\n", nodeID, i, t.Name, nodeID, i, weight)
|
||||
nodelets += fmt.Sprintf(`N%d -> N%d_%d [label=" %s" weight=100 tooltip="%s" labeltooltip="%s"]`+"\n", nodeID, nodeID, i, weight, weight, weight)
|
||||
if nts := lnts[t.Name]; nts != nil {
|
||||
nodelets += b.numericNodelets(nts, maxNumNodelets, flatTags, fmt.Sprintf(`N%d_%d`, nodeID, i))
|
||||
nodelets += b.numericNodelets(nts, maxNodelets, flatTags, fmt.Sprintf(`N%d_%d`, nodeID, i))
|
||||
}
|
||||
}
|
||||
|
||||
if nts := lnts[""]; nts != nil {
|
||||
nodelets += b.numericNodelets(nts, maxNumNodelets, flatTags, fmt.Sprintf(`N%d`, nodeID))
|
||||
nodelets += b.numericNodelets(nts, maxNodelets, flatTags, fmt.Sprintf(`N%d`, nodeID))
|
||||
}
|
||||
|
||||
fmt.Fprint(b, nodelets)
|
||||
@ -266,7 +274,7 @@ func (b *builder) numericNodelets(nts []*Tag, maxNumNodelets int, flatTags bool,
|
||||
}
|
||||
if w != 0 {
|
||||
weight := b.config.FormatValue(w)
|
||||
nodelets += fmt.Sprintf(`N%s_%d [label = "%s" fontsize=8 shape=box3d tooltip="%s"]`+"\n", source, j, t.Name, weight)
|
||||
nodelets += fmt.Sprintf(`N%s_%d [label = "%s" id="N%s_%d" fontsize=8 shape=box3d tooltip="%s"]`+"\n", source, j, t.Name, source, j, weight)
|
||||
nodelets += fmt.Sprintf(`%s -> N%s_%d [label=" %s" weight=100 tooltip="%s" labeltooltip="%s"%s]`+"\n", source, source, j, weight, weight, weight, attr)
|
||||
}
|
||||
}
|
||||
@ -441,14 +449,9 @@ func tagDistance(t, u *Tag) float64 {
|
||||
}
|
||||
|
||||
func (b *builder) tagGroupLabel(g []*Tag) (label string, flat, cum int64) {
|
||||
formatTag := b.config.FormatTag
|
||||
if formatTag == nil {
|
||||
formatTag = measurement.Label
|
||||
}
|
||||
|
||||
if len(g) == 1 {
|
||||
t := g[0]
|
||||
return formatTag(t.Value, t.Unit), t.FlatValue(), t.CumValue()
|
||||
return measurement.Label(t.Value, t.Unit), t.FlatValue(), t.CumValue()
|
||||
}
|
||||
min := g[0]
|
||||
max := g[0]
|
||||
@ -472,7 +475,11 @@ func (b *builder) tagGroupLabel(g []*Tag) (label string, flat, cum int64) {
|
||||
if dc != 0 {
|
||||
c = c / dc
|
||||
}
|
||||
return formatTag(min.Value, min.Unit) + ".." + formatTag(max.Value, max.Unit), f, c
|
||||
|
||||
// Tags are not scaled with the selected output unit because tags are often
|
||||
// much smaller than other values which appear, so the range of tag sizes
|
||||
// sometimes would appear to be "0..0" when scaled to the selected output unit.
|
||||
return measurement.Label(min.Value, min.Unit) + ".." + measurement.Label(max.Value, max.Unit), f, c
|
||||
}
|
||||
|
||||
func min64(a, b int64) int64 {
|
||||
|
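With the change above, addLegend appends optional URL/target and tooltip attributes to the legend box instead of emitting it in one Fprintf. A rough sketch of the DOT line it builds when Labels, LegendURL and Title are all set (the values here are hypothetical):

    package main

    import (
        "fmt"
        "strings"
    )

    // Mimics the Fprintf sequence in addLegend for a fully populated config.
    func main() {
        labels := []string{"File: testbin", "Type: cpu"}
        legendURL := "http://example.com/profile"
        title := "testbin cpu"

        line := fmt.Sprintf(`subgraph cluster_L { "%s" [shape=box fontsize=16`, labels[0])
        line += fmt.Sprintf(` label="%s\l"`, strings.Join(labels, `\l`))
        line += fmt.Sprintf(` URL="%s" target="_blank"`, legendURL)
        line += fmt.Sprintf(` tooltip="%s"`, title)
        line += "] }"
        fmt.Println(line)
    }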
121
src/cmd/vendor/github.com/google/pprof/internal/graph/dotgraph_test.go
generated
vendored
@ -16,8 +16,10 @@ package graph
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"flag"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
@ -26,7 +28,7 @@ import (
|
||||
"github.com/google/pprof/internal/proftest"
|
||||
)
|
||||
|
||||
const path = "testdata/"
|
||||
var updateFlag = flag.Bool("update", false, "Update the golden files")
|
||||
|
||||
func TestComposeWithStandardGraph(t *testing.T) {
|
||||
g := baseGraph()
|
||||
@ -35,12 +37,7 @@ func TestComposeWithStandardGraph(t *testing.T) {
|
||||
var buf bytes.Buffer
|
||||
ComposeDot(&buf, g, a, c)
|
||||
|
||||
want, err := ioutil.ReadFile(path + "compose1.dot")
|
||||
if err != nil {
|
||||
t.Fatalf("error reading test file: %v", err)
|
||||
}
|
||||
|
||||
compareGraphs(t, buf.Bytes(), want)
|
||||
compareGraphs(t, buf.Bytes(), "compose1.dot")
|
||||
}
|
||||
|
||||
func TestComposeWithNodeAttributesAndZeroFlat(t *testing.T) {
|
||||
@ -64,12 +61,7 @@ func TestComposeWithNodeAttributesAndZeroFlat(t *testing.T) {
|
||||
var buf bytes.Buffer
|
||||
ComposeDot(&buf, g, a, c)
|
||||
|
||||
want, err := ioutil.ReadFile(path + "compose2.dot")
|
||||
if err != nil {
|
||||
t.Fatalf("error reading test file: %v", err)
|
||||
}
|
||||
|
||||
compareGraphs(t, buf.Bytes(), want)
|
||||
compareGraphs(t, buf.Bytes(), "compose2.dot")
|
||||
}
|
||||
|
||||
func TestComposeWithTagsAndResidualEdge(t *testing.T) {
|
||||
@ -97,12 +89,7 @@ func TestComposeWithTagsAndResidualEdge(t *testing.T) {
|
||||
var buf bytes.Buffer
|
||||
ComposeDot(&buf, g, a, c)
|
||||
|
||||
want, err := ioutil.ReadFile(path + "compose3.dot")
|
||||
if err != nil {
|
||||
t.Fatalf("error reading test file: %v", err)
|
||||
}
|
||||
|
||||
compareGraphs(t, buf.Bytes(), want)
|
||||
compareGraphs(t, buf.Bytes(), "compose3.dot")
|
||||
}
|
||||
|
||||
func TestComposeWithNestedTags(t *testing.T) {
|
||||
@ -127,12 +114,7 @@ func TestComposeWithNestedTags(t *testing.T) {
|
||||
var buf bytes.Buffer
|
||||
ComposeDot(&buf, g, a, c)
|
||||
|
||||
want, err := ioutil.ReadFile(path + "compose5.dot")
|
||||
if err != nil {
|
||||
t.Fatalf("error reading test file: %v", err)
|
||||
}
|
||||
|
||||
compareGraphs(t, buf.Bytes(), want)
|
||||
compareGraphs(t, buf.Bytes(), "compose5.dot")
|
||||
}
|
||||
|
||||
func TestComposeWithEmptyGraph(t *testing.T) {
|
||||
@ -142,12 +124,18 @@ func TestComposeWithEmptyGraph(t *testing.T) {
|
||||
var buf bytes.Buffer
|
||||
ComposeDot(&buf, g, a, c)
|
||||
|
||||
want, err := ioutil.ReadFile(path + "compose4.dot")
|
||||
if err != nil {
|
||||
t.Fatalf("error reading test file: %v", err)
|
||||
compareGraphs(t, buf.Bytes(), "compose4.dot")
|
||||
}
|
||||
|
||||
compareGraphs(t, buf.Bytes(), want)
|
||||
func TestComposeWithStandardGraphAndURL(t *testing.T) {
|
||||
g := baseGraph()
|
||||
a, c := baseAttrsAndConfig()
|
||||
c.LegendURL = "http://example.com"
|
||||
|
||||
var buf bytes.Buffer
|
||||
ComposeDot(&buf, g, a, c)
|
||||
|
||||
compareGraphs(t, buf.Bytes(), "compose6.dot")
|
||||
}
|
||||
|
||||
func baseGraph() *Graph {
|
||||
@ -199,13 +187,78 @@ func baseAttrsAndConfig() (*DotAttributes, *DotConfig) {
|
||||
return a, c
|
||||
}
|
||||
|
||||
func compareGraphs(t *testing.T, got, want []byte) {
|
||||
func compareGraphs(t *testing.T, got []byte, wantFile string) {
|
||||
wantFile = filepath.Join("testdata", wantFile)
|
||||
want, err := ioutil.ReadFile(wantFile)
|
||||
if err != nil {
|
||||
t.Fatalf("error reading test file %s: %v", wantFile, err)
|
||||
}
|
||||
|
||||
if string(got) != string(want) {
|
||||
d, err := proftest.Diff(got, want)
|
||||
if err != nil {
|
||||
t.Fatalf("error finding diff: %v", err)
|
||||
}
|
||||
t.Errorf("Compose incorrectly wrote %s", string(d))
|
||||
if *updateFlag {
|
||||
err := ioutil.WriteFile(wantFile, got, 0644)
|
||||
if err != nil {
|
||||
t.Errorf("failed to update the golden file %q: %v", wantFile, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNodeletCountCapping(t *testing.T) {
|
||||
labelTags := make(TagMap)
|
||||
for i := 0; i < 10; i++ {
|
||||
name := fmt.Sprintf("tag-%d", i)
|
||||
labelTags[name] = &Tag{
|
||||
Name: name,
|
||||
Flat: 10,
|
||||
Cum: 10,
|
||||
}
|
||||
}
|
||||
numTags := make(TagMap)
|
||||
for i := 0; i < 10; i++ {
|
||||
name := fmt.Sprintf("num-tag-%d", i)
|
||||
numTags[name] = &Tag{
|
||||
Name: name,
|
||||
Unit: "mb",
|
||||
Value: 16,
|
||||
Flat: 10,
|
||||
Cum: 10,
|
||||
}
|
||||
}
|
||||
node1 := &Node{
|
||||
Info: NodeInfo{Name: "node1-with-tags"},
|
||||
Flat: 10,
|
||||
Cum: 10,
|
||||
NumericTags: map[string]TagMap{"": numTags},
|
||||
LabelTags: labelTags,
|
||||
}
|
||||
node2 := &Node{
|
||||
Info: NodeInfo{Name: "node2"},
|
||||
Flat: 15,
|
||||
Cum: 15,
|
||||
}
|
||||
node3 := &Node{
|
||||
Info: NodeInfo{Name: "node3"},
|
||||
Flat: 15,
|
||||
Cum: 15,
|
||||
}
|
||||
g := &Graph{
|
||||
Nodes: Nodes{
|
||||
node1,
|
||||
node2,
|
||||
node3,
|
||||
},
|
||||
}
|
||||
for n := 1; n <= 3; n++ {
|
||||
input := maxNodelets + n
|
||||
if got, want := len(g.SelectTopNodes(input, true)), n; got != want {
|
||||
t.Errorf("SelectTopNodes(%d): got %d nodes, want %d", input, got, want)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -240,19 +293,19 @@ func TestTagCollapse(t *testing.T) {
|
||||
}
|
||||
|
||||
tagWant := [][]*Tag{
|
||||
[]*Tag{
|
||||
{
|
||||
makeTag("1B..2GB", "", 0, 2401, 2401),
|
||||
},
|
||||
[]*Tag{
|
||||
{
|
||||
makeTag("2GB", "", 0, 1000, 1000),
|
||||
makeTag("1B..12MB", "", 0, 1401, 1401),
|
||||
},
|
||||
[]*Tag{
|
||||
{
|
||||
makeTag("2GB", "", 0, 1000, 1000),
|
||||
makeTag("12MB", "", 0, 100, 100),
|
||||
makeTag("1B..1MB", "", 0, 1301, 1301),
|
||||
},
|
||||
[]*Tag{
|
||||
{
|
||||
makeTag("2GB", "", 0, 1000, 1000),
|
||||
makeTag("1MB", "", 0, 1000, 1000),
|
||||
makeTag("2B..1kB", "", 0, 201, 201),
|
||||
|
41
src/cmd/vendor/github.com/google/pprof/internal/graph/graph.go
generated
vendored
@ -240,6 +240,8 @@ type Edge struct {
|
||||
Inline bool
|
||||
}
|
||||
|
||||
// WeightValue returns the weight value for this edge, normalizing if a
|
||||
// divisor is available.
|
||||
func (e *Edge) WeightValue() int64 {
|
||||
if e.WeightDiv == 0 {
|
||||
return e.Weight
|
||||
@ -327,7 +329,7 @@ func newGraph(prof *profile.Profile, o *Options) (*Graph, map[uint64]Nodes) {
|
||||
// Add cum weight to all nodes in stack, avoiding double counting.
|
||||
if _, ok := seenNode[n]; !ok {
|
||||
seenNode[n] = true
|
||||
n.addSample(dw, w, labels, sample.NumLabel, o.FormatTag, false)
|
||||
n.addSample(dw, w, labels, sample.NumLabel, sample.NumUnit, o.FormatTag, false)
|
||||
}
|
||||
// Update edge weights for all edges in stack, avoiding double counting.
|
||||
if _, ok := seenEdge[nodePair{n, parent}]; !ok && parent != nil && n != parent {
|
||||
@ -340,7 +342,7 @@ func newGraph(prof *profile.Profile, o *Options) (*Graph, map[uint64]Nodes) {
|
||||
}
|
||||
if parent != nil && !residual {
|
||||
// Add flat weight to leaf node.
|
||||
parent.addSample(dw, w, labels, sample.NumLabel, o.FormatTag, true)
|
||||
parent.addSample(dw, w, labels, sample.NumLabel, sample.NumUnit, o.FormatTag, true)
|
||||
}
|
||||
}
|
||||
|
||||
@ -399,7 +401,7 @@ func newTree(prof *profile.Profile, o *Options) (g *Graph) {
|
||||
if n == nil {
|
||||
continue
|
||||
}
|
||||
n.addSample(dw, w, labels, sample.NumLabel, o.FormatTag, false)
|
||||
n.addSample(dw, w, labels, sample.NumLabel, sample.NumUnit, o.FormatTag, false)
|
||||
if parent != nil {
|
||||
parent.AddToEdgeDiv(n, dw, w, false, lidx != len(lines)-1)
|
||||
}
|
||||
@ -407,7 +409,7 @@ func newTree(prof *profile.Profile, o *Options) (g *Graph) {
|
||||
}
|
||||
}
|
||||
if parent != nil {
|
||||
parent.addSample(dw, w, labels, sample.NumLabel, o.FormatTag, true)
|
||||
parent.addSample(dw, w, labels, sample.NumLabel, sample.NumUnit, o.FormatTag, true)
|
||||
}
|
||||
}
|
||||
|
||||
@ -600,7 +602,7 @@ func (ns Nodes) Sum() (flat int64, cum int64) {
|
||||
return
|
||||
}
|
||||
|
||||
func (n *Node) addSample(dw, w int64, labels string, numLabel map[string][]int64, format func(int64, string) string, flat bool) {
|
||||
func (n *Node) addSample(dw, w int64, labels string, numLabel map[string][]int64, numUnit map[string][]string, format func(int64, string) string, flat bool) {
|
||||
// Update sample value
|
||||
if flat {
|
||||
n.FlatDiv += dw
|
||||
@ -631,9 +633,15 @@ func (n *Node) addSample(dw, w int64, labels string, numLabel map[string][]int64
|
||||
if format == nil {
|
||||
format = defaultLabelFormat
|
||||
}
|
||||
for key, nvals := range numLabel {
|
||||
for _, v := range nvals {
|
||||
t := numericTags.findOrAddTag(format(v, key), key, v)
|
||||
for k, nvals := range numLabel {
|
||||
units := numUnit[k]
|
||||
for i, v := range nvals {
|
||||
var t *Tag
|
||||
if len(units) > 0 {
|
||||
t = numericTags.findOrAddTag(format(v, units[i]), units[i], v)
|
||||
} else {
|
||||
t = numericTags.findOrAddTag(format(v, k), k, v)
|
||||
}
|
||||
if flat {
|
||||
t.FlatDiv += dw
|
||||
t.Flat += w
|
||||
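The addSample change threads the new per-value NumUnit slice through to tag creation: each numeric label value is tagged under its own unit when one was recorded, and falls back to the label key otherwise. A small standalone sketch of that keying rule (illustrative names only, not the vendored code):

```go
package main

import "fmt"

// tagKeysForNumLabel shows the selection logic: units[i] wins when the
// profile carries per-value units, otherwise the label key is reused.
func tagKeysForNumLabel(key string, values []int64, units []string) []string {
	keys := make([]string, len(values))
	for i := range values {
		if len(units) > 0 {
			keys[i] = units[i]
		} else {
			keys[i] = key
		}
	}
	return keys
}

func main() {
	fmt.Println(tagKeysForNumLabel("bytes", []int64{1024, 2048}, []string{"kilobytes", "kilobytes"}))
	fmt.Println(tagKeysForNumLabel("bytes", []int64{1024, 2048}, nil))
}
```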
@ -800,7 +808,11 @@ func (g *Graph) selectTopNodes(maxNodes int, visualMode bool) Nodes {
|
||||
// If generating a visual graph, count tags as nodes. Update
|
||||
// maxNodes to account for them.
|
||||
for i, n := range g.Nodes {
|
||||
if count += countTags(n) + 1; count >= maxNodes {
|
||||
tags := countTags(n)
|
||||
if tags > maxNodelets {
|
||||
tags = maxNodelets
|
||||
}
|
||||
if count += tags + 1; count >= maxNodes {
|
||||
maxNodes = i + 1
|
||||
break
|
||||
}
|
||||
@ -832,17 +844,6 @@ func countTags(n *Node) int {
|
||||
return count
|
||||
}
|
||||
|
||||
// countEdges counts the number of edges below the specified cutoff.
|
||||
func countEdges(el EdgeMap, cutoff int64) int {
|
||||
count := 0
|
||||
for _, e := range el {
|
||||
if e.Weight > cutoff {
|
||||
count++
|
||||
}
|
||||
}
|
||||
return count
|
||||
}
|
||||
|
||||
// RemoveRedundantEdges removes residual edges if the destination can
|
||||
// be reached through another path. This is done to simplify the graph
|
||||
// while preserving connectivity.
|
||||
|
4
src/cmd/vendor/github.com/google/pprof/internal/graph/graph_test.go
generated
vendored
@ -171,7 +171,7 @@ func createExpectedEdges(parent expectedNode, children ...expectedNode) {
|
||||
}
|
||||
}
|
||||
|
||||
// createTestCase1 creates a test case that initally looks like:
|
||||
// createTestCase1 creates a test case that initially looks like:
|
||||
// 0
|
||||
// |(5)
|
||||
// 1
|
||||
@ -255,7 +255,7 @@ func createTestCase2() trimTreeTestcase {
|
||||
}
|
||||
}
|
||||
|
||||
// createTestCase3 creates an initally empty graph and expects an empty graph
|
||||
// createTestCase3 creates an initially empty graph and expects an empty graph
|
||||
// after trimming.
|
||||
func createTestCase3() trimTreeTestcase {
|
||||
graph := &Graph{make(Nodes, 0)}
|
||||
|
6
src/cmd/vendor/github.com/google/pprof/internal/graph/testdata/compose1.dot
generated
vendored
@ -1,7 +1,7 @@
|
||||
digraph "testtitle" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "label1" [shape=box fontsize=16 label="label1\llabel2\l"] }
|
||||
N1 [label="src\n10 (10.00%)\nof 25 (25.00%)" fontsize=22 shape=box tooltip="src (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N2 [label="dest\n15 (15.00%)\nof 25 (25.00%)" fontsize=24 shape=box tooltip="dest (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
subgraph cluster_L { "label1" [shape=box fontsize=16 label="label1\llabel2\l" tooltip="testtitle"] }
|
||||
N1 [label="src\n10 (10.00%)\nof 25 (25.00%)" id="node1" fontsize=22 shape=box tooltip="src (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N2 [label="dest\n15 (15.00%)\nof 25 (25.00%)" id="node2" fontsize=24 shape=box tooltip="dest (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N1 -> N2 [label=" 10" weight=11 color="#b28559" tooltip="src -> dest (10)" labeltooltip="src -> dest (10)"]
|
||||
}
|
||||
|
6
src/cmd/vendor/github.com/google/pprof/internal/graph/testdata/compose2.dot
generated
vendored
@ -1,7 +1,7 @@
|
||||
digraph "testtitle" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "label1" [shape=box fontsize=16 label="label1\llabel2\l"] }
|
||||
N1 [label="SRC10 (10.00%)\nof 25 (25.00%)" fontsize=24 shape=folder tooltip="src (25)" color="#b23c00" fillcolor="#edddd5" style="bold,filled" peripheries=2 URL="www.google.com" target="_blank"]
|
||||
N2 [label="dest\n0 of 25 (25.00%)" fontsize=8 shape=box tooltip="dest (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
subgraph cluster_L { "label1" [shape=box fontsize=16 label="label1\llabel2\l" tooltip="testtitle"] }
|
||||
N1 [label="SRC10 (10.00%)\nof 25 (25.00%)" id="node1" fontsize=24 shape=folder tooltip="src (25)" color="#b23c00" fillcolor="#edddd5" style="bold,filled" peripheries=2 URL="www.google.com" target="_blank"]
|
||||
N2 [label="dest\n0 of 25 (25.00%)" id="node2" fontsize=8 shape=box tooltip="dest (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N1 -> N2 [label=" 10" weight=11 color="#b28559" tooltip="src -> dest (10)" labeltooltip="src -> dest (10)"]
|
||||
}
|
||||
|
10
src/cmd/vendor/github.com/google/pprof/internal/graph/testdata/compose3.dot
generated
vendored
@ -1,11 +1,11 @@
|
||||
digraph "testtitle" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "label1" [shape=box fontsize=16 label="label1\llabel2\l"] }
|
||||
N1 [label="src\n10 (10.00%)\nof 25 (25.00%)" fontsize=22 shape=box tooltip="src (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N1_0 [label = "tag1" fontsize=8 shape=box3d tooltip="10"]
|
||||
subgraph cluster_L { "label1" [shape=box fontsize=16 label="label1\llabel2\l" tooltip="testtitle"] }
|
||||
N1 [label="src\n10 (10.00%)\nof 25 (25.00%)" id="node1" fontsize=22 shape=box tooltip="src (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N1_0 [label = "tag1" id="N1_0" fontsize=8 shape=box3d tooltip="10"]
|
||||
N1 -> N1_0 [label=" 10" weight=100 tooltip="10" labeltooltip="10"]
|
||||
NN1_0 [label = "tag2" fontsize=8 shape=box3d tooltip="20"]
|
||||
NN1_0 [label = "tag2" id="NN1_0" fontsize=8 shape=box3d tooltip="20"]
|
||||
N1 -> NN1_0 [label=" 20" weight=100 tooltip="20" labeltooltip="20"]
|
||||
N2 [label="dest\n15 (15.00%)\nof 25 (25.00%)" fontsize=24 shape=box tooltip="dest (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N2 [label="dest\n15 (15.00%)\nof 25 (25.00%)" id="node2" fontsize=24 shape=box tooltip="dest (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N1 -> N2 [label=" 10" weight=11 color="#b28559" tooltip="src ... dest (10)" labeltooltip="src ... dest (10)" style="dotted" minlen=2]
|
||||
}
|
||||
|
2
src/cmd/vendor/github.com/google/pprof/internal/graph/testdata/compose4.dot
generated
vendored
@ -1,4 +1,4 @@
|
||||
digraph "testtitle" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "label1" [shape=box fontsize=16 label="label1\llabel2\l"] }
|
||||
subgraph cluster_L { "label1" [shape=box fontsize=16 label="label1\llabel2\l" tooltip="testtitle"] }
|
||||
}
|
||||
|
10
src/cmd/vendor/github.com/google/pprof/internal/graph/testdata/compose5.dot
generated
vendored
@ -1,11 +1,11 @@
|
||||
digraph "testtitle" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "label1" [shape=box fontsize=16 label="label1\llabel2\l"] }
|
||||
N1 [label="src\n10 (10.00%)\nof 25 (25.00%)" fontsize=22 shape=box tooltip="src (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N1_0 [label = "tag1" fontsize=8 shape=box3d tooltip="10"]
|
||||
subgraph cluster_L { "label1" [shape=box fontsize=16 label="label1\llabel2\l" tooltip="testtitle"] }
|
||||
N1 [label="src\n10 (10.00%)\nof 25 (25.00%)" id="node1" fontsize=22 shape=box tooltip="src (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N1_0 [label = "tag1" id="N1_0" fontsize=8 shape=box3d tooltip="10"]
|
||||
N1 -> N1_0 [label=" 10" weight=100 tooltip="10" labeltooltip="10"]
|
||||
NN1_0_0 [label = "tag2" fontsize=8 shape=box3d tooltip="20"]
|
||||
NN1_0_0 [label = "tag2" id="NN1_0_0" fontsize=8 shape=box3d tooltip="20"]
|
||||
N1_0 -> NN1_0_0 [label=" 20" weight=100 tooltip="20" labeltooltip="20"]
|
||||
N2 [label="dest\n15 (15.00%)\nof 25 (25.00%)" fontsize=24 shape=box tooltip="dest (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N2 [label="dest\n15 (15.00%)\nof 25 (25.00%)" id="node2" fontsize=24 shape=box tooltip="dest (25)" color="#b23c00" fillcolor="#edddd5"]
|
||||
N1 -> N2 [label=" 10" weight=11 color="#b28559" tooltip="src -> dest (10)" labeltooltip="src -> dest (10)" minlen=2]
|
||||
}
|
||||
|
7
src/cmd/vendor/github.com/google/pprof/internal/graph/testdata/compose6.dot
generated
vendored
Normal file
@ -0,0 +1,7 @@
digraph "testtitle" {
node [style=filled fillcolor="#f8f8f8"]
subgraph cluster_L { "label1" [shape=box fontsize=16 label="label1\llabel2\l" URL="http://example.com" target="_blank" tooltip="testtitle"] }
N1 [label="src\n10 (10.00%)\nof 25 (25.00%)" id="node1" fontsize=22 shape=box tooltip="src (25)" color="#b23c00" fillcolor="#edddd5"]
N2 [label="dest\n15 (15.00%)\nof 25 (25.00%)" id="node2" fontsize=24 shape=box tooltip="dest (25)" color="#b23c00" fillcolor="#edddd5"]
N1 -> N2 [label=" 10" weight=11 color="#b28559" tooltip="src -> dest (10)" labeltooltip="src -> dest (10)"]
}
|
22
src/cmd/vendor/github.com/google/pprof/internal/measurement/measurement.go
generated
vendored
@ -170,12 +170,16 @@ func memoryLabel(value int64, fromUnit, toUnit string) (v float64, u string, ok
|
||||
|
||||
switch fromUnit {
|
||||
case "byte", "b":
|
||||
case "kilobyte", "kb":
|
||||
case "kb", "kbyte", "kilobyte":
|
||||
value *= 1024
|
||||
case "megabyte", "mb":
|
||||
case "mb", "mbyte", "megabyte":
|
||||
value *= 1024 * 1024
|
||||
case "gigabyte", "gb":
|
||||
case "gb", "gbyte", "gigabyte":
|
||||
value *= 1024 * 1024 * 1024
|
||||
case "tb", "tbyte", "terabyte":
|
||||
value *= 1024 * 1024 * 1024 * 1024
|
||||
case "pb", "pbyte", "petabyte":
|
||||
value *= 1024 * 1024 * 1024 * 1024 * 1024
|
||||
default:
|
||||
return 0, "", false
|
||||
}
|
||||
@ -188,8 +192,12 @@ func memoryLabel(value int64, fromUnit, toUnit string) (v float64, u string, ok
|
||||
toUnit = "kb"
|
||||
case value < 1024*1024*1024:
|
||||
toUnit = "mb"
|
||||
default:
|
||||
case value < 1024*1024*1024*1024:
|
||||
toUnit = "gb"
|
||||
case value < 1024*1024*1024*1024*1024:
|
||||
toUnit = "tb"
|
||||
default:
|
||||
toUnit = "pb"
|
||||
}
|
||||
}
|
||||
|
||||
@ -203,6 +211,10 @@ func memoryLabel(value int64, fromUnit, toUnit string) (v float64, u string, ok
|
||||
output, toUnit = float64(value)/(1024*1024), "MB"
|
||||
case "gb", "gbyte", "gigabyte":
|
||||
output, toUnit = float64(value)/(1024*1024*1024), "GB"
|
||||
case "tb", "tbyte", "terabyte":
|
||||
output, toUnit = float64(value)/(1024*1024*1024*1024), "TB"
|
||||
case "pb", "pbyte", "petabyte":
|
||||
output, toUnit = float64(value)/(1024*1024*1024*1024*1024), "PB"
|
||||
}
|
||||
return output, toUnit, true
|
||||
}
|
||||
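With the terabyte and petabyte cases added, memoryLabel's "auto" path now walks the full 1024-based ladder when picking an output unit. A hedged sketch of that selection rule, written against plain byte counts rather than the function's fromUnit handling (illustrative only, not the vendored code):

```go
package main

import "fmt"

// autoByteUnit picks the largest binary unit that keeps the value at or
// above 1, mirroring the thresholds in memoryLabel's auto branch.
func autoByteUnit(bytes int64) (float64, string) {
	units := []struct {
		name string
		size int64
	}{
		{"pb", 1 << 50},
		{"tb", 1 << 40},
		{"gb", 1 << 30},
		{"mb", 1 << 20},
		{"kb", 1 << 10},
	}
	for _, u := range units {
		if bytes >= u.size {
			return float64(bytes) / float64(u.size), u.name
		}
	}
	return float64(bytes), "b"
}

func main() {
	// 2048 MB auto-scales to 2 gb, matching the {2048, "mb", "auto", 2, "GB"} test case below.
	fmt.Println(autoByteUnit(2048 << 20))
}
```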
@ -289,7 +301,7 @@ func timeLabel(value int64, fromUnit, toUnit string) (v float64, u string, ok bo
|
||||
case "week", "wk":
|
||||
output, toUnit = dd/float64(7*24*time.Hour), "wks"
|
||||
case "year", "yr":
|
||||
output, toUnit = dd/float64(365*7*24*time.Hour), "yrs"
|
||||
output, toUnit = dd/float64(365*24*time.Hour), "yrs"
|
||||
default:
|
||||
fallthrough
|
||||
case "sec", "second", "s":
|
||||
|
47
src/cmd/vendor/github.com/google/pprof/internal/measurement/measurement_test.go
generated
vendored
Normal file
@ -0,0 +1,47 @@
// Copyright 2017 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package measurement

import (
	"testing"
)

func TestScale(t *testing.T) {
	for _, tc := range []struct {
		value            int64
		fromUnit, toUnit string
		wantValue        float64
		wantUnit         string
	}{
		{1, "s", "ms", 1000, "ms"},
		{1, "kb", "b", 1024, "B"},
		{1, "kbyte", "b", 1024, "B"},
		{1, "kilobyte", "b", 1024, "B"},
		{1, "mb", "kb", 1024, "kB"},
		{1, "gb", "mb", 1024, "MB"},
		{1024, "gb", "tb", 1, "TB"},
		{1024, "tb", "pb", 1, "PB"},
		{2048, "mb", "auto", 2, "GB"},
		{3.1536e7, "s", "auto", 1, "yrs"},
		{-1, "s", "ms", -1000, "ms"},
		{1, "foo", "count", 1, ""},
		{1, "foo", "bar", 1, "bar"},
	} {
		if gotValue, gotUnit := Scale(tc.value, tc.fromUnit, tc.toUnit); gotValue != tc.wantValue || gotUnit != tc.wantUnit {
			t.Errorf("Scale(%d, %q, %q) = (%f, %q), want (%f, %q)",
				tc.value, tc.fromUnit, tc.toUnit, gotValue, gotUnit, tc.wantValue, tc.wantUnit)
		}
	}
}
|
25
src/cmd/vendor/github.com/google/pprof/internal/plugin/plugin.go
generated
vendored
@ -17,6 +17,7 @@ package plugin
|
||||
|
||||
import (
|
||||
"io"
|
||||
"net/http"
|
||||
"regexp"
|
||||
"time"
|
||||
|
||||
@ -31,6 +32,16 @@ type Options struct {
|
||||
Sym Symbolizer
|
||||
Obj ObjTool
|
||||
UI UI
|
||||
|
||||
// HTTPServer is a function that should block serving http requests,
|
||||
// including the handlers specified in args. If non-nil, pprof will
|
||||
// invoke this function if necessary to provide a web interface.
|
||||
//
|
||||
// If HTTPServer is nil, pprof will use its own internal HTTP server.
|
||||
//
|
||||
// A common use for a custom HTTPServer is to provide custom
|
||||
// authentication checks.
|
||||
HTTPServer func(args *HTTPServerArgs) error
|
||||
}
|
||||
|
||||
// Writer provides a mechanism to write data under a certain name,
|
||||
@ -185,3 +196,17 @@ type UI interface {
|
||||
// the auto-completion of cmd, if the UI supports auto-completion at all.
|
||||
SetAutoComplete(complete func(string) string)
|
||||
}
|
||||
|
||||
// HTTPServerArgs contains arguments needed by an HTTP server that
|
||||
// is exporting a pprof web interface.
|
||||
type HTTPServerArgs struct {
|
||||
// Hostport contains the http server address (derived from flags).
|
||||
Hostport string
|
||||
|
||||
Host string // Host portion of Hostport
|
||||
Port int // Port portion of Hostport
|
||||
|
||||
// Handlers maps from URL paths to the handler to invoke to
|
||||
// serve that path.
|
||||
Handlers map[string]http.Handler
|
||||
}
|
||||
|
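Together, the HTTPServer hook and HTTPServerArgs let an embedder serve the pprof web UI through its own HTTP stack, for example to put an authentication check in front of it. A hedged sketch of such a hook (imports of net/http and the internal plugin package elided; the authorized helper is hypothetical, while the field names come from the struct above):

```go
// serveWithAuth could be assigned to plugin.Options.HTTPServer. It wraps
// every pprof handler with a hypothetical authorization check and then
// blocks serving requests, as the option's contract requires.
func serveWithAuth(args *plugin.HTTPServerArgs) error {
	mux := http.NewServeMux()
	for path, handler := range args.Handlers {
		h := handler // capture the per-iteration value
		mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) {
			if !authorized(r) {
				http.Error(w, "forbidden", http.StatusForbidden)
				return
			}
			h.ServeHTTP(w, r)
		})
	}
	return http.ListenAndServe(args.Hostport, mux)
}

// authorized is a stand-in for whatever credential check the embedder uses.
func authorized(r *http.Request) bool {
	return r.Header.Get("X-Auth-Token") != ""
}
```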
22
src/cmd/vendor/github.com/google/pprof/internal/proftest/proftest.go
generated
vendored
@ -22,6 +22,7 @@ import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"os/exec"
|
||||
"regexp"
|
||||
"testing"
|
||||
)
|
||||
|
||||
@ -71,10 +72,14 @@ func EncodeJSON(x interface{}) []byte {
|
||||
}
|
||||
|
||||
// TestUI implements the plugin.UI interface, triggering test failures
|
||||
// if more than Ignore errors are printed.
|
||||
// if more than Ignore errors not matching AllowRx are printed.
|
||||
// Also tracks the number of times the error matches AllowRx in
|
||||
// NumAllowRxMatches.
|
||||
type TestUI struct {
|
||||
T *testing.T
|
||||
Ignore int
|
||||
AllowRx string
|
||||
NumAllowRxMatches int
|
||||
}
|
||||
|
||||
// ReadLine returns no input, as no input is expected during testing.
|
||||
@ -89,11 +94,24 @@ func (ui *TestUI) Print(args ...interface{}) {
|
||||
// PrintErr messages may trigger an error failure. A fixed number of
|
||||
// error messages are permitted when appropriate.
|
||||
func (ui *TestUI) PrintErr(args ...interface{}) {
|
||||
if ui.AllowRx != "" {
|
||||
if matched, err := regexp.MatchString(ui.AllowRx, fmt.Sprint(args...)); matched || err != nil {
|
||||
if err != nil {
|
||||
ui.T.Errorf("failed to match against regex %q: %v", ui.AllowRx, err)
|
||||
}
|
||||
ui.NumAllowRxMatches++
|
||||
return
|
||||
}
|
||||
}
|
||||
if ui.Ignore > 0 {
|
||||
ui.Ignore--
|
||||
return
|
||||
}
|
||||
ui.T.Error(args)
|
||||
// Stringify arguments with fmt.Sprint() to match what default UI
|
||||
// implementation does. Without this Error() calls fmt.Sprintln() which
|
||||
// _always_ adds spaces between arguments, unlike fmt.Sprint() which only
|
||||
// adds them between arguments if neither is string.
|
||||
ui.T.Error(fmt.Sprint(args...))
|
||||
}
|
||||
|
||||
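The comment above captures the fmt behaviour the change depends on; a quick standard-library-only check of that difference:

```go
package main

import "fmt"

func main() {
	fmt.Printf("%q\n", fmt.Sprintln("x", "y")) // "x y\n": Sprintln always separates operands
	fmt.Printf("%q\n", fmt.Sprint("x", "y"))   // "xy": Sprint omits the space between two strings
	fmt.Printf("%q\n", fmt.Sprint(1, 2))       // "1 2": space added when neither operand is a string
}
```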
// IsTerminal indicates if the UI is an interactive terminal.
|
||||
|
246
src/cmd/vendor/github.com/google/pprof/internal/report/report.go
generated
vendored
@ -25,6 +25,7 @@ import (
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"text/tabwriter"
|
||||
"time"
|
||||
|
||||
"github.com/google/pprof/internal/graph"
|
||||
@ -63,6 +64,8 @@ type Options struct {
|
||||
Ratio float64
|
||||
Title string
|
||||
ProfileLabels []string
|
||||
ActiveFilters []string
|
||||
NumLabelUnits map[string]string
|
||||
|
||||
NodeCount int
|
||||
NodeFraction float64
|
||||
@ -125,6 +128,9 @@ func (rpt *Report) newTrimmedGraph() (g *graph.Graph, origCount, droppedNodes, d
|
||||
visualMode := o.OutputFormat == Dot
|
||||
cumSort := o.CumSort
|
||||
|
||||
// The call_tree option is only honored when generating visual representations of the callgraph.
|
||||
callTree := o.CallTree && (o.OutputFormat == Dot || o.OutputFormat == Callgrind)
|
||||
|
||||
// First step: Build complete graph to identify low frequency nodes, based on their cum weight.
|
||||
g = rpt.newGraph(nil)
|
||||
totalValue, _ := g.Nodes.Sum()
|
||||
@ -133,7 +139,7 @@ func (rpt *Report) newTrimmedGraph() (g *graph.Graph, origCount, droppedNodes, d
|
||||
|
||||
// Filter out nodes with cum value below nodeCutoff.
|
||||
if nodeCutoff > 0 {
|
||||
if o.CallTree {
|
||||
if callTree {
|
||||
if nodesKept := g.DiscardLowFrequencyNodePtrs(nodeCutoff); len(g.Nodes) != len(nodesKept) {
|
||||
droppedNodes = len(g.Nodes) - len(nodesKept)
|
||||
g.TrimTree(nodesKept)
|
||||
@ -154,7 +160,7 @@ func (rpt *Report) newTrimmedGraph() (g *graph.Graph, origCount, droppedNodes, d
|
||||
// Remove low frequency tags and edges as they affect selection.
|
||||
g.TrimLowFrequencyTags(nodeCutoff)
|
||||
g.TrimLowFrequencyEdges(edgeCutoff)
|
||||
if o.CallTree {
|
||||
if callTree {
|
||||
if nodesKept := g.SelectTopNodePtrs(nodeCount, visualMode); len(g.Nodes) != len(nodesKept) {
|
||||
g.TrimTree(nodesKept)
|
||||
g.SortNodes(cumSort, visualMode)
|
||||
@ -236,15 +242,27 @@ func (rpt *Report) newGraph(nodes graph.NodeSet) *graph.Graph {
|
||||
for _, f := range prof.Function {
|
||||
f.Filename = trimPath(f.Filename)
|
||||
}
|
||||
// Remove numeric tags not recognized by pprof.
|
||||
// Removes all numeric tags except for the bytes tag prior
|
||||
// to making graph.
|
||||
// TODO: modify to select first numeric tag if no bytes tag
|
||||
for _, s := range prof.Sample {
|
||||
numLabels := make(map[string][]int64, len(s.NumLabel))
|
||||
for k, v := range s.NumLabel {
|
||||
numUnits := make(map[string][]string, len(s.NumLabel))
|
||||
for k, vs := range s.NumLabel {
|
||||
if k == "bytes" {
|
||||
numLabels[k] = append(numLabels[k], v...)
|
||||
unit := o.NumLabelUnits[k]
|
||||
numValues := make([]int64, len(vs))
|
||||
numUnit := make([]string, len(vs))
|
||||
for i, v := range vs {
|
||||
numValues[i] = v
|
||||
numUnit[i] = unit
|
||||
}
|
||||
numLabels[k] = append(numLabels[k], numValues...)
|
||||
numUnits[k] = append(numUnits[k], numUnit...)
|
||||
}
|
||||
}
|
||||
s.NumLabel = numLabels
|
||||
s.NumUnit = numUnits
|
||||
}
|
||||
|
||||
formatTag := func(v int64, key string) string {
|
||||
@ -337,6 +355,11 @@ func (fm functionMap) FindOrAdd(ni graph.NodeInfo) *profile.Function {
|
||||
|
||||
// printAssembly prints an annotated assembly listing.
|
||||
func printAssembly(w io.Writer, rpt *Report, obj plugin.ObjTool) error {
|
||||
return PrintAssembly(w, rpt, obj, -1)
|
||||
}
|
||||
|
||||
// PrintAssembly prints annotated disassembly of rpt to w.
|
||||
func PrintAssembly(w io.Writer, rpt *Report, obj plugin.ObjTool, maxFuncs int) error {
|
||||
o := rpt.options
|
||||
prof := rpt.prof
|
||||
|
||||
@ -352,12 +375,34 @@ func printAssembly(w io.Writer, rpt *Report, obj plugin.ObjTool) error {
|
||||
fmt.Fprintln(w, "Total:", rpt.formatValue(rpt.total))
|
||||
symbols := symbolsFromBinaries(prof, g, o.Symbol, address, obj)
|
||||
symNodes := nodesPerSymbol(g.Nodes, symbols)
|
||||
// Sort function names for printing.
|
||||
var syms objSymbols
|
||||
|
||||
// Sort for printing.
|
||||
var syms []*objSymbol
|
||||
for s := range symNodes {
|
||||
syms = append(syms, s)
|
||||
}
|
||||
sort.Sort(syms)
|
||||
byName := func(a, b *objSymbol) bool {
|
||||
if na, nb := a.sym.Name[0], b.sym.Name[0]; na != nb {
|
||||
return na < nb
|
||||
}
|
||||
return a.sym.Start < b.sym.Start
|
||||
}
|
||||
if maxFuncs < 0 {
|
||||
sort.Sort(orderSyms{syms, byName})
|
||||
} else {
|
||||
byFlatSum := func(a, b *objSymbol) bool {
|
||||
suma, _ := symNodes[a].Sum()
|
||||
sumb, _ := symNodes[b].Sum()
|
||||
if suma != sumb {
|
||||
return suma > sumb
|
||||
}
|
||||
return byName(a, b)
|
||||
}
|
||||
sort.Sort(orderSyms{syms, byFlatSum})
|
||||
if len(syms) > maxFuncs {
|
||||
syms = syms[:maxFuncs]
|
||||
}
|
||||
}
|
||||
|
||||
// Correlate the symbols from the binary with the profile samples.
|
||||
for _, s := range syms {
|
||||
@ -471,6 +516,7 @@ func symbolsFromBinaries(prof *profile.Profile, g *graph.Graph, rx *regexp.Regex
|
||||
&objSymbol{
|
||||
sym: ms,
|
||||
base: base,
|
||||
file: f,
|
||||
},
|
||||
)
|
||||
}
|
||||
@ -485,25 +531,18 @@ func symbolsFromBinaries(prof *profile.Profile, g *graph.Graph, rx *regexp.Regex
|
||||
type objSymbol struct {
|
||||
sym *plugin.Sym
|
||||
base uint64
|
||||
file plugin.ObjFile
|
||||
}
|
||||
|
||||
// objSymbols is a wrapper type to enable sorting of []*objSymbol.
|
||||
type objSymbols []*objSymbol
|
||||
|
||||
func (o objSymbols) Len() int {
|
||||
return len(o)
|
||||
// orderSyms is a wrapper type to sort []*objSymbol by a supplied comparator.
|
||||
type orderSyms struct {
|
||||
v []*objSymbol
|
||||
less func(a, b *objSymbol) bool
|
||||
}
|
||||
|
||||
func (o objSymbols) Less(i, j int) bool {
|
||||
if namei, namej := o[i].sym.Name[0], o[j].sym.Name[0]; namei != namej {
|
||||
return namei < namej
|
||||
}
|
||||
return o[i].sym.Start < o[j].sym.Start
|
||||
}
|
||||
|
||||
func (o objSymbols) Swap(i, j int) {
|
||||
o[i], o[j] = o[j], o[i]
|
||||
}
|
||||
func (o orderSyms) Len() int { return len(o.v) }
|
||||
func (o orderSyms) Less(i, j int) bool { return o.less(o.v[i], o.v[j]) }
|
||||
func (o orderSyms) Swap(i, j int) { o.v[i], o.v[j] = o.v[j], o.v[i] }
|
||||
|
||||
// nodesPerSymbol classifies nodes into a group of symbols.
|
||||
func nodesPerSymbol(ns graph.Nodes, symbols []*objSymbol) map[*objSymbol]graph.Nodes {
|
||||
@ -528,6 +567,13 @@ type assemblyInstruction struct {
|
||||
line int
|
||||
flat, cum int64
|
||||
flatDiv, cumDiv int64
|
||||
startsBlock bool
|
||||
inlineCalls []callID
|
||||
}
|
||||
|
||||
type callID struct {
|
||||
file string
|
||||
line int
|
||||
}
|
||||
|
||||
func (a *assemblyInstruction) flatValue() int64 {
|
||||
@ -617,26 +663,25 @@ func printTags(w io.Writer, rpt *Report) error {
|
||||
for _, s := range p.Sample {
|
||||
for key, vals := range s.Label {
|
||||
for _, val := range vals {
|
||||
if valueMap, ok := tagMap[key]; ok {
|
||||
valueMap[val] = valueMap[val] + s.Value[0]
|
||||
continue
|
||||
}
|
||||
valueMap := make(map[string]int64)
|
||||
valueMap[val] = s.Value[0]
|
||||
valueMap, ok := tagMap[key]
|
||||
if !ok {
|
||||
valueMap = make(map[string]int64)
|
||||
tagMap[key] = valueMap
|
||||
}
|
||||
valueMap[val] += o.SampleValue(s.Value)
|
||||
}
|
||||
}
|
||||
for key, vals := range s.NumLabel {
|
||||
unit := o.NumLabelUnits[key]
|
||||
for _, nval := range vals {
|
||||
val := formatTag(nval, key)
|
||||
if valueMap, ok := tagMap[key]; ok {
|
||||
valueMap[val] = valueMap[val] + s.Value[0]
|
||||
continue
|
||||
}
|
||||
valueMap := make(map[string]int64)
|
||||
valueMap[val] = s.Value[0]
|
||||
val := formatTag(nval, unit)
|
||||
valueMap, ok := tagMap[key]
|
||||
if !ok {
|
||||
valueMap = make(map[string]int64)
|
||||
tagMap[key] = valueMap
|
||||
}
|
||||
valueMap[val] += o.SampleValue(s.Value)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -644,6 +689,7 @@ func printTags(w io.Writer, rpt *Report) error {
|
||||
for key := range tagMap {
|
||||
tagKeys = append(tagKeys, &graph.Tag{Name: key})
|
||||
}
|
||||
tabw := tabwriter.NewWriter(w, 0, 0, 1, ' ', tabwriter.AlignRight)
|
||||
for _, tagKey := range graph.SortTags(tagKeys, true) {
|
||||
var total int64
|
||||
key := tagKey.Name
|
||||
@ -653,18 +699,19 @@ func printTags(w io.Writer, rpt *Report) error {
|
||||
tags = append(tags, &graph.Tag{Name: t, Flat: c})
|
||||
}
|
||||
|
||||
fmt.Fprintf(w, "%s: Total %d\n", key, total)
|
||||
f, u := measurement.Scale(total, o.SampleUnit, o.OutputUnit)
|
||||
fmt.Fprintf(tabw, "%s:\t Total %.1f%s\n", key, f, u)
|
||||
for _, t := range graph.SortTags(tags, true) {
|
||||
f, u := measurement.Scale(t.FlatValue(), o.SampleUnit, o.OutputUnit)
|
||||
if total > 0 {
|
||||
fmt.Fprintf(w, " %8d (%s): %s\n", t.FlatValue(),
|
||||
percentage(t.FlatValue(), total), t.Name)
|
||||
fmt.Fprintf(tabw, " \t%.1f%s (%s):\t %s\n", f, u, percentage(t.FlatValue(), total), t.Name)
|
||||
} else {
|
||||
fmt.Fprintf(w, " %8d: %s\n", t.FlatValue(), t.Name)
|
||||
fmt.Fprintf(tabw, " \t%.1f%s:\t %s\n", f, u, t.Name)
|
||||
}
|
||||
}
|
||||
fmt.Fprintln(w)
|
||||
fmt.Fprintln(tabw)
|
||||
}
|
||||
return nil
|
||||
return tabw.Flush()
|
||||
}
|
||||
|
||||
// printComments prints all freeform comments in the profile.
|
||||
@ -677,16 +724,22 @@ func printComments(w io.Writer, rpt *Report) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// printText prints a flat text report for a profile.
|
||||
func printText(w io.Writer, rpt *Report) error {
|
||||
// TextItem holds a single text report entry.
|
||||
type TextItem struct {
|
||||
Name string
|
||||
InlineLabel string // Not empty if inlined
|
||||
Flat, Cum int64 // Raw values
|
||||
FlatFormat, CumFormat string // Formatted values
|
||||
}
|
||||
|
||||
// TextItems returns a list of text items from the report and a list
|
||||
// of labels that describe the report.
|
||||
func TextItems(rpt *Report) ([]TextItem, []string) {
|
||||
g, origCount, droppedNodes, _ := rpt.newTrimmedGraph()
|
||||
rpt.selectOutputUnit(g)
|
||||
labels := reportLabels(rpt, g, origCount, droppedNodes, 0, false)
|
||||
|
||||
fmt.Fprintln(w, strings.Join(reportLabels(rpt, g, origCount, droppedNodes, 0, false), "\n"))
|
||||
|
||||
fmt.Fprintf(w, "%10s %5s%% %5s%% %10s %5s%%\n",
|
||||
"flat", "flat", "sum", "cum", "cum")
|
||||
|
||||
var items []TextItem
|
||||
var flatSum int64
|
||||
for _, n := range g.Nodes {
|
||||
name, flat, cum := n.Info.PrintableName(), n.FlatValue(), n.CumValue()
|
||||
@ -700,22 +753,46 @@ func printText(w io.Writer, rpt *Report) error {
|
||||
}
|
||||
}
|
||||
|
||||
var inl string
|
||||
if inline {
|
||||
if noinline {
|
||||
name = name + " (partial-inline)"
|
||||
inl = "(partial-inline)"
|
||||
} else {
|
||||
name = name + " (inline)"
|
||||
inl = "(inline)"
|
||||
}
|
||||
}
|
||||
|
||||
flatSum += flat
|
||||
fmt.Fprintf(w, "%10s %s %s %10s %s %s\n",
|
||||
rpt.formatValue(flat),
|
||||
percentage(flat, rpt.total),
|
||||
items = append(items, TextItem{
|
||||
Name: name,
|
||||
InlineLabel: inl,
|
||||
Flat: flat,
|
||||
Cum: cum,
|
||||
FlatFormat: rpt.formatValue(flat),
|
||||
CumFormat: rpt.formatValue(cum),
|
||||
})
|
||||
}
|
||||
return items, labels
|
||||
}
|
||||
|
||||
// printText prints a flat text report for a profile.
|
||||
func printText(w io.Writer, rpt *Report) error {
|
||||
items, labels := TextItems(rpt)
|
||||
fmt.Fprintln(w, strings.Join(labels, "\n"))
|
||||
fmt.Fprintf(w, "%10s %5s%% %5s%% %10s %5s%%\n",
|
||||
"flat", "flat", "sum", "cum", "cum")
|
||||
var flatSum int64
|
||||
for _, item := range items {
|
||||
inl := item.InlineLabel
|
||||
if inl != "" {
|
||||
inl = " " + inl
|
||||
}
|
||||
flatSum += item.Flat
|
||||
fmt.Fprintf(w, "%10s %s %s %10s %s %s%s\n",
|
||||
item.FlatFormat, percentage(item.Flat, rpt.total),
|
||||
percentage(flatSum, rpt.total),
|
||||
rpt.formatValue(cum),
|
||||
percentage(cum, rpt.total),
|
||||
name)
|
||||
item.CumFormat, percentage(item.Cum, rpt.total),
|
||||
item.Name, inl)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
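Splitting printText into TextItems plus a thin printer gives other front ends direct access to the flat report rows. A hedged example of a caller using the new API (report construction and the io/fmt/strings imports elided; function and field names match the hunk above):

```go
// topN prints the first n rows of a flat report using the exported
// TextItems helper rather than printText's fixed formatting.
func topN(w io.Writer, rpt *report.Report, n int) {
	items, labels := report.TextItems(rpt)
	fmt.Fprintln(w, strings.Join(labels, "\n"))
	if len(items) > n {
		items = items[:n]
	}
	for _, it := range items {
		fmt.Fprintf(w, "%10s %10s  %s %s\n", it.FlatFormat, it.CumFormat, it.Name, it.InlineLabel)
	}
}
```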
@ -749,6 +826,20 @@ func printTraces(w io.Writer, rpt *Report) error {
|
||||
}
|
||||
sort.Strings(labels)
|
||||
fmt.Fprint(w, strings.Join(labels, ""))
|
||||
|
||||
// Print any numeric labels for the sample
|
||||
var numLabels []string
|
||||
for key, vals := range sample.NumLabel {
|
||||
unit := o.NumLabelUnits[key]
|
||||
numValues := make([]string, len(vals))
|
||||
for i, vv := range vals {
|
||||
numValues[i] = measurement.Label(vv, unit)
|
||||
}
|
||||
numLabels = append(numLabels, fmt.Sprintf("%10s: %s\n", key, strings.Join(numValues, " ")))
|
||||
}
|
||||
sort.Strings(numLabels)
|
||||
fmt.Fprint(w, strings.Join(numLabels, ""))
|
||||
|
||||
var d, v int64
|
||||
v = o.SampleValue(sample.Value)
|
||||
if o.SampleMeanDivisor != nil {
|
||||
@ -969,24 +1060,25 @@ func printTree(w io.Writer, rpt *Report) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// printDOT prints an annotated callgraph in DOT format.
|
||||
func printDOT(w io.Writer, rpt *Report) error {
|
||||
// GetDOT returns a graph suitable for dot processing along with some
|
||||
// configuration information.
|
||||
func GetDOT(rpt *Report) (*graph.Graph, *graph.DotConfig) {
|
||||
g, origCount, droppedNodes, droppedEdges := rpt.newTrimmedGraph()
|
||||
rpt.selectOutputUnit(g)
|
||||
labels := reportLabels(rpt, g, origCount, droppedNodes, droppedEdges, true)
|
||||
|
||||
o := rpt.options
|
||||
formatTag := func(v int64, key string) string {
|
||||
return measurement.ScaledLabel(v, key, o.OutputUnit)
|
||||
}
|
||||
|
||||
c := &graph.DotConfig{
|
||||
Title: rpt.options.Title,
|
||||
Labels: labels,
|
||||
FormatValue: rpt.formatValue,
|
||||
FormatTag: formatTag,
|
||||
Total: rpt.total,
|
||||
}
|
||||
return g, c
|
||||
}
|
||||
|
||||
// printDOT prints an annotated callgraph in DOT format.
|
||||
func printDOT(w io.Writer, rpt *Report) error {
|
||||
g, c := GetDOT(rpt)
|
||||
graph.ComposeDot(w, g, &graph.DotAttributes{}, c)
|
||||
return nil
|
||||
}
|
||||
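GetDOT likewise exposes the trimmed graph and DOT configuration so a caller can drive graphviz itself instead of going through printDOT. A hedged usage sketch (same signatures as the hunk above; imports elided):

```go
// dotSource renders a report to DOT text in memory, e.g. before piping it
// to a dot process for an interactive view.
func dotSource(rpt *report.Report) string {
	g, config := report.GetDOT(rpt)
	var buf bytes.Buffer
	graph.ComposeDot(&buf, g, &graph.DotAttributes{}, config)
	return buf.String()
}
```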
@ -1055,9 +1147,7 @@ func reportLabels(rpt *Report, g *graph.Graph, origCount, droppedNodes, droppedE
|
||||
|
||||
var label []string
|
||||
if len(rpt.options.ProfileLabels) > 0 {
|
||||
for _, l := range rpt.options.ProfileLabels {
|
||||
label = append(label, l)
|
||||
}
|
||||
label = append(label, rpt.options.ProfileLabels...)
|
||||
} else if fullHeaders || !rpt.options.CompactLabels {
|
||||
label = ProfileLabels(rpt)
|
||||
}
|
||||
@ -1067,6 +1157,11 @@ func reportLabels(rpt *Report, g *graph.Graph, origCount, droppedNodes, droppedE
|
||||
flatSum = flatSum + n.FlatValue()
|
||||
}
|
||||
|
||||
if len(rpt.options.ActiveFilters) > 0 {
|
||||
activeFilters := legendActiveFilters(rpt.options.ActiveFilters)
|
||||
label = append(label, activeFilters...)
|
||||
}
|
||||
|
||||
label = append(label, fmt.Sprintf("Showing nodes accounting for %s, %s of %s total", rpt.formatValue(flatSum), strings.TrimSpace(percentage(flatSum, rpt.total)), rpt.formatValue(rpt.total)))
|
||||
|
||||
if rpt.total != 0 {
|
||||
@ -1086,6 +1181,18 @@ func reportLabels(rpt *Report, g *graph.Graph, origCount, droppedNodes, droppedE
|
||||
return label
|
||||
}
|
||||
|
||||
func legendActiveFilters(activeFilters []string) []string {
|
||||
legendActiveFilters := make([]string, len(activeFilters)+1)
|
||||
legendActiveFilters[0] = "Active filters:"
|
||||
for i, s := range activeFilters {
|
||||
if len(s) > 80 {
|
||||
s = s[:80] + "…"
|
||||
}
|
||||
legendActiveFilters[i+1] = " " + s
|
||||
}
|
||||
return legendActiveFilters
|
||||
}
|
||||
|
||||
func genLabel(d int, n, l, f string) string {
|
||||
if d > 1 {
|
||||
n = n + "s"
|
||||
@ -1159,6 +1266,9 @@ type Report struct {
|
||||
formatValue func(int64) string
|
||||
}
|
||||
|
||||
// Total returns the total number of samples in a report.
|
||||
func (rpt *Report) Total() int64 { return rpt.total }
|
||||
|
||||
func abs64(i int64) int64 {
|
||||
if i < 0 {
|
||||
return -i
|
||||
|
21
src/cmd/vendor/github.com/google/pprof/internal/report/report_test.go
generated
vendored
@ -264,3 +264,24 @@ func TestFunctionMap(t *testing.T) {
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestLegendActiveFilters(t *testing.T) {
|
||||
activeFilterInput := []string{
|
||||
"focus=123|456|789|101112|131415|161718|192021|222324|252627|282930|313233|343536|363738|acbdefghijklmnop",
|
||||
"show=short filter",
|
||||
}
|
||||
expectedLegendActiveFilter := []string{
|
||||
"Active filters:",
|
||||
" focus=123|456|789|101112|131415|161718|192021|222324|252627|282930|313233|343536…",
|
||||
" show=short filter",
|
||||
}
|
||||
legendActiveFilter := legendActiveFilters(activeFilterInput)
|
||||
if len(legendActiveFilter) != len(expectedLegendActiveFilter) {
|
||||
t.Errorf("wanted length %v got length %v", len(expectedLegendActiveFilter), len(legendActiveFilter))
|
||||
}
|
||||
for i := range legendActiveFilter {
|
||||
if legendActiveFilter[i] != expectedLegendActiveFilter[i] {
|
||||
t.Errorf("%d: want \"%v\", got \"%v\"", i, expectedLegendActiveFilter[i], legendActiveFilter[i])
|
||||
}
|
||||
}
|
||||
}
|
||||
|
225
src/cmd/vendor/github.com/google/pprof/internal/report/source.go
generated
vendored
@ -62,6 +62,7 @@ func printSource(w io.Writer, rpt *Report) error {
|
||||
}
|
||||
sourcePath = wd
|
||||
}
|
||||
reader := newSourceReader(sourcePath)
|
||||
|
||||
fmt.Fprintf(w, "Total: %s\n", rpt.formatValue(rpt.total))
|
||||
for _, fn := range functions {
|
||||
@ -94,7 +95,7 @@ func printSource(w io.Writer, rpt *Report) error {
|
||||
fns := fileNodes[filename]
|
||||
flatSum, cumSum := fns.Sum()
|
||||
|
||||
fnodes, _, err := getSourceFromFile(filename, sourcePath, fns, 0, 0)
|
||||
fnodes, _, err := getSourceFromFile(filename, reader, fns, 0, 0)
|
||||
fmt.Fprintf(w, "ROUTINE ======================== %s in %s\n", name, filename)
|
||||
fmt.Fprintf(w, "%10s %10s (flat, cum) %s of Total\n",
|
||||
rpt.formatValue(flatSum), rpt.formatValue(cumSum),
|
||||
@ -116,6 +117,16 @@ func printSource(w io.Writer, rpt *Report) error {
|
||||
// printWebSource prints an annotated source listing, including all
|
||||
// functions with samples that match the regexp rpt.options.symbol.
|
||||
func printWebSource(w io.Writer, rpt *Report, obj plugin.ObjTool) error {
|
||||
printHeader(w, rpt)
|
||||
if err := PrintWebList(w, rpt, obj, -1); err != nil {
|
||||
return err
|
||||
}
|
||||
printPageClosing(w)
|
||||
return nil
|
||||
}
|
||||
|
||||
// PrintWebList prints annotated source listing of rpt to w.
|
||||
func PrintWebList(w io.Writer, rpt *Report, obj plugin.ObjTool, maxFiles int) error {
|
||||
o := rpt.options
|
||||
g := rpt.newGraph(nil)
|
||||
|
||||
@ -134,6 +145,7 @@ func printWebSource(w io.Writer, rpt *Report, obj plugin.ObjTool) error {
|
||||
}
|
||||
sourcePath = wd
|
||||
}
|
||||
reader := newSourceReader(sourcePath)
|
||||
|
||||
type fileFunction struct {
|
||||
fileName, functionName string
|
||||
@ -167,7 +179,7 @@ func printWebSource(w io.Writer, rpt *Report, obj plugin.ObjTool) error {
|
||||
}
|
||||
|
||||
if len(fileNodes) == 0 {
|
||||
return fmt.Errorf("No source information for %s\n", o.Symbol.String())
|
||||
return fmt.Errorf("No source information for %s", o.Symbol.String())
|
||||
}
|
||||
|
||||
sourceFiles := make(graph.Nodes, 0, len(fileNodes))
|
||||
@ -176,10 +188,18 @@ func printWebSource(w io.Writer, rpt *Report, obj plugin.ObjTool) error {
|
||||
sNode.Flat, sNode.Cum = nodes.Sum()
|
||||
sourceFiles = append(sourceFiles, &sNode)
|
||||
}
|
||||
|
||||
// Limit number of files printed?
|
||||
if maxFiles < 0 {
|
||||
sourceFiles.Sort(graph.FileOrder)
|
||||
} else {
|
||||
sourceFiles.Sort(graph.FlatNameOrder)
|
||||
if maxFiles < len(sourceFiles) {
|
||||
sourceFiles = sourceFiles[:maxFiles]
|
||||
}
|
||||
}
|
||||
|
||||
// Print each file associated with this function.
|
||||
printHeader(w, rpt)
|
||||
for _, n := range sourceFiles {
|
||||
ff := fileFunction{n.Info.File, n.Info.Name}
|
||||
fns := fileNodes[ff]
|
||||
@ -187,18 +207,17 @@ func printWebSource(w io.Writer, rpt *Report, obj plugin.ObjTool) error {
|
||||
asm := assemblyPerSourceLine(symbols, fns, ff.fileName, obj)
|
||||
start, end := sourceCoordinates(asm)
|
||||
|
||||
fnodes, path, err := getSourceFromFile(ff.fileName, sourcePath, fns, start, end)
|
||||
fnodes, path, err := getSourceFromFile(ff.fileName, reader, fns, start, end)
|
||||
if err != nil {
|
||||
fnodes, path = getMissingFunctionSource(ff.fileName, asm, start, end)
|
||||
}
|
||||
|
||||
printFunctionHeader(w, ff.functionName, path, n.Flat, n.Cum, rpt)
|
||||
for _, fn := range fnodes {
|
||||
printFunctionSourceLine(w, fn, asm[fn.Info.Lineno], rpt)
|
||||
printFunctionSourceLine(w, fn, asm[fn.Info.Lineno], reader, rpt)
|
||||
}
|
||||
printFunctionClosing(w)
|
||||
}
|
||||
printPageClosing(w)
|
||||
return nil
|
||||
}
|
||||
|
||||
@ -236,11 +255,41 @@ func assemblyPerSourceLine(objSyms []*objSymbol, rs graph.Nodes, src string, obj
|
||||
srcBase := filepath.Base(src)
|
||||
anodes := annotateAssembly(insts, rs, o.base)
|
||||
var lineno = 0
|
||||
var prevline = 0
|
||||
for _, an := range anodes {
|
||||
if filepath.Base(an.file) == srcBase {
|
||||
// Do not rely solely on the line number produced by Disasm
|
||||
// since it is not what we want in the presence of inlining.
|
||||
//
|
||||
// E.g., suppose we are printing source code for F and this
|
||||
// instruction is from H where F called G called H and both
|
||||
// of those calls were inlined. We want to use the line
|
||||
// number from F, not from H (which is what Disasm gives us).
|
||||
//
|
||||
// So find the outer-most linenumber in the source file.
|
||||
found := false
|
||||
if frames, err := o.file.SourceLine(an.address + o.base); err == nil {
|
||||
for i := len(frames) - 1; i >= 0; i-- {
|
||||
if filepath.Base(frames[i].File) == srcBase {
|
||||
for j := i - 1; j >= 0; j-- {
|
||||
an.inlineCalls = append(an.inlineCalls, callID{frames[j].File, frames[j].Line})
|
||||
}
|
||||
lineno = frames[i].Line
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
if !found && filepath.Base(an.file) == srcBase {
|
||||
lineno = an.line
|
||||
}
|
||||
|
||||
if lineno != 0 {
|
||||
if lineno != prevline {
|
||||
// This instruction starts a new block
|
||||
// of contiguous instructions on this line.
|
||||
an.startsBlock = true
|
||||
}
|
||||
prevline = lineno
|
||||
assembly[lineno] = append(assembly[lineno], an)
|
||||
}
|
||||
}
|
||||
@ -265,7 +314,15 @@ func findMatchingSymbol(objSyms []*objSymbol, ns graph.Nodes) *objSymbol {
|
||||
|
||||
// printHeader prints the page header for a weblist report.
|
||||
func printHeader(w io.Writer, rpt *Report) {
|
||||
fmt.Fprintln(w, weblistPageHeader)
|
||||
fmt.Fprintln(w, `
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<title>Pprof listing</title>`)
|
||||
fmt.Fprintln(w, weblistPageCSS)
|
||||
fmt.Fprintln(w, weblistPageScript)
|
||||
fmt.Fprint(w, "</head>\n<body>\n\n")
|
||||
|
||||
var labels []string
|
||||
for _, l := range ProfileLabels(rpt) {
|
||||
@ -290,30 +347,33 @@ func printFunctionHeader(w io.Writer, name, path string, flatSum, cumSum int64,
|
||||
}
|
||||
|
||||
// printFunctionSourceLine prints a source line and the corresponding assembly.
|
||||
func printFunctionSourceLine(w io.Writer, fn *graph.Node, assembly []assemblyInstruction, rpt *Report) {
|
||||
func printFunctionSourceLine(w io.Writer, fn *graph.Node, assembly []assemblyInstruction, reader *sourceReader, rpt *Report) {
|
||||
if len(assembly) == 0 {
|
||||
fmt.Fprintf(w,
|
||||
"<span class=line> %6d</span> <span class=nop> %10s %10s %s </span>\n",
|
||||
"<span class=line> %6d</span> <span class=nop> %10s %10s %8s %s </span>\n",
|
||||
fn.Info.Lineno,
|
||||
valueOrDot(fn.Flat, rpt), valueOrDot(fn.Cum, rpt),
|
||||
template.HTMLEscapeString(fn.Info.Name))
|
||||
"", template.HTMLEscapeString(fn.Info.Name))
|
||||
return
|
||||
}
|
||||
|
||||
fmt.Fprintf(w,
|
||||
"<span class=line> %6d</span> <span class=deadsrc> %10s %10s %s </span>",
|
||||
"<span class=line> %6d</span> <span class=deadsrc> %10s %10s %8s %s </span>",
|
||||
fn.Info.Lineno,
|
||||
valueOrDot(fn.Flat, rpt), valueOrDot(fn.Cum, rpt),
|
||||
template.HTMLEscapeString(fn.Info.Name))
|
||||
"", template.HTMLEscapeString(fn.Info.Name))
|
||||
srcIndent := indentation(fn.Info.Name)
|
||||
fmt.Fprint(w, "<span class=asm>")
|
||||
for _, an := range assembly {
|
||||
var curCalls []callID
|
||||
for i, an := range assembly {
|
||||
if an.startsBlock && i != 0 {
|
||||
// Insert a separator between discontiguous blocks.
|
||||
fmt.Fprintf(w, " %8s %28s\n", "", "⋮")
|
||||
}
|
||||
|
||||
var fileline string
|
||||
class := "disasmloc"
|
||||
if an.file != "" {
|
||||
fileline = fmt.Sprintf("%s:%d", template.HTMLEscapeString(an.file), an.line)
|
||||
if an.line != fn.Info.Lineno {
|
||||
class = "unimportant"
|
||||
}
|
||||
}
|
||||
flat, cum := an.flat, an.cum
|
||||
if an.flatDiv != 0 {
|
||||
@ -322,11 +382,30 @@ func printFunctionSourceLine(w io.Writer, fn *graph.Node, assembly []assemblyIns
|
||||
if an.cumDiv != 0 {
|
||||
cum = cum / an.cumDiv
|
||||
}
|
||||
fmt.Fprintf(w, " %8s %10s %10s %8x: %-48s <span class=%s>%s</span>\n", "",
|
||||
valueOrDot(flat, rpt), valueOrDot(cum, rpt),
|
||||
an.address,
|
||||
template.HTMLEscapeString(an.instruction),
|
||||
class,
|
||||
|
||||
// Print inlined call context.
|
||||
for j, c := range an.inlineCalls {
|
||||
if j < len(curCalls) && curCalls[j] == c {
|
||||
// Skip if same as previous instruction.
|
||||
continue
|
||||
}
|
||||
curCalls = nil
|
||||
fname := trimPath(c.file)
|
||||
fline, ok := reader.line(fname, c.line)
|
||||
if !ok {
|
||||
fline = ""
|
||||
}
|
||||
text := strings.Repeat(" ", srcIndent+4+4*j) + strings.TrimSpace(fline)
|
||||
fmt.Fprintf(w, " %8s %10s %10s %8s <span class=inlinesrc>%s</span> <span class=unimportant>%s:%d</span>\n",
|
||||
"", "", "", "",
|
||||
template.HTMLEscapeString(fmt.Sprintf("%-80s", text)),
|
||||
template.HTMLEscapeString(filepath.Base(fname)), c.line)
|
||||
}
|
||||
curCalls = an.inlineCalls
|
||||
text := strings.Repeat(" ", srcIndent+4+4*len(curCalls)) + an.instruction
|
||||
fmt.Fprintf(w, " %8s %10s %10s %8x: %s <span class=unimportant>%s</span>\n",
|
||||
"", valueOrDot(flat, rpt), valueOrDot(cum, rpt), an.address,
|
||||
template.HTMLEscapeString(fmt.Sprintf("%-80s", text)),
|
||||
template.HTMLEscapeString(fileline))
|
||||
}
|
||||
fmt.Fprintln(w, "</span>")
|
||||
@ -345,14 +424,10 @@ func printPageClosing(w io.Writer) {
|
||||
// getSourceFromFile collects the sources of a function from a source
|
||||
// file and annotates it with the samples in fns. Returns the sources
|
||||
// as nodes, using the info.name field to hold the source code.
|
||||
func getSourceFromFile(file, sourcePath string, fns graph.Nodes, start, end int) (graph.Nodes, string, error) {
|
||||
func getSourceFromFile(file string, reader *sourceReader, fns graph.Nodes, start, end int) (graph.Nodes, string, error) {
|
||||
file = trimPath(file)
|
||||
f, err := openSourceFile(file, sourcePath)
|
||||
if err != nil {
|
||||
return nil, file, err
|
||||
}
|
||||
|
||||
lineNodes := make(map[int]graph.Nodes)
|
||||
|
||||
// Collect source coordinates from profile.
|
||||
const margin = 5 // Lines before first/after last sample.
|
||||
if start == 0 {
|
||||
@ -382,23 +457,17 @@ func getSourceFromFile(file, sourcePath string, fns graph.Nodes, start, end int)
|
||||
}
|
||||
lineNodes[lineno] = append(lineNodes[lineno], n)
|
||||
}
|
||||
if start < 1 {
|
||||
start = 1
|
||||
}
|
||||
|
||||
var src graph.Nodes
|
||||
buf := bufio.NewReader(f)
|
||||
lineno := 1
|
||||
for {
|
||||
line, err := buf.ReadString('\n')
|
||||
if err != nil {
|
||||
if err != io.EOF {
|
||||
return nil, file, err
|
||||
}
|
||||
if line == "" {
|
||||
for lineno := start; lineno <= end; lineno++ {
|
||||
line, ok := reader.line(file, lineno)
|
||||
if !ok {
|
||||
break
|
||||
}
|
||||
}
|
||||
if lineno >= start {
|
||||
flat, cum := lineNodes[lineno].Sum()
|
||||
|
||||
src = append(src, &graph.Node{
|
||||
Info: graph.NodeInfo{
|
||||
Name: strings.TrimRight(line, "\n"),
|
||||
@ -408,10 +477,8 @@ func getSourceFromFile(file, sourcePath string, fns graph.Nodes, start, end int)
|
||||
Cum: cum,
|
||||
})
|
||||
}
|
||||
lineno++
|
||||
if lineno > end {
|
||||
break
|
||||
}
|
||||
if err := reader.fileError(file); err != nil {
|
||||
return nil, file, err
|
||||
}
|
||||
return src, file, nil
|
||||
}
|
||||
@ -446,6 +513,57 @@ func getMissingFunctionSource(filename string, asm map[int][]assemblyInstruction
|
||||
return fnodes, filename
|
||||
}
|
||||
|
||||
// sourceReader provides access to source code with caching of file contents.
|
||||
type sourceReader struct {
|
||||
searchPath string
|
||||
|
||||
// files maps from path name to a list of lines.
|
||||
// files[*][0] is unused since line numbering starts at 1.
|
||||
files map[string][]string
|
||||
|
||||
// errors collects errors encountered per file. These errors are
|
||||
// consulted before returning out of this module.
|
||||
errors map[string]error
|
||||
}
|
||||
|
||||
func newSourceReader(searchPath string) *sourceReader {
|
||||
return &sourceReader{
|
||||
searchPath,
|
||||
make(map[string][]string),
|
||||
make(map[string]error),
|
||||
}
|
||||
}
|
||||
|
||||
func (reader *sourceReader) fileError(path string) error {
|
||||
return reader.errors[path]
|
||||
}
|
||||
|
||||
func (reader *sourceReader) line(path string, lineno int) (string, bool) {
|
||||
lines, ok := reader.files[path]
|
||||
if !ok {
|
||||
// Read and cache file contents.
|
||||
lines = []string{""} // Skip 0th line
|
||||
f, err := openSourceFile(path, reader.searchPath)
|
||||
if err != nil {
|
||||
reader.errors[path] = err
|
||||
} else {
|
||||
s := bufio.NewScanner(f)
|
||||
for s.Scan() {
|
||||
lines = append(lines, s.Text())
|
||||
}
|
||||
f.Close()
|
||||
if s.Err() != nil {
|
||||
reader.errors[path] = err
|
||||
}
|
||||
}
|
||||
reader.files[path] = lines
|
||||
}
|
||||
if lineno <= 0 || lineno >= len(lines) {
|
||||
return "", false
|
||||
}
|
||||
return lines[lineno], true
|
||||
}
|
||||
|
||||
// openSourceFile opens a source file from a name encoded in a
|
||||
// profile. File names in a profile are often relative paths, so
|
||||
// search them in each of the paths in searchPath (or CWD by default),
|
||||
@ -492,3 +610,20 @@ func trimPath(path string) string {
|
||||
}
|
||||
return path
|
||||
}
|
||||
|
||||
func indentation(line string) int {
|
||||
column := 0
|
||||
for _, c := range line {
|
||||
if c == ' ' {
|
||||
column++
|
||||
} else if c == '\t' {
|
||||
column++
|
||||
for column%8 != 0 {
|
||||
column++
|
||||
}
|
||||
} else {
|
||||
break
|
||||
}
|
||||
}
|
||||
return column
|
||||
}
|
||||
|
39
src/cmd/vendor/github.com/google/pprof/internal/report/source_html.go
generated
vendored
@ -14,12 +14,17 @@
|
||||
|
||||
package report
|
||||
|
||||
const weblistPageHeader = `
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Pprof listing</title>
|
||||
<style type="text/css">
|
||||
import (
|
||||
"html/template"
|
||||
)
|
||||
|
||||
// AddSourceTemplates adds templates used by PrintWebList to t.
|
||||
func AddSourceTemplates(t *template.Template) {
|
||||
template.Must(t.Parse(`{{define "weblistcss"}}` + weblistPageCSS + `{{end}}`))
|
||||
template.Must(t.Parse(`{{define "weblistjs"}}` + weblistPageScript + `{{end}}`))
|
||||
}
|
||||
|
||||
const weblistPageCSS = `<style type="text/css">
|
||||
body {
|
||||
font-family: sans-serif;
|
||||
}
|
||||
@ -30,17 +35,11 @@ h1 {
|
||||
.legend {
|
||||
font-size: 1.25em;
|
||||
}
|
||||
.line {
|
||||
.line, .nop, .unimportant {
|
||||
color: #aaaaaa;
|
||||
}
|
||||
.nop {
|
||||
color: #aaaaaa;
|
||||
}
|
||||
.unimportant {
|
||||
color: #cccccc;
|
||||
}
|
||||
.disasmloc {
|
||||
color: #000000;
|
||||
.inlinesrc {
|
||||
color: #000066;
|
||||
}
|
||||
.deadsrc {
|
||||
cursor: pointer;
|
||||
@ -59,8 +58,9 @@ background-color: #eeeeee;
|
||||
color: #008800;
|
||||
display: none;
|
||||
}
|
||||
</style>
|
||||
<script type="text/javascript">
|
||||
</style>`
|
||||
|
||||
const weblistPageScript = `<script type="text/javascript">
|
||||
function pprof_toggle_asm(e) {
|
||||
var target;
|
||||
if (!e) e = window.event;
|
||||
@ -76,10 +76,7 @@ function pprof_toggle_asm(e) {
|
||||
}
|
||||
}
|
||||
}
|
||||
</script>
|
||||
</head>
|
||||
<body>
|
||||
`
|
||||
</script>`
|
||||
|
||||
const weblistPageClosing = `
|
||||
</body>
|
||||
|
89
src/cmd/vendor/github.com/google/pprof/internal/report/source_test.go
generated
vendored
Normal file
@ -0,0 +1,89 @@
|
||||
package report
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/google/pprof/internal/binutils"
|
||||
"github.com/google/pprof/profile"
|
||||
)
|
||||
|
||||
func TestWebList(t *testing.T) {
|
||||
if runtime.GOOS != "linux" || runtime.GOARCH != "amd64" {
|
||||
t.Skip("weblist only tested on x86-64 linux")
|
||||
}
|
||||
|
||||
cpu := readProfile(filepath.Join("testdata", "sample.cpu"), t)
|
||||
rpt := New(cpu, &Options{
|
||||
OutputFormat: WebList,
|
||||
Symbol: regexp.MustCompile("busyLoop"),
|
||||
SampleValue: func(v []int64) int64 { return v[1] },
|
||||
SampleUnit: cpu.SampleType[1].Unit,
|
||||
})
|
||||
buf := bytes.NewBuffer(nil)
|
||||
if err := Generate(buf, rpt, &binutils.Binutils{}); err != nil {
|
||||
t.Fatalf("could not generate weblist: %v", err)
|
||||
}
|
||||
output := buf.String()
|
||||
|
||||
for _, expect := range []string{"func busyLoop", "callq", "math.Abs"} {
|
||||
if !strings.Contains(output, expect) {
|
||||
t.Errorf("weblist output does not contain '%s':\n%s", expect, output)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestIndentation(t *testing.T) {
|
||||
for _, c := range []struct {
|
||||
str string
|
||||
wantIndent int
|
||||
}{
|
||||
{"", 0},
|
||||
{"foobar", 0},
|
||||
{" foo", 2},
|
||||
{"\tfoo", 8},
|
||||
{"\t foo", 9},
|
||||
{" \tfoo", 8},
|
||||
{" \tfoo", 8},
|
||||
{" \tfoo", 16},
|
||||
} {
|
||||
if n := indentation(c.str); n != c.wantIndent {
|
||||
t.Errorf("indentation(%v): got %d, want %d", c.str, n, c.wantIndent)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func readProfile(fname string, t *testing.T) *profile.Profile {
|
||||
file, err := os.Open(fname)
|
||||
if err != nil {
|
||||
t.Fatalf("%s: could not open profile: %v", fname, err)
|
||||
}
|
||||
defer file.Close()
|
||||
p, err := profile.Parse(file)
|
||||
if err != nil {
|
||||
t.Fatalf("%s: could not parse profile: %v", fname, err)
|
||||
}
|
||||
|
||||
// Fix file names so they do not include absolute path names.
|
||||
fix := func(s string) string {
|
||||
const testdir = "/internal/report/"
|
||||
pos := strings.Index(s, testdir)
|
||||
if pos == -1 {
|
||||
return s
|
||||
}
|
||||
return s[pos+len(testdir):]
|
||||
}
|
||||
for _, m := range p.Mapping {
|
||||
m.File = fix(m.File)
|
||||
}
|
||||
for _, f := range p.Function {
|
||||
f.Filename = fix(f.Filename)
|
||||
}
|
||||
|
||||
return p
|
||||
}
|
10
src/cmd/vendor/github.com/google/pprof/internal/report/testdata/README.md
generated
vendored
Normal file
@ -0,0 +1,10 @@
sample/ contains a sample program that can be profiled.
sample.bin is its x86-64 binary.
sample.cpu is a profile generated by sample.bin.

To update the binary and profile:

```shell
go build -o sample.bin ./sample
./sample.bin -cpuprofile sample.cpu
```
BIN
src/cmd/vendor/github.com/google/pprof/internal/report/testdata/sample.bin
generated
vendored
Executable file
Binary file not shown.
BIN
src/cmd/vendor/github.com/google/pprof/internal/report/testdata/sample.cpu
generated
vendored
Normal file
Binary file not shown.
41
src/cmd/vendor/github.com/google/pprof/internal/report/testdata/sample/sample.go
generated
vendored
Normal file
@ -0,0 +1,41 @@
// sample program that is used to produce some of the files in
// pprof/internal/report/testdata.
package main

import (
	"flag"
	"fmt"
	"log"
	"math"
	"os"
	"runtime/pprof"
)

var cpuProfile = flag.String("cpuprofile", "", "where to write cpu profile")

func main() {
	flag.Parse()
	f, err := os.Create(*cpuProfile)
	if err != nil {
		log.Fatal("could not create CPU profile: ", err)
	}
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal("could not start CPU profile: ", err)
	}
	defer pprof.StopCPUProfile()
	busyLoop()
}

func busyLoop() {
	m := make(map[int]int)
	for i := 0; i < 1000000; i++ {
		m[i] = i + 10
	}
	var sum float64
	for i := 0; i < 100; i++ {
		for _, v := range m {
			sum += math.Abs(float64(v))
		}
	}
	fmt.Println("Sum", sum)
}
|
14
src/cmd/vendor/github.com/google/pprof/internal/report/testdata/source.dot
generated
vendored
@ -1,13 +1,13 @@
|
||||
digraph "unnamed" {
|
||||
node [style=filled fillcolor="#f8f8f8"]
|
||||
subgraph cluster_L { "Duration: 10s, Total samples = 11111 " [shape=box fontsize=16 label="Duration: 10s, Total samples = 11111 \lShowing nodes accounting for 11111, 100% of 11111 total\l"] }
|
||||
N1 [label="tee\nsource2:8\n10000 (90.00%)" fontsize=24 shape=box tooltip="tee testdata/source2:8 (10000)" color="#b20500" fillcolor="#edd6d5"]
|
||||
N2 [label="main\nsource1:2\n1 (0.009%)\nof 11111 (100%)" fontsize=9 shape=box tooltip="main testdata/source1:2 (11111)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N3 [label="tee\nsource2:2\n1000 (9.00%)\nof 11000 (99.00%)" fontsize=14 shape=box tooltip="tee testdata/source2:2 (11000)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N4 [label="tee\nsource2:8\n100 (0.9%)" fontsize=10 shape=box tooltip="tee testdata/source2:8 (100)" color="#b2b0aa" fillcolor="#edecec"]
|
||||
N5 [label="bar\nsource1:10\n10 (0.09%)" fontsize=9 shape=box tooltip="bar testdata/source1:10 (10)" color="#b2b2b1" fillcolor="#ededed"]
|
||||
N6 [label="bar\nsource1:10\n0 of 100 (0.9%)" fontsize=8 shape=box tooltip="bar testdata/source1:10 (100)" color="#b2b0aa" fillcolor="#edecec"]
|
||||
N7 [label="foo\nsource1:4\n0 of 10 (0.09%)" fontsize=8 shape=box tooltip="foo testdata/source1:4 (10)" color="#b2b2b1" fillcolor="#ededed"]
|
||||
N1 [label="tee\nsource2:8\n10000 (90.00%)" id="node1" fontsize=24 shape=box tooltip="tee testdata/source2:8 (10000)" color="#b20500" fillcolor="#edd6d5"]
|
||||
N2 [label="main\nsource1:2\n1 (0.009%)\nof 11111 (100%)" id="node2" fontsize=9 shape=box tooltip="main testdata/source1:2 (11111)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N3 [label="tee\nsource2:2\n1000 (9.00%)\nof 11000 (99.00%)" id="node3" fontsize=14 shape=box tooltip="tee testdata/source2:2 (11000)" color="#b20000" fillcolor="#edd5d5"]
|
||||
N4 [label="tee\nsource2:8\n100 (0.9%)" id="node4" fontsize=10 shape=box tooltip="tee testdata/source2:8 (100)" color="#b2b0aa" fillcolor="#edecec"]
|
||||
N5 [label="bar\nsource1:10\n10 (0.09%)" id="node5" fontsize=9 shape=box tooltip="bar testdata/source1:10 (10)" color="#b2b2b1" fillcolor="#ededed"]
|
||||
N6 [label="bar\nsource1:10\n0 of 100 (0.9%)" id="node6" fontsize=8 shape=box tooltip="bar testdata/source1:10 (100)" color="#b2b0aa" fillcolor="#edecec"]
|
||||
N7 [label="foo\nsource1:4\n0 of 10 (0.09%)" id="node7" fontsize=8 shape=box tooltip="foo testdata/source1:4 (10)" color="#b2b2b1" fillcolor="#ededed"]
|
||||
N2 -> N3 [label=" 11000" weight=100 penwidth=5 color="#b20000" tooltip="main testdata/source1:2 -> tee testdata/source2:2 (11000)" labeltooltip="main testdata/source1:2 -> tee testdata/source2:2 (11000)"]
|
||||
N3 -> N1 [label=" 10000" weight=91 penwidth=5 color="#b20500" tooltip="tee testdata/source2:2 -> tee testdata/source2:8 (10000)" labeltooltip="tee testdata/source2:2 -> tee testdata/source2:8 (10000)"]
|
||||
N6 -> N4 [label=" 100" color="#b2b0aa" tooltip="bar testdata/source1:10 -> tee testdata/source2:8 (100)" labeltooltip="bar testdata/source1:10 -> tee testdata/source2:8 (100)"]
|
||||
|
53
src/cmd/vendor/github.com/google/pprof/internal/symbolizer/symbolizer.go
generated
vendored
@ -18,6 +18,7 @@
|
||||
package symbolizer
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
@ -41,21 +42,26 @@ type Symbolizer struct {
|
||||
// test taps for dependency injection
|
||||
var symbolzSymbolize = symbolz.Symbolize
|
||||
var localSymbolize = doLocalSymbolize
|
||||
var demangleFunction = Demangle
|
||||
|
||||
// Symbolize attempts to symbolize profile p. First uses binutils on
|
||||
// local binaries; if the source is a URL it attempts to get any
|
||||
// missed entries using symbolz.
|
||||
func (s *Symbolizer) Symbolize(mode string, sources plugin.MappingSources, p *profile.Profile) error {
|
||||
remote, local, force, demanglerMode := true, true, false, ""
|
||||
remote, local, fast, force, demanglerMode := true, true, false, false, ""
|
||||
for _, o := range strings.Split(strings.ToLower(mode), ":") {
|
||||
switch o {
|
||||
case "":
|
||||
continue
|
||||
case "none", "no":
|
||||
return nil
|
||||
case "local", "fastlocal":
|
||||
case "local":
|
||||
remote, local = false, true
|
||||
case "fastlocal":
|
||||
remote, local, fast = false, true, true
|
||||
case "remote":
|
||||
remote, local = true, false
|
||||
case "", "force":
|
||||
case "force":
|
||||
force = true
|
||||
default:
|
||||
switch d := strings.TrimPrefix(o, "demangle="); d {
|
||||
@ -74,29 +80,48 @@ func (s *Symbolizer) Symbolize(mode string, sources plugin.MappingSources, p *pr
|
||||
var err error
|
||||
if local {
|
||||
// Symbolize locally using binutils.
|
||||
if err = localSymbolize(mode, p, s.Obj, s.UI); err != nil {
|
||||
if err = localSymbolize(p, fast, force, s.Obj, s.UI); err != nil {
|
||||
s.UI.PrintErr("local symbolization: " + err.Error())
|
||||
}
|
||||
}
|
||||
if remote {
|
||||
if err = symbolzSymbolize(sources, postURL, p, s.UI); err != nil {
|
||||
if err = symbolzSymbolize(p, force, sources, postURL, s.UI); err != nil {
|
||||
return err // Ran out of options.
|
||||
}
|
||||
}
|
||||
|
||||
Demangle(p, force, demanglerMode)
|
||||
demangleFunction(p, force, demanglerMode)
|
||||
return nil
|
||||
}
|
||||
|
||||
// postURL issues a POST to a URL over HTTP.
|
||||
func postURL(source, post string) ([]byte, error) {
|
||||
resp, err := http.Post(source, "application/octet-stream", strings.NewReader(post))
|
||||
url, err := url.Parse(source)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var tlsConfig *tls.Config
|
||||
if url.Scheme == "https+insecure" {
|
||||
tlsConfig = &tls.Config{
|
||||
InsecureSkipVerify: true,
|
||||
}
|
||||
url.Scheme = "https"
|
||||
source = url.String()
|
||||
}
|
||||
|
||||
client := &http.Client{
|
||||
Transport: &http.Transport{
|
||||
TLSClientConfig: tlsConfig,
|
||||
},
|
||||
}
|
||||
resp, err := client.Post(source, "application/octet-stream", strings.NewReader(post))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("http post %s: %v", source, err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
return nil, statusCodeError(resp)
|
||||
return nil, fmt.Errorf("http post %s: %v", source, statusCodeError(resp))
|
||||
}
|
||||
return ioutil.ReadAll(resp.Body)
|
||||
}
|
||||
@ -114,19 +139,11 @@ func statusCodeError(resp *http.Response) error {
|
||||
// doLocalSymbolize adds symbol and line number information to all locations
|
||||
// in a profile. mode enables some options to control
|
||||
// symbolization.
|
||||
func doLocalSymbolize(mode string, prof *profile.Profile, obj plugin.ObjTool, ui plugin.UI) error {
|
||||
force := false
|
||||
// Disable some mechanisms based on mode string.
|
||||
for _, o := range strings.Split(strings.ToLower(mode), ":") {
|
||||
switch {
|
||||
case o == "force":
|
||||
force = true
|
||||
case o == "fastlocal":
|
||||
func doLocalSymbolize(prof *profile.Profile, fast, force bool, obj plugin.ObjTool, ui plugin.UI) error {
|
||||
if fast {
|
||||
if bu, ok := obj.(*binutils.Binutils); ok {
|
||||
bu.SetFastSymbolization(true)
|
||||
}
|
||||
default:
|
||||
}
|
||||
}
|
||||
|
||||
mt, err := newMapping(prof, obj, ui, force)
|
||||
|
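The symbolizer change above parses the mode string once into independent remote/local/fast/force flags plus a demangler mode. Below is a standalone sketch of that option handling, not part of the vendored code; the helper name and sample mode string are invented, and the `none`/`no` short-circuit is omitted for brevity.

```go
package main

import (
	"fmt"
	"strings"
)

// parseMode sketches the option handling in Symbolize above: each
// colon-separated token toggles a flag or selects a demangler mode.
func parseMode(mode string) (remote, local, fast, force bool, demangler string) {
	remote, local = true, true
	for _, o := range strings.Split(strings.ToLower(mode), ":") {
		switch o {
		case "":
			continue
		case "local":
			remote, local = false, true
		case "fastlocal":
			remote, local, fast = false, true, true
		case "remote":
			remote, local = true, false
		case "force":
			force = true
		default:
			demangler = strings.TrimPrefix(o, "demangle=")
		}
	}
	return
}

func main() {
	// "fastlocal:demangle=full" -> local-only, fast symbolization, full demangling.
	fmt.Println(parseMode("fastlocal:demangle=full"))
}
```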
70
src/cmd/vendor/github.com/google/pprof/internal/symbolizer/symbolizer_test.go
generated
vendored
@ -17,6 +17,7 @@ package symbolizer
|
||||
import (
|
||||
"fmt"
|
||||
"regexp"
|
||||
"sort"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
@ -101,9 +102,11 @@ func TestSymbolization(t *testing.T) {
|
||||
defer func() {
|
||||
symbolzSymbolize = sSym
|
||||
localSymbolize = lSym
|
||||
demangleFunction = Demangle
|
||||
}()
|
||||
symbolzSymbolize = symbolzMock
|
||||
localSymbolize = localMock
|
||||
demangleFunction = demangleMock
|
||||
|
||||
type testcase struct {
|
||||
mode string
|
||||
@ -117,19 +120,35 @@ func TestSymbolization(t *testing.T) {
|
||||
for i, tc := range []testcase{
|
||||
{
|
||||
"local",
|
||||
"local=local",
|
||||
"local=[]",
|
||||
},
|
||||
{
|
||||
"fastlocal",
|
||||
"local=fastlocal",
|
||||
"local=[fast]",
|
||||
},
|
||||
{
|
||||
"remote",
|
||||
"symbolz",
|
||||
"symbolz=[]",
|
||||
},
|
||||
{
|
||||
"",
|
||||
"local=:symbolz",
|
||||
"local=[]:symbolz=[]",
|
||||
},
|
||||
{
|
||||
"demangle=none",
|
||||
"demangle=[none]:force:local=[force]:symbolz=[force]",
|
||||
},
|
||||
{
|
||||
"remote:demangle=full",
|
||||
"demangle=[full]:force:symbolz=[force]",
|
||||
},
|
||||
{
|
||||
"local:demangle=templates",
|
||||
"demangle=[templates]:force:local=[force]",
|
||||
},
|
||||
{
|
||||
"force:remote",
|
||||
"force:symbolz=[force]",
|
||||
},
|
||||
} {
|
||||
prof := testProfile.Copy()
|
||||
@ -137,23 +156,44 @@ func TestSymbolization(t *testing.T) {
|
||||
t.Errorf("symbolize #%d: %v", i, err)
|
||||
continue
|
||||
}
|
||||
sort.Strings(prof.Comments)
|
||||
if got, want := strings.Join(prof.Comments, ":"), tc.wantComment; got != want {
|
||||
t.Errorf("got %s, want %s", got, want)
|
||||
t.Errorf("%q: got %s, want %s", tc.mode, got, want)
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func symbolzMock(sources plugin.MappingSources, syms func(string, string) ([]byte, error), p *profile.Profile, ui plugin.UI) error {
|
||||
p.Comments = append(p.Comments, "symbolz")
|
||||
func symbolzMock(p *profile.Profile, force bool, sources plugin.MappingSources, syms func(string, string) ([]byte, error), ui plugin.UI) error {
|
||||
var args []string
|
||||
if force {
|
||||
args = append(args, "force")
|
||||
}
|
||||
p.Comments = append(p.Comments, "symbolz=["+strings.Join(args, ",")+"]")
|
||||
return nil
|
||||
}
|
||||
|
||||
func localMock(mode string, p *profile.Profile, obj plugin.ObjTool, ui plugin.UI) error {
|
||||
p.Comments = append(p.Comments, "local="+mode)
|
||||
func localMock(p *profile.Profile, fast, force bool, obj plugin.ObjTool, ui plugin.UI) error {
|
||||
var args []string
|
||||
if fast {
|
||||
args = append(args, "fast")
|
||||
}
|
||||
if force {
|
||||
args = append(args, "force")
|
||||
}
|
||||
p.Comments = append(p.Comments, "local=["+strings.Join(args, ",")+"]")
|
||||
return nil
|
||||
}
|
||||
|
||||
func demangleMock(p *profile.Profile, force bool, mode string) {
|
||||
if force {
|
||||
p.Comments = append(p.Comments, "force")
|
||||
}
|
||||
if mode != "" {
|
||||
p.Comments = append(p.Comments, "demangle=["+mode+"]")
|
||||
}
|
||||
}
|
||||
|
||||
func TestLocalSymbolization(t *testing.T) {
|
||||
prof := testProfile.Copy()
|
||||
|
||||
@ -165,7 +205,7 @@ func TestLocalSymbolization(t *testing.T) {
|
||||
}
|
||||
|
||||
b := mockObjTool{}
|
||||
if err := localSymbolize("", prof, b, &proftest.TestUI{T: t}); err != nil {
|
||||
if err := localSymbolize(prof, false, false, b, &proftest.TestUI{T: t}); err != nil {
|
||||
t.Fatalf("localSymbolize(): %v", err)
|
||||
}
|
||||
|
||||
@ -207,11 +247,11 @@ func checkSymbolizedLocation(a uint64, got []profile.Line) error {
|
||||
}
|
||||
|
||||
var mockAddresses = map[uint64][]plugin.Frame{
|
||||
1000: []plugin.Frame{frame("fun11", "file11.src", 10)},
|
||||
2000: []plugin.Frame{frame("fun21", "file21.src", 20), frame("fun22", "file22.src", 20)},
|
||||
3000: []plugin.Frame{frame("fun31", "file31.src", 30), frame("fun32", "file32.src", 30), frame("fun33", "file33.src", 30)},
|
||||
4000: []plugin.Frame{frame("fun41", "file41.src", 40), frame("fun42", "file42.src", 40), frame("fun43", "file43.src", 40), frame("fun44", "file44.src", 40)},
|
||||
5000: []plugin.Frame{frame("fun51", "file51.src", 50), frame("fun52", "file52.src", 50), frame("fun53", "file53.src", 50), frame("fun54", "file54.src", 50), frame("fun55", "file55.src", 50)},
|
||||
1000: {frame("fun11", "file11.src", 10)},
|
||||
2000: {frame("fun21", "file21.src", 20), frame("fun22", "file22.src", 20)},
|
||||
3000: {frame("fun31", "file31.src", 30), frame("fun32", "file32.src", 30), frame("fun33", "file33.src", 30)},
|
||||
4000: {frame("fun41", "file41.src", 40), frame("fun42", "file42.src", 40), frame("fun43", "file43.src", 40), frame("fun44", "file44.src", 40)},
|
||||
5000: {frame("fun51", "file51.src", 50), frame("fun52", "file52.src", 50), frame("fun53", "file53.src", 50), frame("fun54", "file54.src", 50), frame("fun55", "file55.src", 50)},
|
||||
}
|
||||
|
||||
func frame(fname, file string, line int) plugin.Frame {
|
||||
|
20
src/cmd/vendor/github.com/google/pprof/internal/symbolz/symbolz.go
generated
vendored
@ -36,12 +36,13 @@ var (
|
||||
|
||||
// Symbolize symbolizes profile p by parsing data returned by a
|
||||
// symbolz handler. syms receives the symbolz query (hex addresses
|
||||
// separated by '+') and returns the symbolz output in a string. It
|
||||
// symbolizes all locations based on their addresses, regardless of
|
||||
// mapping.
|
||||
func Symbolize(sources plugin.MappingSources, syms func(string, string) ([]byte, error), p *profile.Profile, ui plugin.UI) error {
|
||||
// separated by '+') and returns the symbolz output in a string. If
|
||||
// force is false, it will only symbolize locations from mappings
|
||||
// not already marked as HasFunctions.
|
||||
func Symbolize(p *profile.Profile, force bool, sources plugin.MappingSources, syms func(string, string) ([]byte, error), ui plugin.UI) error {
|
||||
for _, m := range p.Mapping {
|
||||
if m.HasFunctions {
|
||||
if !force && m.HasFunctions {
|
||||
// Only check for HasFunctions as symbolz only populates function names.
|
||||
continue
|
||||
}
|
||||
mappingSources := sources[m.File]
|
||||
@ -65,24 +66,21 @@ func Symbolize(sources plugin.MappingSources, syms func(string, string) ([]byte,
|
||||
// symbolz returns the corresponding symbolz source for a profile URL.
|
||||
func symbolz(source string) string {
|
||||
if url, err := url.Parse(source); err == nil && url.Host != "" {
|
||||
if strings.Contains(url.Path, "/") {
|
||||
if dir := path.Dir(url.Path); dir == "/debug/pprof" {
|
||||
// For Go language profile handlers in net/http/pprof package.
|
||||
url.Path = "/debug/pprof/symbol"
|
||||
if strings.Contains(url.Path, "/debug/pprof/") {
|
||||
url.Path = path.Clean(url.Path + "/../symbol")
|
||||
} else {
|
||||
url.Path = "/symbolz"
|
||||
}
|
||||
url.RawQuery = ""
|
||||
return url.String()
|
||||
}
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
|
||||
// symbolizeMapping symbolizes locations belonging to a Mapping by querying
|
||||
// a symbolz handler. An offset is applied to all addresses to take care of
|
||||
// normalization occured for merged Mappings.
|
||||
// normalization occurred for merged Mappings.
|
||||
func symbolizeMapping(source string, offset int64, syms func(string, string) ([]byte, error), m *profile.Mapping, p *profile.Profile) error {
|
||||
// Construct query of addresses to symbolize.
|
||||
var a []string
|
||||
|
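The hunk above generalizes the symbolz endpoint derivation: any profile URL whose path contains /debug/pprof/ now gets a sibling ".../symbol" path via path.Clean, instead of requiring the handler to sit at the server root. A small standalone illustration (host and path are invented), not part of the vendored code:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

func main() {
	// A profile URL served by net/http/pprof behind a path prefix.
	u, err := url.Parse("http://some.host:8080/deeper/path/debug/pprof/profile?seconds=10")
	if err != nil {
		panic(err)
	}
	// Same rewrite as symbolz() above: replace the last path element with "symbol".
	u.Path = path.Clean(u.Path + "/../symbol")
	u.RawQuery = ""
	fmt.Println(u.String()) // http://some.host:8080/deeper/path/debug/pprof/symbol
}
```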
72
src/cmd/vendor/github.com/google/pprof/internal/symbolz/symbolz_test.go
generated
vendored
@ -33,6 +33,7 @@ func TestSymbolzURL(t *testing.T) {
|
||||
"http://host:8000/debug/pprof/profile": "http://host:8000/debug/pprof/symbol",
|
||||
"http://host:8000/debug/pprof/profile?seconds=10": "http://host:8000/debug/pprof/symbol",
|
||||
"http://host:8000/debug/pprof/heap": "http://host:8000/debug/pprof/symbol",
|
||||
"http://some.host:8080/some/deeper/path/debug/pprof/endpoint?param=value": "http://some.host:8080/some/deeper/path/debug/pprof/symbol",
|
||||
} {
|
||||
if got := symbolz(try); got != want {
|
||||
t.Errorf(`symbolz(%s)=%s, want "%s"`, try, got, want)
|
||||
@ -41,12 +42,49 @@ func TestSymbolzURL(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestSymbolize(t *testing.T) {
|
||||
s := plugin.MappingSources{
|
||||
"buildid": []struct {
|
||||
Source string
|
||||
Start uint64
|
||||
}{
|
||||
{Source: "http://localhost:80/profilez"},
|
||||
},
|
||||
}
|
||||
|
||||
for _, hasFunctions := range []bool{false, true} {
|
||||
for _, force := range []bool{false, true} {
|
||||
p := testProfile(hasFunctions)
|
||||
|
||||
if err := Symbolize(p, force, s, fetchSymbols, &proftest.TestUI{T: t}); err != nil {
|
||||
t.Errorf("symbolz: %v", err)
|
||||
continue
|
||||
}
|
||||
var wantSym, wantNoSym []*profile.Location
|
||||
if force || !hasFunctions {
|
||||
wantNoSym = p.Location[:1]
|
||||
wantSym = p.Location[1:]
|
||||
} else {
|
||||
wantNoSym = p.Location
|
||||
}
|
||||
|
||||
if err := checkSymbolized(wantSym, true); err != nil {
|
||||
t.Errorf("symbolz hasFns=%v force=%v: %v", hasFunctions, force, err)
|
||||
}
|
||||
if err := checkSymbolized(wantNoSym, false); err != nil {
|
||||
t.Errorf("symbolz hasFns=%v force=%v: %v", hasFunctions, force, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func testProfile(hasFunctions bool) *profile.Profile {
|
||||
m := []*profile.Mapping{
|
||||
{
|
||||
ID: 1,
|
||||
Start: 0x1000,
|
||||
Limit: 0x5000,
|
||||
BuildID: "buildid",
|
||||
HasFunctions: hasFunctions,
|
||||
},
|
||||
}
|
||||
p := &profile.Profile{
|
||||
@ -59,34 +97,26 @@ func TestSymbolize(t *testing.T) {
|
||||
Mapping: m,
|
||||
}
|
||||
|
||||
s := plugin.MappingSources{
|
||||
"buildid": []struct {
|
||||
Source string
|
||||
Start uint64
|
||||
}{
|
||||
{Source: "http://localhost:80/profilez"},
|
||||
},
|
||||
return p
|
||||
}
|
||||
|
||||
if err := Symbolize(s, fetchSymbols, p, &proftest.TestUI{T: t}); err != nil {
|
||||
t.Errorf("symbolz: %v", err)
|
||||
func checkSymbolized(locs []*profile.Location, wantSymbolized bool) error {
|
||||
for _, loc := range locs {
|
||||
if !wantSymbolized && len(loc.Line) != 0 {
|
||||
return fmt.Errorf("unexpected symbolization for %#x: %v", loc.Address, loc.Line)
|
||||
}
|
||||
|
||||
if l := p.Location[0]; len(l.Line) != 0 {
|
||||
t.Errorf("unexpected symbolization for %#x: %v", l.Address, l.Line)
|
||||
if wantSymbolized {
|
||||
if len(loc.Line) != 1 {
|
||||
return fmt.Errorf("expected symbolization for %#x: %v", loc.Address, loc.Line)
|
||||
}
|
||||
|
||||
for _, l := range p.Location[1:] {
|
||||
if len(l.Line) != 1 {
|
||||
t.Errorf("failed to symbolize %#x", l.Address)
|
||||
continue
|
||||
}
|
||||
address := l.Address - l.Mapping.Start
|
||||
if got, want := l.Line[0].Function.Name, fmt.Sprintf("%#x", address); got != want {
|
||||
t.Errorf("symbolz %#x, got %s, want %s", address, got, want)
|
||||
address := loc.Address - loc.Mapping.Start
|
||||
if got, want := loc.Line[0].Function.Name, fmt.Sprintf("%#x", address); got != want {
|
||||
return fmt.Errorf("symbolz %#x, got %s, want %s", address, got, want)
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func fetchSymbols(source, post string) ([]byte, error) {
|
||||
var symbolz string
|
||||
|
38
src/cmd/vendor/github.com/google/pprof/profile/encode.go
generated
vendored
@ -59,12 +59,19 @@ func (p *Profile) preEncode() {
|
||||
}
|
||||
sort.Strings(numKeys)
|
||||
for _, k := range numKeys {
|
||||
keyX := addString(strings, k)
|
||||
vs := s.NumLabel[k]
|
||||
for _, v := range vs {
|
||||
units := s.NumUnit[k]
|
||||
for i, v := range vs {
|
||||
var unitX int64
|
||||
if len(units) != 0 {
|
||||
unitX = addString(strings, units[i])
|
||||
}
|
||||
s.labelX = append(s.labelX,
|
||||
label{
|
||||
keyX: addString(strings, k),
|
||||
keyX: keyX,
|
||||
numX: v,
|
||||
unitX: unitX,
|
||||
},
|
||||
)
|
||||
}
|
||||
@ -289,6 +296,7 @@ func (p *Profile) postDecode() error {
|
||||
for _, s := range p.Sample {
|
||||
labels := make(map[string][]string, len(s.labelX))
|
||||
numLabels := make(map[string][]int64, len(s.labelX))
|
||||
numUnits := make(map[string][]string, len(s.labelX))
|
||||
for _, l := range s.labelX {
|
||||
var key, value string
|
||||
key, err = getString(p.stringTable, &l.keyX, err)
|
||||
@ -296,6 +304,14 @@ func (p *Profile) postDecode() error {
|
||||
value, err = getString(p.stringTable, &l.strX, err)
|
||||
labels[key] = append(labels[key], value)
|
||||
} else if l.numX != 0 {
|
||||
numValues := numLabels[key]
|
||||
units := numUnits[key]
|
||||
if l.unitX != 0 {
|
||||
var unit string
|
||||
unit, err = getString(p.stringTable, &l.unitX, err)
|
||||
units = padStringArray(units, len(numValues))
|
||||
numUnits[key] = append(units, unit)
|
||||
}
|
||||
numLabels[key] = append(numLabels[key], l.numX)
|
||||
}
|
||||
}
|
||||
@ -304,6 +320,12 @@ func (p *Profile) postDecode() error {
|
||||
}
|
||||
if len(numLabels) > 0 {
|
||||
s.NumLabel = numLabels
|
||||
for key, units := range numUnits {
|
||||
if len(units) > 0 {
|
||||
numUnits[key] = padStringArray(units, len(numLabels[key]))
|
||||
}
|
||||
}
|
||||
s.NumUnit = numUnits
|
||||
}
|
||||
s.Location = make([]*Location, len(s.locationIDX))
|
||||
for i, lid := range s.locationIDX {
|
||||
@ -340,6 +362,15 @@ func (p *Profile) postDecode() error {
|
||||
return err
|
||||
}
|
||||
|
||||
// padStringArray pads arr with enough empty strings to make arr
|
||||
// length l when arr's length is less than l.
|
||||
func padStringArray(arr []string, l int) []string {
|
||||
if l <= len(arr) {
|
||||
return arr
|
||||
}
|
||||
return append(arr, make([]string, l-len(arr))...)
|
||||
}
|
||||
|
||||
func (p *ValueType) decoder() []decoder {
|
||||
return valueTypeDecoder
|
||||
}
|
||||
@ -392,6 +423,7 @@ func (p label) encode(b *buffer) {
|
||||
encodeInt64Opt(b, 1, p.keyX)
|
||||
encodeInt64Opt(b, 2, p.strX)
|
||||
encodeInt64Opt(b, 3, p.numX)
|
||||
encodeInt64Opt(b, 4, p.unitX)
|
||||
}
|
||||
|
||||
var labelDecoder = []decoder{
|
||||
@ -402,6 +434,8 @@ var labelDecoder = []decoder{
|
||||
func(b *buffer, m message) error { return decodeInt64(b, &m.(*label).strX) },
|
||||
// optional int64 num = 3
|
||||
func(b *buffer, m message) error { return decodeInt64(b, &m.(*label).numX) },
|
||||
// optional int64 num = 4
|
||||
func(b *buffer, m message) error { return decodeInt64(b, &m.(*label).unitX) },
|
||||
}
|
||||
|
||||
func (p *Mapping) decoder() []decoder {
|
||||
|
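To make the NumUnit handling above concrete: units stay index-aligned with the numeric label values, and padStringArray fills the gaps with empty strings when only some values carried a unit. A minimal standalone sketch of that behavior (the surrounding program is invented):

```go
package main

import "fmt"

// padStringArray follows the same padding rule as the decoder above:
// extend arr with empty strings until it is at least l elements long.
func padStringArray(arr []string, l int) []string {
	if l <= len(arr) {
		return arr
	}
	return append(arr, make([]string, l-len(arr))...)
}

func main() {
	// Three numeric values, but only the first carried a unit.
	units := padStringArray([]string{"bytes"}, 3)
	fmt.Printf("%q\n", units) // ["bytes" "" ""]
}
```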
3
src/cmd/vendor/github.com/google/pprof/profile/filter.go
generated
vendored
@ -41,10 +41,11 @@ func (p *Profile) FilterSamplesByName(focus, ignore, hide, show *regexp.Regexp)
|
||||
}
|
||||
}
|
||||
if show != nil {
|
||||
hnm = true
|
||||
l.Line = l.matchedLines(show)
|
||||
if len(l.Line) == 0 {
|
||||
hidden[l.ID] = true
|
||||
} else {
|
||||
hnm = true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
5
src/cmd/vendor/github.com/google/pprof/profile/legacy_java_profile.go
generated
vendored
@ -212,7 +212,10 @@ func parseJavaSamples(pType string, b []byte, p *Profile) ([]byte, map[uint64]*L
|
||||
switch pType {
|
||||
case "heap":
|
||||
const javaHeapzSamplingRate = 524288 // 512K
|
||||
s.NumLabel = map[string][]int64{"bytes": []int64{s.Value[1] / s.Value[0]}}
|
||||
if s.Value[0] == 0 {
|
||||
return nil, nil, fmt.Errorf("parsing sample %s: second value must be non-zero", line)
|
||||
}
|
||||
s.NumLabel = map[string][]int64{"bytes": {s.Value[1] / s.Value[0]}}
|
||||
s.Value[0], s.Value[1] = scaleHeapSample(s.Value[0], s.Value[1], javaHeapzSamplingRate)
|
||||
case "contention":
|
||||
if period := p.Period; period != 0 {
|
||||
|
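The guard added in the Java heap hunk above protects the per-object "bytes" label, which divides the total byte count by the object count. A hedged arithmetic sketch, with invented values and a hypothetical helper name:

```go
package main

import (
	"errors"
	"fmt"
)

// bytesPerObject mirrors the NumLabel computation above: the second sample
// value is total bytes, the first is the object count, and a zero count is
// rejected before dividing.
func bytesPerObject(count, totalBytes int64) (int64, error) {
	if count == 0 {
		return 0, errors.New("object count must be non-zero")
	}
	return totalBytes / count, nil
}

func main() {
	fmt.Println(bytesPerObject(4, 524288)) // 131072 <nil>
}
```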
43
src/cmd/vendor/github.com/google/pprof/profile/merge.go
generated
vendored
@ -85,6 +85,41 @@ func Merge(srcs []*Profile) (*Profile, error) {
|
||||
return p, nil
|
||||
}
|
||||
|
||||
// Normalize normalizes the source profile by multiplying each value in profile by the
|
||||
// ratio of the sum of the base profile's values of that sample type to the sum of the
|
||||
// source profile's value of that sample type.
|
||||
func (p *Profile) Normalize(pb *Profile) error {
|
||||
|
||||
if err := p.compatible(pb); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
baseVals := make([]int64, len(p.SampleType))
|
||||
for _, s := range pb.Sample {
|
||||
for i, v := range s.Value {
|
||||
baseVals[i] += v
|
||||
}
|
||||
}
|
||||
|
||||
srcVals := make([]int64, len(p.SampleType))
|
||||
for _, s := range p.Sample {
|
||||
for i, v := range s.Value {
|
||||
srcVals[i] += v
|
||||
}
|
||||
}
|
||||
|
||||
normScale := make([]float64, len(baseVals))
|
||||
for i := range baseVals {
|
||||
if srcVals[i] == 0 {
|
||||
normScale[i] = 0.0
|
||||
} else {
|
||||
normScale[i] = float64(baseVals[i]) / float64(srcVals[i])
|
||||
}
|
||||
}
|
||||
p.ScaleN(normScale)
|
||||
return nil
|
||||
}
|
||||
|
||||
func isZeroSample(s *Sample) bool {
|
||||
for _, v := range s.Value {
|
||||
if v != 0 {
|
||||
@ -120,6 +155,7 @@ func (pm *profileMerger) mapSample(src *Sample) *Sample {
|
||||
Value: make([]int64, len(src.Value)),
|
||||
Label: make(map[string][]string, len(src.Label)),
|
||||
NumLabel: make(map[string][]int64, len(src.NumLabel)),
|
||||
NumUnit: make(map[string][]string, len(src.NumLabel)),
|
||||
}
|
||||
for i, l := range src.Location {
|
||||
s.Location[i] = pm.mapLocation(l)
|
||||
@ -130,9 +166,13 @@ func (pm *profileMerger) mapSample(src *Sample) *Sample {
|
||||
s.Label[k] = vv
|
||||
}
|
||||
for k, v := range src.NumLabel {
|
||||
u := src.NumUnit[k]
|
||||
vv := make([]int64, len(v))
|
||||
uu := make([]string, len(u))
|
||||
copy(vv, v)
|
||||
copy(uu, u)
|
||||
s.NumLabel[k] = vv
|
||||
s.NumUnit[k] = uu
|
||||
}
|
||||
// Check memoization table. Must be done on the remapped location to
|
||||
// account for the remapped mapping. Add current values to the
|
||||
@ -165,7 +205,7 @@ func (sample *Sample) key() sampleKey {
|
||||
|
||||
numlabels := make([]string, 0, len(sample.NumLabel))
|
||||
for k, v := range sample.NumLabel {
|
||||
numlabels = append(numlabels, fmt.Sprintf("%q%x", k, v))
|
||||
numlabels = append(numlabels, fmt.Sprintf("%q%x%x", k, v, sample.NumUnit[k]))
|
||||
}
|
||||
sort.Strings(numlabels)
|
||||
|
||||
@ -432,7 +472,6 @@ func (p *Profile) compatible(pb *Profile) error {
|
||||
return fmt.Errorf("incompatible sample types %v and %v", p.SampleType, pb.SampleType)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
|
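As a worked example of Normalize above: for each sample type the scale factor is the base profile's total divided by the source profile's total, and every source value is multiplied by it. The sketch below is not part of the vendored code; the totals are taken from the test profiles that appear later in this commit.

```go
package main

import "fmt"

func main() {
	baseTotal := int64(221)  // sum of the base profile's values for one sample type
	srcTotal := int64(11111) // sum of the source profile's values for the same type
	scale := float64(baseTotal) / float64(srcTotal)

	// A source sample worth 10000 scales down to 198 after normalization.
	fmt.Printf("scale=%.6f scaled=%d\n", scale, int64(float64(10000)*scale))
}
```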
245
src/cmd/vendor/github.com/google/pprof/profile/profile.go
generated
vendored
@ -26,6 +26,7 @@ import (
|
||||
"regexp"
|
||||
"sort"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
@ -47,6 +48,10 @@ type Profile struct {
|
||||
PeriodType *ValueType
|
||||
Period int64
|
||||
|
||||
// The following fields are modified during encoding and copying,
|
||||
// so are protected by a Mutex.
|
||||
encodeMu sync.Mutex
|
||||
|
||||
commentX []int64
|
||||
dropFramesX int64
|
||||
keepFramesX int64
|
||||
@ -69,6 +74,7 @@ type Sample struct {
|
||||
Value []int64
|
||||
Label map[string][]string
|
||||
NumLabel map[string][]int64
|
||||
NumUnit map[string][]string
|
||||
|
||||
locationIDX []uint64
|
||||
labelX []label
|
||||
@ -80,6 +86,8 @@ type label struct {
|
||||
// Exactly one of the two following values must be set
|
||||
strX int64
|
||||
numX int64 // Integer value for this label
|
||||
// can be set if numX has value
|
||||
unitX int64
|
||||
}
|
||||
|
||||
// Mapping corresponds to Profile.Mapping
|
||||
@ -296,21 +304,25 @@ func (p *Profile) updateLocationMapping(from, to *Mapping) {
|
||||
}
|
||||
}
|
||||
|
||||
// Write writes the profile as a gzip-compressed marshaled protobuf.
|
||||
func (p *Profile) Write(w io.Writer) error {
|
||||
func serialize(p *Profile) []byte {
|
||||
p.encodeMu.Lock()
|
||||
p.preEncode()
|
||||
b := marshal(p)
|
||||
p.encodeMu.Unlock()
|
||||
return b
|
||||
}
|
||||
|
||||
// Write writes the profile as a gzip-compressed marshaled protobuf.
|
||||
func (p *Profile) Write(w io.Writer) error {
|
||||
zw := gzip.NewWriter(w)
|
||||
defer zw.Close()
|
||||
_, err := zw.Write(b)
|
||||
_, err := zw.Write(serialize(p))
|
||||
return err
|
||||
}
|
||||
|
||||
// WriteUncompressed writes the profile as a marshaled protobuf.
|
||||
func (p *Profile) WriteUncompressed(w io.Writer) error {
|
||||
p.preEncode()
|
||||
b := marshal(p)
|
||||
_, err := w.Write(b)
|
||||
_, err := w.Write(serialize(p))
|
||||
return err
|
||||
}
|
||||
|
||||
@ -325,8 +337,11 @@ func (p *Profile) CheckValid() error {
|
||||
return fmt.Errorf("missing sample type information")
|
||||
}
|
||||
for _, s := range p.Sample {
|
||||
if s == nil {
|
||||
return fmt.Errorf("profile has nil sample")
|
||||
}
|
||||
if len(s.Value) != sampleLen {
|
||||
return fmt.Errorf("mismatch: sample has: %d values vs. %d types", len(s.Value), len(p.SampleType))
|
||||
return fmt.Errorf("mismatch: sample has %d values vs. %d types", len(s.Value), len(p.SampleType))
|
||||
}
|
||||
for _, l := range s.Location {
|
||||
if l == nil {
|
||||
@ -339,6 +354,9 @@ func (p *Profile) CheckValid() error {
|
||||
// Check that there are no duplicate ids
|
||||
mappings := make(map[uint64]*Mapping, len(p.Mapping))
|
||||
for _, m := range p.Mapping {
|
||||
if m == nil {
|
||||
return fmt.Errorf("profile has nil mapping")
|
||||
}
|
||||
if m.ID == 0 {
|
||||
return fmt.Errorf("found mapping with reserved ID=0")
|
||||
}
|
||||
@ -349,6 +367,9 @@ func (p *Profile) CheckValid() error {
|
||||
}
|
||||
functions := make(map[uint64]*Function, len(p.Function))
|
||||
for _, f := range p.Function {
|
||||
if f == nil {
|
||||
return fmt.Errorf("profile has nil function")
|
||||
}
|
||||
if f.ID == 0 {
|
||||
return fmt.Errorf("found function with reserved ID=0")
|
||||
}
|
||||
@ -359,6 +380,9 @@ func (p *Profile) CheckValid() error {
|
||||
}
|
||||
locations := make(map[uint64]*Location, len(p.Location))
|
||||
for _, l := range p.Location {
|
||||
if l == nil {
|
||||
return fmt.Errorf("profile has nil location")
|
||||
}
|
||||
if l.ID == 0 {
|
||||
return fmt.Errorf("found location with reserved id=0")
|
||||
}
|
||||
@ -426,6 +450,70 @@ func (p *Profile) Aggregate(inlineFrame, function, filename, linenumber, address
|
||||
return p.CheckValid()
|
||||
}
|
||||
|
||||
// NumLabelUnits returns a map of numeric label keys to the units
|
||||
// associated with those keys and a map of those keys to any units
|
||||
// that were encountered but not used.
|
||||
// Unit for a given key is the first encountered unit for that key. If multiple
|
||||
// units are encountered for values paired with a particular key, then the first
|
||||
// unit encountered is used and all other units are returned in sorted order
|
||||
// in map of ignored units.
|
||||
// If no units are encountered for a particular key, the unit is then inferred
|
||||
// based on the key.
|
||||
func (p *Profile) NumLabelUnits() (map[string]string, map[string][]string) {
|
||||
numLabelUnits := map[string]string{}
|
||||
ignoredUnits := map[string]map[string]bool{}
|
||||
encounteredKeys := map[string]bool{}
|
||||
|
||||
// Determine units based on numeric tags for each sample.
|
||||
for _, s := range p.Sample {
|
||||
for k := range s.NumLabel {
|
||||
encounteredKeys[k] = true
|
||||
for _, unit := range s.NumUnit[k] {
|
||||
if unit == "" {
|
||||
continue
|
||||
}
|
||||
if wantUnit, ok := numLabelUnits[k]; !ok {
|
||||
numLabelUnits[k] = unit
|
||||
} else if wantUnit != unit {
|
||||
if v, ok := ignoredUnits[k]; ok {
|
||||
v[unit] = true
|
||||
} else {
|
||||
ignoredUnits[k] = map[string]bool{unit: true}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
// Infer units for keys without any units associated with
|
||||
// numeric tag values.
|
||||
for key := range encounteredKeys {
|
||||
unit := numLabelUnits[key]
|
||||
if unit == "" {
|
||||
switch key {
|
||||
case "alignment", "request":
|
||||
numLabelUnits[key] = "bytes"
|
||||
default:
|
||||
numLabelUnits[key] = key
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Copy ignored units into more readable format
|
||||
unitsIgnored := make(map[string][]string, len(ignoredUnits))
|
||||
for key, values := range ignoredUnits {
|
||||
units := make([]string, len(values))
|
||||
i := 0
|
||||
for unit := range values {
|
||||
units[i] = unit
|
||||
i++
|
||||
}
|
||||
sort.Strings(units)
|
||||
unitsIgnored[key] = units
|
||||
}
|
||||
|
||||
return numLabelUnits, unitsIgnored
|
||||
}
|
||||
|
||||
// String dumps a text representation of a profile. Intended mainly
|
||||
// for debugging purposes.
|
||||
func (p *Profile) String() string {
|
||||
@ -455,36 +543,50 @@ func (p *Profile) String() string {
|
||||
}
|
||||
ss = append(ss, strings.TrimSpace(sh1))
|
||||
for _, s := range p.Sample {
|
||||
var sv string
|
||||
for _, v := range s.Value {
|
||||
sv = fmt.Sprintf("%s %10d", sv, v)
|
||||
}
|
||||
sv = sv + ": "
|
||||
for _, l := range s.Location {
|
||||
sv = sv + fmt.Sprintf("%d ", l.ID)
|
||||
}
|
||||
ss = append(ss, sv)
|
||||
const labelHeader = " "
|
||||
if len(s.Label) > 0 {
|
||||
ls := []string{}
|
||||
for k, v := range s.Label {
|
||||
ls = append(ls, fmt.Sprintf("%s:%v", k, v))
|
||||
}
|
||||
sort.Strings(ls)
|
||||
ss = append(ss, labelHeader+strings.Join(ls, " "))
|
||||
}
|
||||
if len(s.NumLabel) > 0 {
|
||||
ls := []string{}
|
||||
for k, v := range s.NumLabel {
|
||||
ls = append(ls, fmt.Sprintf("%s:%v", k, v))
|
||||
}
|
||||
sort.Strings(ls)
|
||||
ss = append(ss, labelHeader+strings.Join(ls, " "))
|
||||
}
|
||||
ss = append(ss, s.string())
|
||||
}
|
||||
|
||||
ss = append(ss, "Locations")
|
||||
for _, l := range p.Location {
|
||||
ss = append(ss, l.string())
|
||||
}
|
||||
|
||||
ss = append(ss, "Mappings")
|
||||
for _, m := range p.Mapping {
|
||||
ss = append(ss, m.string())
|
||||
}
|
||||
|
||||
return strings.Join(ss, "\n") + "\n"
|
||||
}
|
||||
|
||||
// string dumps a text representation of a mapping. Intended mainly
|
||||
// for debugging purposes.
|
||||
func (m *Mapping) string() string {
|
||||
bits := ""
|
||||
if m.HasFunctions {
|
||||
bits = bits + "[FN]"
|
||||
}
|
||||
if m.HasFilenames {
|
||||
bits = bits + "[FL]"
|
||||
}
|
||||
if m.HasLineNumbers {
|
||||
bits = bits + "[LN]"
|
||||
}
|
||||
if m.HasInlineFrames {
|
||||
bits = bits + "[IN]"
|
||||
}
|
||||
return fmt.Sprintf("%d: %#x/%#x/%#x %s %s %s",
|
||||
m.ID,
|
||||
m.Start, m.Limit, m.Offset,
|
||||
m.File,
|
||||
m.BuildID,
|
||||
bits)
|
||||
}
|
||||
|
||||
// string dumps a text representation of a location. Intended mainly
|
||||
// for debugging purposes.
|
||||
func (l *Location) string() string {
|
||||
ss := []string{}
|
||||
locStr := fmt.Sprintf("%6d: %#x ", l.ID, l.Address)
|
||||
if m := l.Mapping; m != nil {
|
||||
locStr = locStr + fmt.Sprintf("M=%d ", m.ID)
|
||||
@ -508,32 +610,63 @@ func (p *Profile) String() string {
|
||||
// Do not print location details past the first line
|
||||
locStr = " "
|
||||
}
|
||||
return strings.Join(ss, "\n")
|
||||
}
|
||||
|
||||
ss = append(ss, "Mappings")
|
||||
for _, m := range p.Mapping {
|
||||
bits := ""
|
||||
if m.HasFunctions {
|
||||
bits = bits + "[FN]"
|
||||
// string dumps a text representation of a sample. Intended mainly
|
||||
// for debugging purposes.
|
||||
func (s *Sample) string() string {
|
||||
ss := []string{}
|
||||
var sv string
|
||||
for _, v := range s.Value {
|
||||
sv = fmt.Sprintf("%s %10d", sv, v)
|
||||
}
|
||||
if m.HasFilenames {
|
||||
bits = bits + "[FL]"
|
||||
sv = sv + ": "
|
||||
for _, l := range s.Location {
|
||||
sv = sv + fmt.Sprintf("%d ", l.ID)
|
||||
}
|
||||
if m.HasLineNumbers {
|
||||
bits = bits + "[LN]"
|
||||
ss = append(ss, sv)
|
||||
const labelHeader = " "
|
||||
if len(s.Label) > 0 {
|
||||
ss = append(ss, labelHeader+labelsToString(s.Label))
|
||||
}
|
||||
if m.HasInlineFrames {
|
||||
bits = bits + "[IN]"
|
||||
if len(s.NumLabel) > 0 {
|
||||
ss = append(ss, labelHeader+numLabelsToString(s.NumLabel, s.NumUnit))
|
||||
}
|
||||
ss = append(ss, fmt.Sprintf("%d: %#x/%#x/%#x %s %s %s",
|
||||
m.ID,
|
||||
m.Start, m.Limit, m.Offset,
|
||||
m.File,
|
||||
m.BuildID,
|
||||
bits))
|
||||
return strings.Join(ss, "\n")
|
||||
}
|
||||
|
||||
return strings.Join(ss, "\n") + "\n"
|
||||
// labelsToString returns a string representation of a
|
||||
// map representing labels.
|
||||
func labelsToString(labels map[string][]string) string {
|
||||
ls := []string{}
|
||||
for k, v := range labels {
|
||||
ls = append(ls, fmt.Sprintf("%s:%v", k, v))
|
||||
}
|
||||
sort.Strings(ls)
|
||||
return strings.Join(ls, " ")
|
||||
}
|
||||
|
||||
// numLablesToString returns a string representation of a map
|
||||
// representing numeric labels.
|
||||
func numLabelsToString(numLabels map[string][]int64, numUnits map[string][]string) string {
|
||||
ls := []string{}
|
||||
for k, v := range numLabels {
|
||||
units := numUnits[k]
|
||||
var labelString string
|
||||
if len(units) == len(v) {
|
||||
values := make([]string, len(v))
|
||||
for i, vv := range v {
|
||||
values[i] = fmt.Sprintf("%d %s", vv, units[i])
|
||||
}
|
||||
labelString = fmt.Sprintf("%s:%v", k, values)
|
||||
} else {
|
||||
labelString = fmt.Sprintf("%s:%v", k, v)
|
||||
}
|
||||
ls = append(ls, labelString)
|
||||
}
|
||||
sort.Strings(ls)
|
||||
return strings.Join(ls, " ")
|
||||
}
|
||||
|
||||
// Scale multiplies all sample values in a profile by a constant.
|
||||
@ -596,19 +729,17 @@ func (p *Profile) HasFileLines() bool {
|
||||
}
|
||||
|
||||
// Unsymbolizable returns true if a mapping points to a binary for which
|
||||
// locations can't be symbolized in principle, at least now.
|
||||
// locations can't be symbolized in principle, at least now. Examples are
|
||||
// "[vdso]", [vsyscall]" and some others, see the code.
|
||||
func (m *Mapping) Unsymbolizable() bool {
|
||||
name := filepath.Base(m.File)
|
||||
return name == "[vdso]" || strings.HasPrefix(name, "linux-vdso") || name == "[heap]" || strings.HasPrefix(m.File, "/dev/dri/")
|
||||
return strings.HasPrefix(name, "[") || strings.HasPrefix(name, "linux-vdso") || strings.HasPrefix(m.File, "/dev/dri/")
|
||||
}
|
||||
|
||||
// Copy makes a fully independent copy of a profile.
|
||||
func (p *Profile) Copy() *Profile {
|
||||
p.preEncode()
|
||||
b := marshal(p)
|
||||
|
||||
pp := &Profile{}
|
||||
if err := unmarshal(b, pp); err != nil {
|
||||
if err := unmarshal(serialize(p), pp); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
if err := pp.postDecode(); err != nil {
|
||||
|
557
src/cmd/vendor/github.com/google/pprof/profile/profile_test.go
generated
vendored
@ -19,8 +19,10 @@ import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"strings"
|
||||
"sync"
|
||||
"testing"
|
||||
|
||||
"github.com/google/pprof/internal/proftest"
|
||||
@ -91,7 +93,6 @@ func TestParse(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestParseError(t *testing.T) {
|
||||
|
||||
testcases := []string{
|
||||
"",
|
||||
"garbage text",
|
||||
@ -107,6 +108,63 @@ func TestParseError(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestCheckValid(t *testing.T) {
|
||||
const path = "testdata/java.cpu"
|
||||
|
||||
inbytes, err := ioutil.ReadFile(path)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read profile file %q: %v", path, err)
|
||||
}
|
||||
p, err := Parse(bytes.NewBuffer(inbytes))
|
||||
if err != nil {
|
||||
t.Fatalf("failed to parse profile %q: %s", path, err)
|
||||
}
|
||||
|
||||
for _, tc := range []struct {
|
||||
mutateFn func(*Profile)
|
||||
wantErr string
|
||||
}{
|
||||
{
|
||||
mutateFn: func(p *Profile) { p.SampleType = nil },
|
||||
wantErr: "missing sample type information",
|
||||
},
|
||||
{
|
||||
mutateFn: func(p *Profile) { p.Sample[0] = nil },
|
||||
wantErr: "profile has nil sample",
|
||||
},
|
||||
{
|
||||
mutateFn: func(p *Profile) { p.Sample[0].Value = append(p.Sample[0].Value, 0) },
|
||||
wantErr: "sample has 3 values vs. 2 types",
|
||||
},
|
||||
{
|
||||
mutateFn: func(p *Profile) { p.Sample[0].Location[0] = nil },
|
||||
wantErr: "sample has nil location",
|
||||
},
|
||||
{
|
||||
mutateFn: func(p *Profile) { p.Location[0] = nil },
|
||||
wantErr: "profile has nil location",
|
||||
},
|
||||
{
|
||||
mutateFn: func(p *Profile) { p.Mapping = append(p.Mapping, nil) },
|
||||
wantErr: "profile has nil mapping",
|
||||
},
|
||||
{
|
||||
mutateFn: func(p *Profile) { p.Function[0] = nil },
|
||||
wantErr: "profile has nil function",
|
||||
},
|
||||
} {
|
||||
t.Run(tc.wantErr, func(t *testing.T) {
|
||||
p := p.Copy()
|
||||
tc.mutateFn(p)
|
||||
if err := p.CheckValid(); err == nil {
|
||||
t.Errorf("CheckValid(): got no error, want error %q", tc.wantErr)
|
||||
} else if !strings.Contains(err.Error(), tc.wantErr) {
|
||||
t.Errorf("CheckValid(): got error %v, want error %q", err, tc.wantErr)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// leaveTempfile leaves |b| in a temporary file on disk and returns the
|
||||
// temp filename. This is useful to recover a profile when the test
|
||||
// fails.
|
||||
@ -217,7 +275,7 @@ var cpuL = []*Location{
|
||||
},
|
||||
}
|
||||
|
||||
var testProfile = &Profile{
|
||||
var testProfile1 = &Profile{
|
||||
PeriodType: &ValueType{Type: "cpu", Unit: "milliseconds"},
|
||||
Period: 1,
|
||||
DurationNanos: 10e9,
|
||||
@ -230,40 +288,181 @@ var testProfile = &Profile{
|
||||
Location: []*Location{cpuL[0]},
|
||||
Value: []int64{1000, 1000},
|
||||
Label: map[string][]string{
|
||||
"key1": []string{"tag1"},
|
||||
"key2": []string{"tag1"},
|
||||
"key1": {"tag1"},
|
||||
"key2": {"tag1"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*Location{cpuL[1], cpuL[0]},
|
||||
Value: []int64{100, 100},
|
||||
Label: map[string][]string{
|
||||
"key1": []string{"tag2"},
|
||||
"key3": []string{"tag2"},
|
||||
"key1": {"tag2"},
|
||||
"key3": {"tag2"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*Location{cpuL[2], cpuL[0]},
|
||||
Value: []int64{10, 10},
|
||||
Label: map[string][]string{
|
||||
"key1": []string{"tag3"},
|
||||
"key2": []string{"tag2"},
|
||||
"key1": {"tag3"},
|
||||
"key2": {"tag2"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*Location{cpuL[3], cpuL[0]},
|
||||
Value: []int64{10000, 10000},
|
||||
Label: map[string][]string{
|
||||
"key1": []string{"tag4"},
|
||||
"key2": []string{"tag1"},
|
||||
"key1": {"tag4"},
|
||||
"key2": {"tag1"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*Location{cpuL[4], cpuL[0]},
|
||||
Value: []int64{1, 1},
|
||||
Label: map[string][]string{
|
||||
"key1": []string{"tag4"},
|
||||
"key2": []string{"tag1"},
|
||||
"key1": {"tag4"},
|
||||
"key2": {"tag1"},
|
||||
},
|
||||
},
|
||||
},
|
||||
Location: cpuL,
|
||||
Function: cpuF,
|
||||
Mapping: cpuM,
|
||||
}
|
||||
|
||||
var testProfile2 = &Profile{
|
||||
PeriodType: &ValueType{Type: "cpu", Unit: "milliseconds"},
|
||||
Period: 1,
|
||||
DurationNanos: 10e9,
|
||||
SampleType: []*ValueType{
|
||||
{Type: "samples", Unit: "count"},
|
||||
{Type: "cpu", Unit: "milliseconds"},
|
||||
},
|
||||
Sample: []*Sample{
|
||||
{
|
||||
Location: []*Location{cpuL[0]},
|
||||
Value: []int64{70, 1000},
|
||||
Label: map[string][]string{
|
||||
"key1": {"tag1"},
|
||||
"key2": {"tag1"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*Location{cpuL[1], cpuL[0]},
|
||||
Value: []int64{60, 100},
|
||||
Label: map[string][]string{
|
||||
"key1": {"tag2"},
|
||||
"key3": {"tag2"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*Location{cpuL[2], cpuL[0]},
|
||||
Value: []int64{50, 10},
|
||||
Label: map[string][]string{
|
||||
"key1": {"tag3"},
|
||||
"key2": {"tag2"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*Location{cpuL[3], cpuL[0]},
|
||||
Value: []int64{40, 10000},
|
||||
Label: map[string][]string{
|
||||
"key1": {"tag4"},
|
||||
"key2": {"tag1"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*Location{cpuL[4], cpuL[0]},
|
||||
Value: []int64{1, 1},
|
||||
Label: map[string][]string{
|
||||
"key1": {"tag4"},
|
||||
"key2": {"tag1"},
|
||||
},
|
||||
},
|
||||
},
|
||||
Location: cpuL,
|
||||
Function: cpuF,
|
||||
Mapping: cpuM,
|
||||
}
|
||||
|
||||
var testProfile3 = &Profile{
|
||||
PeriodType: &ValueType{Type: "cpu", Unit: "milliseconds"},
|
||||
Period: 1,
|
||||
DurationNanos: 10e9,
|
||||
SampleType: []*ValueType{
|
||||
{Type: "samples", Unit: "count"},
|
||||
},
|
||||
Sample: []*Sample{
|
||||
{
|
||||
Location: []*Location{cpuL[0]},
|
||||
Value: []int64{1000},
|
||||
Label: map[string][]string{
|
||||
"key1": {"tag1"},
|
||||
"key2": {"tag1"},
|
||||
},
|
||||
},
|
||||
},
|
||||
Location: cpuL,
|
||||
Function: cpuF,
|
||||
Mapping: cpuM,
|
||||
}
|
||||
|
||||
var testProfile4 = &Profile{
|
||||
PeriodType: &ValueType{Type: "cpu", Unit: "milliseconds"},
|
||||
Period: 1,
|
||||
DurationNanos: 10e9,
|
||||
SampleType: []*ValueType{
|
||||
{Type: "samples", Unit: "count"},
|
||||
},
|
||||
Sample: []*Sample{
|
||||
{
|
||||
Location: []*Location{cpuL[0]},
|
||||
Value: []int64{1000},
|
||||
NumLabel: map[string][]int64{
|
||||
"key1": {10},
|
||||
"key2": {30},
|
||||
},
|
||||
NumUnit: map[string][]string{
|
||||
"key1": {"bytes"},
|
||||
"key2": {"bytes"},
|
||||
},
|
||||
},
|
||||
},
|
||||
Location: cpuL,
|
||||
Function: cpuF,
|
||||
Mapping: cpuM,
|
||||
}
|
||||
|
||||
var testProfile5 = &Profile{
|
||||
PeriodType: &ValueType{Type: "cpu", Unit: "milliseconds"},
|
||||
Period: 1,
|
||||
DurationNanos: 10e9,
|
||||
SampleType: []*ValueType{
|
||||
{Type: "samples", Unit: "count"},
|
||||
},
|
||||
Sample: []*Sample{
|
||||
{
|
||||
Location: []*Location{cpuL[0]},
|
||||
Value: []int64{1000},
|
||||
NumLabel: map[string][]int64{
|
||||
"key1": {10},
|
||||
"key2": {30},
|
||||
},
|
||||
NumUnit: map[string][]string{
|
||||
"key1": {"bytes"},
|
||||
"key2": {"bytes"},
|
||||
},
|
||||
},
|
||||
{
|
||||
Location: []*Location{cpuL[0]},
|
||||
Value: []int64{1000},
|
||||
NumLabel: map[string][]int64{
|
||||
"key1": {10},
|
||||
"key2": {30},
|
||||
},
|
||||
NumUnit: map[string][]string{
|
||||
"key1": {"kilobytes"},
|
||||
"key2": {"kilobytes"},
|
||||
},
|
||||
},
|
||||
},
|
||||
@ -273,10 +472,10 @@ var testProfile = &Profile{
|
||||
}
|
||||
|
||||
var aggTests = map[string]aggTest{
|
||||
"precise": aggTest{true, true, true, true, 5},
|
||||
"fileline": aggTest{false, true, true, true, 4},
|
||||
"inline_function": aggTest{false, true, false, true, 3},
|
||||
"function": aggTest{false, true, false, false, 2},
|
||||
"precise": {true, true, true, true, 5},
|
||||
"fileline": {false, true, true, true, 4},
|
||||
"inline_function": {false, true, false, true, 3},
|
||||
"function": {false, true, false, false, 2},
|
||||
}
|
||||
|
||||
type aggTest struct {
|
||||
@ -287,7 +486,7 @@ type aggTest struct {
|
||||
const totalSamples = int64(11111)
|
||||
|
||||
func TestAggregation(t *testing.T) {
|
||||
prof := testProfile.Copy()
|
||||
prof := testProfile1.Copy()
|
||||
for _, resolution := range []string{"precise", "fileline", "inline_function", "function"} {
|
||||
a := aggTests[resolution]
|
||||
if !a.precise {
|
||||
@ -362,7 +561,7 @@ func checkAggregation(prof *Profile, a *aggTest) error {
|
||||
|
||||
// Test merge leaves the main binary in place.
|
||||
func TestMergeMain(t *testing.T) {
|
||||
prof := testProfile.Copy()
|
||||
prof := testProfile1.Copy()
|
||||
p1, err := Merge([]*Profile{prof})
|
||||
if err != nil {
|
||||
t.Fatalf("merge error: %v", err)
|
||||
@ -377,7 +576,7 @@ func TestMerge(t *testing.T) {
|
||||
// -2. Should end up with an empty profile (all samples for a
|
||||
// location should add up to 0).
|
||||
|
||||
prof := testProfile.Copy()
|
||||
prof := testProfile1.Copy()
|
||||
p1, err := Merge([]*Profile{prof, prof})
|
||||
if err != nil {
|
||||
t.Errorf("merge error: %v", err)
|
||||
@ -409,7 +608,7 @@ func TestMergeAll(t *testing.T) {
|
||||
// Aggregate 10 copies of the profile.
|
||||
profs := make([]*Profile, 10)
|
||||
for i := 0; i < 10; i++ {
|
||||
profs[i] = testProfile.Copy()
|
||||
profs[i] = testProfile1.Copy()
|
||||
}
|
||||
prof, err := Merge(profs)
|
||||
if err != nil {
|
||||
@ -420,7 +619,7 @@ func TestMergeAll(t *testing.T) {
|
||||
tb := locationHash(s)
|
||||
samples[tb] = samples[tb] + s.Value[0]
|
||||
}
|
||||
for _, s := range testProfile.Sample {
|
||||
for _, s := range testProfile1.Sample {
|
||||
tb := locationHash(s)
|
||||
if samples[tb] != s.Value[0]*10 {
|
||||
t.Errorf("merge got wrong value at %s : %d instead of %d", tb, samples[tb], s.Value[0]*10)
|
||||
@ -428,6 +627,140 @@ func TestMergeAll(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestNumLabelMerge(t *testing.T) {
|
||||
for _, tc := range []struct {
|
||||
name string
|
||||
profs []*Profile
|
||||
wantNumLabels []map[string][]int64
|
||||
wantNumUnits []map[string][]string
|
||||
}{
|
||||
{
|
||||
name: "different tag units not merged",
|
||||
profs: []*Profile{testProfile4.Copy(), testProfile5.Copy()},
|
||||
wantNumLabels: []map[string][]int64{
|
||||
{
|
||||
"key1": {10},
|
||||
"key2": {30},
|
||||
},
|
||||
{
|
||||
"key1": {10},
|
||||
"key2": {30},
|
||||
},
|
||||
},
|
||||
wantNumUnits: []map[string][]string{
|
||||
{
|
||||
"key1": {"bytes"},
|
||||
"key2": {"bytes"},
|
||||
},
|
||||
{
|
||||
"key1": {"kilobytes"},
|
||||
"key2": {"kilobytes"},
|
||||
},
|
||||
},
|
||||
},
|
||||
} {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
prof, err := Merge(tc.profs)
|
||||
if err != nil {
|
||||
t.Errorf("merge error: %v", err)
|
||||
}
|
||||
|
||||
if want, got := len(tc.wantNumLabels), len(prof.Sample); want != got {
|
||||
t.Fatalf("got %d samples, want %d samples", got, want)
|
||||
}
|
||||
for i, wantLabels := range tc.wantNumLabels {
|
||||
numLabels := prof.Sample[i].NumLabel
|
||||
if !reflect.DeepEqual(wantLabels, numLabels) {
|
||||
t.Errorf("got numeric labels %v, want %v", numLabels, wantLabels)
|
||||
}
|
||||
|
||||
wantUnits := tc.wantNumUnits[i]
|
||||
numUnits := prof.Sample[i].NumUnit
|
||||
if !reflect.DeepEqual(wantUnits, numUnits) {
|
||||
t.Errorf("got numeric labels %v, want %v", numUnits, wantUnits)
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestNormalizeBySameProfile(t *testing.T) {
|
||||
pb := testProfile1.Copy()
|
||||
p := testProfile1.Copy()
|
||||
|
||||
if err := p.Normalize(pb); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
for i, s := range p.Sample {
|
||||
for j, v := range s.Value {
|
||||
expectedSampleValue := testProfile1.Sample[i].Value[j]
|
||||
if v != expectedSampleValue {
|
||||
t.Errorf("For sample %d, value %d want %d got %d", i, j, expectedSampleValue, v)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNormalizeByDifferentProfile(t *testing.T) {
|
||||
p := testProfile1.Copy()
|
||||
pb := testProfile2.Copy()
|
||||
|
||||
if err := p.Normalize(pb); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
expectedSampleValues := [][]int64{
|
||||
{19, 1000},
|
||||
{1, 100},
|
||||
{0, 10},
|
||||
{198, 10000},
|
||||
{0, 1},
|
||||
}
|
||||
|
||||
for i, s := range p.Sample {
|
||||
for j, v := range s.Value {
|
||||
if v != expectedSampleValues[i][j] {
|
||||
t.Errorf("For sample %d, value %d want %d got %d", i, j, expectedSampleValues[i][j], v)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNormalizeByMultipleOfSameProfile(t *testing.T) {
|
||||
pb := testProfile1.Copy()
|
||||
for i, s := range pb.Sample {
|
||||
for j, v := range s.Value {
|
||||
pb.Sample[i].Value[j] = 10 * v
|
||||
}
|
||||
}
|
||||
|
||||
p := testProfile1.Copy()
|
||||
|
||||
err := p.Normalize(pb)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
for i, s := range p.Sample {
|
||||
for j, v := range s.Value {
|
||||
expectedSampleValue := 10 * testProfile1.Sample[i].Value[j]
|
||||
if v != expectedSampleValue {
|
||||
t.Errorf("For sample %d, value %d, want %d got %d", i, j, expectedSampleValue, v)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNormalizeIncompatibleProfiles(t *testing.T) {
|
||||
p := testProfile1.Copy()
|
||||
pb := testProfile3.Copy()
|
||||
|
||||
if err := p.Normalize(pb); err == nil {
|
||||
t.Errorf("Expected an error")
|
||||
}
|
||||
}
|
||||
|
||||
func TestFilter(t *testing.T) {
|
||||
// Perform several forms of filtering on the test profile.
|
||||
|
||||
@ -437,12 +770,33 @@ func TestFilter(t *testing.T) {
|
||||
}
|
||||
|
||||
for tx, tc := range []filterTestcase{
|
||||
{nil, nil, nil, nil, true, false, false, false},
|
||||
{regexp.MustCompile("notfound"), nil, nil, nil, false, false, false, false},
|
||||
{nil, regexp.MustCompile("foo.c"), nil, nil, true, true, false, false},
|
||||
{nil, nil, regexp.MustCompile("lib.so"), nil, true, false, true, false},
|
||||
{
|
||||
fm: true, // nil focus matches every sample
|
||||
},
|
||||
{
|
||||
focus: regexp.MustCompile("notfound"),
|
||||
},
|
||||
{
|
||||
ignore: regexp.MustCompile("foo.c"),
|
||||
fm: true,
|
||||
im: true,
|
||||
},
|
||||
{
|
||||
hide: regexp.MustCompile("lib.so"),
|
||||
fm: true,
|
||||
hm: true,
|
||||
},
|
||||
{
|
||||
show: regexp.MustCompile("foo.c"),
|
||||
fm: true,
|
||||
hnm: true,
|
||||
},
|
||||
{
|
||||
show: regexp.MustCompile("notfound"),
|
||||
fm: true,
|
||||
},
|
||||
} {
|
||||
prof := *testProfile.Copy()
|
||||
prof := *testProfile1.Copy()
|
||||
gf, gi, gh, gnh := prof.FilterSamplesByName(tc.focus, tc.ignore, tc.hide, tc.show)
|
||||
if gf != tc.fm {
|
||||
t.Errorf("Filter #%d, got fm=%v, want %v", tx, gf, tc.fm)
|
||||
@ -488,7 +842,7 @@ func TestTagFilter(t *testing.T) {
|
||||
{regexp.MustCompile("key1"), nil, true, false, 1},
|
||||
{nil, regexp.MustCompile("key[12]"), true, true, 1},
|
||||
} {
|
||||
prof := testProfile.Copy()
|
||||
prof := testProfile1.Copy()
|
||||
gim, gem := prof.FilterTagsByName(tc.include, tc.exclude)
|
||||
if gim != tc.im {
|
||||
t.Errorf("Filter #%d, got include match=%v, want %v", tx, gim, tc.im)
|
||||
@@ -513,9 +867,152 @@ func locationHash(s *Sample) string {
	return tb
}

func TestNumLabelUnits(t *testing.T) {
	var tagFilterTests = []struct {
		desc             string
		tagVals          []map[string][]int64
		tagUnits         []map[string][]string
		wantUnits        map[string]string
		wantIgnoredUnits map[string][]string
	}{
		{
			"One sample, multiple keys, different specified units",
			[]map[string][]int64{{"key1": {131072}, "key2": {128}}},
			[]map[string][]string{{"key1": {"bytes"}, "key2": {"kilobytes"}}},
			map[string]string{"key1": "bytes", "key2": "kilobytes"},
			map[string][]string{},
		},
		{
			"One sample, one key with one value, unit specified",
			[]map[string][]int64{{"key1": {8}}},
			[]map[string][]string{{"key1": {"bytes"}}},
			map[string]string{"key1": "bytes"},
			map[string][]string{},
		},
		{
			"One sample, one key with one value, empty unit specified",
			[]map[string][]int64{{"key1": {8}}},
			[]map[string][]string{{"key1": {""}}},
			map[string]string{"key1": "key1"},
			map[string][]string{},
		},
		{
			"Key bytes, unit not specified",
			[]map[string][]int64{{"bytes": {8}}},
			[]map[string][]string{nil},
			map[string]string{"bytes": "bytes"},
			map[string][]string{},
		},
		{
			"One sample, one key with one value, unit not specified",
			[]map[string][]int64{{"kilobytes": {8}}},
			[]map[string][]string{nil},
			map[string]string{"kilobytes": "kilobytes"},
			map[string][]string{},
		},
		{
			"Key request, unit not specified",
			[]map[string][]int64{{"request": {8}}},
			[]map[string][]string{nil},
			map[string]string{"request": "bytes"},
			map[string][]string{},
		},
		{
			"Key alignment, unit not specified",
			[]map[string][]int64{{"alignment": {8}}},
			[]map[string][]string{nil},
			map[string]string{"alignment": "bytes"},
			map[string][]string{},
		},
		{
			"One sample, one key with multiple values and two different units",
			[]map[string][]int64{{"key1": {8, 8}}},
			[]map[string][]string{{"key1": {"bytes", "kilobytes"}}},
			map[string]string{"key1": "bytes"},
			map[string][]string{"key1": {"kilobytes"}},
		},
		{
			"One sample, one key with multiple values and three different units",
			[]map[string][]int64{{"key1": {8, 8}}},
			[]map[string][]string{{"key1": {"bytes", "megabytes", "kilobytes"}}},
			map[string]string{"key1": "bytes"},
			map[string][]string{"key1": {"kilobytes", "megabytes"}},
		},
		{
			"Two samples, one key, different units specified",
			[]map[string][]int64{{"key1": {8}}, {"key1": {8}}},
			[]map[string][]string{{"key1": {"bytes"}}, {"key1": {"kilobytes"}}},
			map[string]string{"key1": "bytes"},
			map[string][]string{"key1": {"kilobytes"}},
		},
		{
			"Keys alignment, request, and bytes have units specified",
			[]map[string][]int64{{
				"alignment": {8},
				"request":   {8},
				"bytes":     {8},
			}},
			[]map[string][]string{{
				"alignment": {"seconds"},
				"request":   {"minutes"},
				"bytes":     {"hours"},
			}},
			map[string]string{
				"alignment": "seconds",
				"request":   "minutes",
				"bytes":     "hours",
			},
			map[string][]string{},
		},
	}
	for _, test := range tagFilterTests {
		p := &Profile{Sample: make([]*Sample, len(test.tagVals))}
		for i, numLabel := range test.tagVals {
			s := Sample{
				NumLabel: numLabel,
				NumUnit:  test.tagUnits[i],
			}
			p.Sample[i] = &s
		}
		units, ignoredUnits := p.NumLabelUnits()
		if !reflect.DeepEqual(test.wantUnits, units) {
			t.Errorf("%s: got %v units, want %v", test.desc, units, test.wantUnits)
		}
		if !reflect.DeepEqual(test.wantIgnoredUnits, ignoredUnits) {
			t.Errorf("%s: got %v ignored units, want %v", test.desc, ignoredUnits, test.wantIgnoredUnits)
		}
	}
}
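Aside (not part of the upstream diff): as the cases above exercise, NumLabelUnits picks one unit per numeric label key (an explicitly specified unit wins, otherwise the key name itself, with keys such as "request" and "alignment" defaulting to "bytes") and separately reports units that were seen but not chosen. A minimal sketch using the exported profile package; the sample values below are invented:

package main

import (
	"fmt"

	"github.com/google/pprof/profile"
)

func main() {
	p := &profile.Profile{
		Sample: []*profile.Sample{{
			// One numeric label carrying two values with conflicting units.
			NumLabel: map[string][]int64{"key1": {8, 8}},
			NumUnit:  map[string][]string{"key1": {"bytes", "kilobytes"}},
		}},
	}
	units, ignored := p.NumLabelUnits()
	fmt.Println(units)   // map[key1:bytes]
	fmt.Println(ignored) // map[key1:[kilobytes]]
}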

func TestSetMain(t *testing.T) {
-	testProfile.massageMappings()
-	if testProfile.Mapping[0].File != mainBinary {
-		t.Errorf("got %s for main", testProfile.Mapping[0].File)
+	testProfile1.massageMappings()
+	if testProfile1.Mapping[0].File != mainBinary {
+		t.Errorf("got %s for main", testProfile1.Mapping[0].File)
	}
}

// parallel runs n copies of fn in parallel.
func parallel(n int, fn func()) {
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			fn()
			wg.Done()
		}()
	}
	wg.Wait()
}

func TestThreadSafety(t *testing.T) {
	src := testProfile1.Copy()
	parallel(4, func() { src.Copy() })
	parallel(4, func() {
		var b bytes.Buffer
		src.WriteUncompressed(&b)
	})
	parallel(4, func() {
		var b bytes.Buffer
		src.Write(&b)
	})
}

14
src/cmd/vendor/github.com/google/pprof/profile/proto.go
generated
vendored
@@ -71,7 +71,7 @@ func encodeLength(b *buffer, tag int, len int) {

func encodeUint64(b *buffer, tag int, x uint64) {
	// append varint to b.data
-	encodeVarint(b, uint64(tag)<<3|0)
+	encodeVarint(b, uint64(tag)<<3)
	encodeVarint(b, x)
}

@@ -145,13 +145,6 @@ func encodeStrings(b *buffer, tag int, x []string) {
	}
}

-func encodeStringOpt(b *buffer, tag int, x string) {
-	if x == "" {
-		return
-	}
-	encodeString(b, tag, x)
-}
-
func encodeBool(b *buffer, tag int, x bool) {
	if x {
		encodeUint64(b, tag, 1)
@@ -161,11 +154,10 @@ func encodeBool(b *buffer, tag int, x bool) {
}

func encodeBoolOpt(b *buffer, tag int, x bool) {
-	if x == false {
-		return
-	}
+	if x {
+		encodeBool(b, tag, x)
+	}
}

func encodeMessage(b *buffer, tag int, m message) {
	n1 := len(b.data)
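Aside (not part of the upstream diff): the change from uint64(tag)<<3|0 to uint64(tag)<<3 drops a redundant "| 0". In the protocol buffer wire format a field key is the field number shifted left by three bits OR'd with the wire type, and the varint wire type is 0, so OR-ing it in changes nothing. A standalone sketch of that key layout; the wireKey helper is illustrative, not part of proto.go:

package main

import "fmt"

// wireKey builds a protobuf field key: the field number occupies the high
// bits and the 3-bit wire type the low bits (0 = varint, 2 = length-delimited).
func wireKey(fieldNumber int, wireType uint64) uint64 {
	return uint64(fieldNumber)<<3 | wireType
}

func main() {
	fmt.Println(wireKey(1, 0)) // 8: field 1 encoded as a varint
	fmt.Println(wireKey(1, 2)) // 10: field 1 encoded as length-delimited data
}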

19
src/cmd/vendor/github.com/google/pprof/profile/proto_test.go
generated
vendored
@@ -100,8 +100,8 @@ var all = &Profile{
		{
			Location: []*Location{testL[0], testL[1], testL[2], testL[1], testL[1]},
			Label: map[string][]string{
-				"key1": []string{"value1"},
-				"key2": []string{"value2"},
+				"key1": {"value1"},
+				"key2": {"value2"},
			},
			Value: []int64{10, 20},
		},
@@ -109,12 +109,19 @@
			Location: []*Location{testL[1], testL[2], testL[0], testL[1]},
			Value: []int64{30, 40},
			Label: map[string][]string{
-				"key1": []string{"value1"},
-				"key2": []string{"value2"},
+				"key1": {"value1"},
+				"key2": {"value2"},
			},
			NumLabel: map[string][]int64{
-				"key1": []int64{1, 2},
-				"key2": []int64{3, 4},
+				"key1": {1, 2},
+				"key2": {3, 4},
+				"bytes": {3, 4},
+				"requests": {1, 1, 3, 4, 5},
+				"alignment": {3, 4},
			},
+			NumUnit: map[string][]string{
+				"requests": {"", "", "seconds", "", "s"},
+				"alignment": {"kilobytes", "kilobytes"},
+			},
		},
	},
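Aside (not part of the upstream diff): NumUnit runs parallel to NumLabel. For a given key, the i-th unit string describes the i-th value, and an empty string leaves that value's unit unspecified, as the "requests" entry above shows. A small illustrative construction via the exported profile package, with made-up values:

package main

import (
	"fmt"

	"github.com/google/pprof/profile"
)

func main() {
	s := &profile.Sample{
		Value: []int64{1},
		// Three request sizes; only the middle one carries an explicit unit.
		NumLabel: map[string][]int64{"requests": {1, 3, 4}},
		NumUnit:  map[string][]string{"requests": {"", "seconds", ""}},
	}
	// The unit slice is index-aligned with the value slice for the same key.
	fmt.Println(len(s.NumLabel["requests"]) == len(s.NumUnit["requests"])) // true
}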
Some files were not shown because too many files have changed in this diff.