Why is passing a pointer to a channel slower?

Date: 2016-12-16 06:46:55

Tags: pointers go channel

I am new to Go and am trying to rewrite my Java server project in Go.

I found that sending pointers over a channel is almost 30% slower than sending values.

Here is a sample code snippet:

    package main

    import (
        "time"
        "fmt"
    )

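The comparison described above can be reproduced with a minimal sketch along the following lines. This is only an illustration of the pattern, not the original program; the struct fields, buffer size, and iteration count are assumptions:

    package main

    import (
        "fmt"
        "time"
    )

    type item struct {
        a uint
        b uint
    }

    const n = 300000

    func main() {
        // Variant 1: send values. The struct is copied into the channel
        // buffer, so it does not need to be heap-allocated.
        vch := make(chan item, n)
        start := time.Now()
        for i := 0; i < n; i++ {
            vch <- item{a: 3, b: 5}
        }
        fmt.Println("values:  ", time.Since(start))

        // Variant 2: send pointers. Taking the address makes the struct
        // escape to the heap, adding allocation work and GC pressure.
        pch := make(chan *item, n)
        start = time.Now()
        for i := 0; i < n; i++ {
            p := &item{a: 3, b: 5}
            pch <- p
        }
        fmt.Println("pointers:", time.Since(start))
    }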

Update: fixed the missing package statement.

2 answers:

Answer 0: (score: 9)

As a value, it can be stack allocated:

go run -gcflags '-m' tmp.go
# command-line-arguments
./tmp.go:18: inlining call to time.Time.Nanosecond
./tmp.go:24: inlining call to time.Time.Nanosecond
./tmp.go:25: t2 escapes to heap
./tmp.go:25: main ... argument does not escape
63613

As a pointer, it escapes to the heap:

go run -gcflags '-m' tmp.go
# command-line-arguments
./tmp.go:18: inlining call to time.Time.Nanosecond
./tmp.go:24: inlining call to time.Time.Nanosecond
./tmp.go:21: &b escapes to heap <-- Additional GC pressure
./tmp.go:20: moved to heap: b   <-- 
./tmp.go:25: t2 escapes to heap
./tmp.go:25: main ... argument does not escape
122513

Moving to the heap introduces some overhead / GC pressure.

Looking at the assembly, the pointer version also introduces additional instructions, including:

go run -gcflags '-S' tmp.go
0x0055 00085 (...tmp.go:18) CALL    runtime.newobject(SB)

The non-pointer variable does not incur this overhead before the call to runtime.chansend1.

Answer 1: (score: 1)

As an addition to the good analysis of Martin Gallagher, it must be added that the way you are measuring is questionable. The performance of such tiny programs varies a lot, so measurements should be repeated many times. There are also some mistakes in your example.

First: it does not compile, because the package statement is missing.

Second: there is an important difference between Nanoseconds and Nanosecond.
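For illustration, that difference can be seen in a few lines (the sleep below is just a stand-in for the work being measured):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(10 * time.Millisecond) // stand-in for the measured work

        // time.Time.Nanosecond returns only the nanosecond offset within the
        // current second (0..999999999); it is not an elapsed time.
        fmt.Println(start.Nanosecond())

        // time.Duration.Nanoseconds returns the whole elapsed duration in
        // nanoseconds, which is what a timing measurement needs.
        fmt.Println(time.Since(start).Nanoseconds())
    }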

I tried to evaluate your observation this way *

package main

import (
    "fmt"
    "time"
)

const (
    chan_size   = 1000
    cycle_count = 1000
)

var (
    v_ch = make(chan t, chan_size)  // buffered channel of values
    p_ch = make(chan *t, chan_size) // buffered channel of pointers
)

type t struct {
    a uint
    b uint
}

// fill_v sends chan_size values into v_ch.
func fill_v() {
    for i := 0; i < chan_size; i++ {
        b := t{a: 3, b: 5}
        v_ch <- b
    }
}

// fill_p sends chan_size pointers into p_ch.
func fill_p() {
    for i := 0; i < chan_size; i++ {
        b := t{a: 3, b: 5}
        p_ch <- &b
    }
}

// measure_f returns the wall-clock time of a single call to f in nanoseconds.
func measure_f(f func()) int64 {
    start := time.Now()
    f()
    elapsed := time.Since(start)
    return elapsed.Nanoseconds()
}

func main() {

    var v_nanos int64 = 0
    var p_nanos int64 = 0
    for i := 0; i < cycle_count; i++ {
        v_nanos += measure_f(fill_v)
        for j := 0; j < chan_size; j++ {
            _ = <-v_ch // drain the channel before the next cycle
        }
    }
    for i := 0; i < cycle_count; i++ {
        p_nanos += measure_f(fill_p)
        for j := 0; j < chan_size; j++ {
            _ = <-p_ch // drain the channel before the next cycle
        }
    }
    fmt.Println(
        "v:", v_nanos/cycle_count,
        " p:", p_nanos/cycle_count,
        "ratio (v/p):", float64(v_nanos)/float64(p_nanos))
}

There is indeed a measurable performance drop (I define the drop as drop = 1 - (candidate/optimum)), but when I repeated the code 1000 times it varied between 25% and 50%, and I'm not even sure how and when the heap gets reclaimed, so it may be hard to quantify.

* See the demo ("run") on ideone

... note that stdout is frozen: v: 34875  p: 59420  ratio (v/p): 0.586923845267128

For some reason, this code cannot be run in the Go Playground.
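As noted above, timing such tiny loops by hand is noisy. A more stable way to repeat the measurement is the standard testing harness, which picks the iteration count itself and can report allocations. A minimal sketch, placed in a separate _test.go file (the type and benchmark names are illustrative, not from the original post):

    package main

    import "testing"

    type payload struct {
        a uint
        b uint
    }

    // Run with: go test -bench=. -benchmem
    func BenchmarkSendValue(b *testing.B) {
        ch := make(chan payload, 1)
        for i := 0; i < b.N; i++ {
            ch <- payload{a: 3, b: 5} // the value is copied into the channel buffer
            <-ch
        }
    }

    func BenchmarkSendPointer(b *testing.B) {
        ch := make(chan *payload, 1)
        for i := 0; i < b.N; i++ {
            v := payload{a: 3, b: 5}
            ch <- &v // v escapes to the heap
            <-ch
        }
    }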