Cloud Dataflow parallelism issue with the Go SDK

Date: 2019-03-11 01:02:03

Tags: go google-cloud-dataflow apache-beam

I have an Apache Beam pipeline implemented with the Go SDK, shown below. The pipeline has three steps: one is textio.Read, the next is CountLines, and the last is ProcessLines. The ProcessLines step takes about 10 seconds per element; for brevity, I have replaced the real work with a sleep.

I run the job with 20 workers. I expected the 20 workers to run in parallel: textio.Read would read 20 lines from the file, and ProcessLines would perform 20 executions in parallel, each taking 10 seconds. Instead, the pipeline works sequentially: textio.Read reads one line from the file, pushes it to the next step, and waits until ProcessLines finishes its 10 seconds of work. There is no parallelism; only a single line from the file is in flight in the whole pipeline at any time. Could you clarify what I am doing wrong with the parallelism, and how I should update the code to achieve the parallelism described above?

package main

import (
    "context"
    "flag"
    "time"

    "github.com/apache/beam/sdks/go/pkg/beam"
    "github.com/apache/beam/sdks/go/pkg/beam/io/textio"
    "github.com/apache/beam/sdks/go/pkg/beam/log"
    "github.com/apache/beam/sdks/go/pkg/beam/x/beamx"
)

// command-line flags and metrics to be monitored
var (
    input         = flag.String("input", "", "Input file (required).")
    numberOfLines = beam.NewCounter("extract", "numberOfLines")
    lineLen       = beam.NewDistribution("extract", "lineLenDistro")
)

func countLines(ctx context.Context, line string) string {
    lineLen.Update(ctx, int64(len(line)))
    numberOfLines.Inc(ctx, 1)

    return line
}

func processLines(ctx context.Context, line string) {
    time.Sleep(10 * time.Second)
}

func CountLines(s beam.Scope, lines beam.PCollection) beam.PCollection {
    s = s.Scope("Count Lines")

    return beam.ParDo(s, countLines, lines)
}

func ProcessLines(s beam.Scope, lines beam.PCollection) {
    s = s.Scope("Process Lines")

    beam.ParDo0(s, processLines, lines)
}

func main() {
    // If beamx or Go flags are used, flags must be parsed first.
    flag.Parse()
    // beam.Init() is an initialization hook that must be called on startup. On
    // distributed runners, it is used to intercept control.
    beam.Init()

    // Input validation is done as usual. Note that it must be after Init().
    if *input == "" {
        log.Fatal(context.Background(), "No input file provided")
    }

    p := beam.NewPipeline()
    s := p.Root()

    l := textio.Read(s, *input)
    lines := CountLines(s, l)
    ProcessLines(s, lines)

    // Concept #1: The beamx.Run convenience wrapper allows a number of
    // pre-defined runners to be used via the --runner flag.
    if err := beamx.Run(context.Background(), p); err != nil {
        log.Fatalf(context.Background(), "Failed to execute job: %v", err.Error())
    }
}

EDIT:

在获得关于问题的答案可能是由融合引起的答案后,我更改了代码的相关部分,但此代码不再起作用。

现在,第一步和第二步并行运行,但是第三步ProcessLines不能并行运行。我只做了以下更改。谁能告诉我问题出在哪里?

func AddRandomKey(s beam.Scope, col beam.PCollection) beam.PCollection {
    return beam.ParDo(s, addRandomKeyFn, col)
}

func addRandomKeyFn(elm beam.T) (int, beam.T) {
    return rand.Int(), elm
}

func countLines(ctx context.Context, _ int, lines func(*string) bool, emit func(string)) {
    var line string
    for lines(&line) {
        lineLen.Update(ctx, int64(len(line)))
        numberOfLines.Inc(ctx, 1)
        emit(line)
    }
}

func processLines(ctx context.Context, _ int, lines func(*string) bool) {
    var line string
    for lines(&line) {
        time.Sleep(10 * time.Second)
        numberOfLinesProcess.Inc(ctx, 1)
    }
}

func CountLines(s beam.Scope, lines beam.PCollection) beam.PCollection {
    s = s.Scope("Count Lines")
    keyed := AddRandomKey(s, lines)
    grouped := beam.GroupByKey(s, keyed)

    return beam.ParDo(s, countLines, grouped)
}

func ProcessLines(s beam.Scope, lines beam.PCollection) {
    s = s.Scope("Process Lines")
    keyed := AddRandomKey(s, lines)
    grouped := beam.GroupByKey(s, keyed)

    beam.ParDo0(s, processLines, grouped)
}

1 answer:

Answer 0 (score: 0)

Many sophisticated runners of MapReduce-style pipelines fuse stages that can be run together in memory. Apache Beam and Dataflow are no exception.

What is happening here is that the three steps of your pipeline are fused and run on the same machine. Furthermore, the Go SDK does not currently support splitting the Read transform, unfortunately.

To achieve parallelism in the third transform, you can break the fusion between Read and ProcessLines. To do that, you can add a random key to the lines and apply a GroupByKey transform.

In Python, it would be:

(p | beam.io.ReadFromText(...)
   | CountLines()
   | beam.Map(lambda x: (random.randint(0, 1000), x))
   | beam.GroupByKey()
   | beam.FlatMap(lambda kv: kv[1])  # Discard the key, and return the values
   | ProcessLines())

This will allow you to parallelize ProcessLines.