Running Akka Streams stages in parallel dramatically increases memory pressure

Date: 2017-01-20 12:46:46

Tags: parallel-processing akka akka-stream memory-pressure akka-actor

I am trying to implement an Akka Stream that reads frames from a video file and applies an SVM classifier to detect objects in each frame. The detection can run in parallel because the order of the video frames does not matter. My idea is to build a graph following the Akka Streams Cookbook (Balancing jobs to a fixed pool of workers) with two detection stages marked with .async.
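
For reference, the generic pattern from that cookbook entry looks roughly like this (a simplified sketch, not the exact cookbook code; the worker Flow is passed in generically):

import akka.NotUsed
import akka.stream.FlowShape
import akka.stream.scaladsl.{Balance, Flow, GraphDSL, Merge}

def balancer[In, Out](worker: Flow[In, Out, Any], workerCount: Int): Flow[In, Out, NotUsed] =
  Flow.fromGraph(GraphDSL.create() { implicit builder =>
    import GraphDSL.Implicits._

    // fan out to workerCount parallel workers, each on its own async island,
    // then merge the results back into a single stream
    val balance = builder.add(Balance[In](workerCount, waitForAllDownstreams = true))
    val merge   = builder.add(Merge[Out](workerCount))

    for (_ <- 1 to workerCount)
      balance ~> worker.async ~> merge

    FlowShape(balance.in, merge.out)
  })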

It works as expected to a certain degree, but I noticed that the memory pressure on my system (only 8 GB available) increases dramatically and noticeably slows down the system even outside of the graph. Comparing this with a different approach that uses .mapAsync (Akka Docs) to integrate three actors into the stream, which carry out the object detection, the memory pressure is significantly lower.

What am I missing? Why does running two stages in parallel increase memory pressure so much, while three actors running in parallel seem to work fine?

Additional remarks: I am reading the video file with OpenCV. Because of the 4K resolution, every video frame of type Mat is about 26.5 MB in size.
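
That figure is consistent with DCI 4K frames stored as 8-bit, 3-channel BGR Mats; the exact pixel format is my assumption:

// Assuming DCI 4K (4096 × 2160) and 3 bytes per pixel (8-bit BGR):
val bytesPerFrame = 4096L * 2160L * 3L // 26,542,080 bytes ≈ 26.5 MB

So every frame held in a buffer somewhere costs roughly 26.5 MB.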

Running two stages in parallel with .async dramatically increases memory pressure

implicit val materializer = ActorMaterializer(
  ActorMaterializerSettings(actorSystem)
    .withInputBuffer(initialSize = 1, maxSize = 1) // at most one element buffered per stage
    .withOutputBurstLimit(1)                       // emit at most one element per burst
    .withSyncProcessingLimit(2)                    // cap on synchronous processing steps before yielding
)

val greyscaleConversion: Flow[Frame, Frame, NotUsed] =
  Flow[Frame].map { el => Frame(el.videoPos, FrameTransformation.transformToGreyscale(el.frame)) }

val objectDetection: Flow[Frame, DetectedObjectPos, NotUsed] =
  Flow.fromGraph(GraphDSL.create() { implicit builder =>
    import GraphDSL.Implicits._

    val numberOfDetectors = 2
    val frameBalance: UniformFanOutShape[Frame, Frame] = builder.add(Balance[Frame](numberOfDetectors, waitForAllDownstreams = true))
    val detectionMerge: UniformFanInShape[DetectedObjectPos, DetectedObjectPos] = builder.add(Merge[DetectedObjectPos](numberOfDetectors))

    for (i <- 0 until numberOfDetectors) {
      val detectionFlow: Flow[Frame, DetectedObjectPos, NotUsed] = Flow[Frame].map { greyFrame =>
        // note: a new CascadeClassifier is instantiated and loaded for every single frame
        val classifier = new CascadeClassifier()
        classifier.load("classifier.xml")
        val detectedObjects: MatOfRect = new MatOfRect()
        classifier.detectMultiScale(greyFrame.frame, detectedObjects, 1.08, 5, 0 | Objdetect.CASCADE_SCALE_IMAGE, new Size(40, 20), new Size(100, 80))
        DetectedObjectPos(greyFrame.videoPos, detectedObjects)
      }

      frameBalance.out(i) ~> detectionFlow.async ~> detectionMerge.in(i)
    }

    FlowShape(frameBalance.in, detectionMerge.out)
  })

def createGraph(videoFile: Video): RunnableGraph[NotUsed] = {
  Source.fromGraph(new VideoSource(videoFile))
    .via(greyscaleConversion).async
    .via(objectDetection)
    .to(Sink.foreach(detectionDisplayActor ! _))
}
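
For completeness, the graph is then materialized with run(), using the implicit materializer defined above (videoFile standing in for whichever Video is being processed):

createGraph(videoFile).run()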

Integrating actors with .mapAsync does not increase memory pressure

val greyscaleConversion: Flow[Frame, Frame, NotUsed] =
  Flow[Frame].map { el => Frame(el.videoPos, FrameTransformation.transformToGreyscale(el.frame)) }

import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._

implicit val timeout: Timeout = Timeout(2.seconds) // required by the ask (?) below; value illustrative
val numberOfDetectors = 3 // three detection actors, as described above

val detectionRouter: ActorRef =
  actorSystem.actorOf(RandomPool(numberOfDetectors).props(Props[DetectionActor]), "detectionRouter")

val detectionFlow: Flow[Frame, DetectedObjectPos, NotUsed] =
  Flow[Frame].mapAsyncUnordered(parallelism = 3)(el => (detectionRouter ? el).mapTo[DetectedObjectPos])

def createGraph(videoFile: Video): RunnableGraph[NotUsed] = {
  Source.fromGraph(new VideoSource(videoFile))
    .via(greyscaleConversion)
    .via(detectionFlow)
    .to(Sink.foreach(detectionDisplayActor ! _))
}
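
DetectionActor itself is not shown here; it is presumably along the lines of the detection logic from the first variant, with the classifier loaded once per actor rather than once per frame (an illustrative sketch, not the actual implementation):

import akka.actor.Actor
import org.opencv.core.{MatOfRect, Size}
import org.opencv.objdetect.{CascadeClassifier, Objdetect}

// Hypothetical pooled detection actor; mirrors the first variant's detection
// call, but creates and loads the classifier only once per actor instance.
class DetectionActor extends Actor {
  private val classifier = new CascadeClassifier()
  classifier.load("classifier.xml")

  override def receive: Receive = {
    case Frame(videoPos, frame) =>
      val detectedObjects = new MatOfRect()
      classifier.detectMultiScale(frame, detectedObjects, 1.08, 5,
        0 | Objdetect.CASCADE_SCALE_IMAGE, new Size(40, 20), new Size(100, 80))
      sender() ! DetectedObjectPos(videoPos, detectedObjects)
  }
}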

0 Answers:

No answers yet.