akka stream http rate limiting

Date: 2016-11-28 11:26:01

Tags: scala akka akka-stream akka-http

One stage of my computation graph is a flow of type Flow[Seq[Request], Seq[Response], NotUsed]. Obviously, this stage should assign one response to each request and emit the seq once all requests have been resolved.

Now, the underlying API has a strict rate-limiting policy, so I can only fire one request per second. If I had a Flow of single Requests, I could zip that stream with one that emits an element every second (How to limit an Akka Stream to execute and send down one message only once per second?), but I don't see a similar solution in this situation.

Is there a nice way to express this? The idea that comes to mind is to drop down to the low-level Graph DSL, use a once-per-second ticking stream there, and use it to work through the sequence of requests, but I suspect it won't turn out pretty.

2 Answers:

Answer 0 (score: 2)

As Victor said, you should use the built-in throttle. But if you want to do it yourself, it could look like this:

import scala.concurrent.duration.FiniteDuration

import akka.stream.FlowShape
import akka.stream.scaladsl.{Flow, GraphDSL, Source, Zip}

private def throttleFlow[T](rate: FiniteDuration) = Flow.fromGraph(GraphDSL.create() { implicit builder =>
  import GraphDSL.Implicits._

  // a source that emits one tick every `rate`
  val ticker = Source.tick(rate, rate, ())

  val zip = builder.add(Zip[T, Unit])
  // drop the tick and keep only the payload
  val map = Flow[(T, Unit)].map { case (value, _) => value }
  val messageExtractor = builder.add(map)

  ticker ~> zip.in1
  zip.out ~> messageExtractor.in

  FlowShape.of(zip.in0, messageExtractor.out)
})

// And it will be used in your flow as follows
// .via(throttleFlow(FiniteDuration(200, MILLISECONDS)))
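The built-in `throttle` operator mentioned above shapes the stream so that consecutive emissions are spaced at least one interval apart. As a rough, akka-free model of that shaping behavior (the object and method names here are illustrative, not part of the akka API):

```scala
// A pure-Scala model of throttle(1, per, 1, ThrottleMode.Shaping):
// each element is delayed so that consecutive emissions are at least
// `perMs` milliseconds apart.
object ThrottleModel {
  // Given the arrival times of elements (in ms), return their emission
  // times under a "one element per `perMs` ms" shaping policy.
  def emissionTimes(arrivalsMs: Seq[Long], perMs: Long): Seq[Long] =
    arrivalsMs.foldLeft(Vector.empty[Long]) { (emitted, arrival) =>
      val earliest = emitted.lastOption.map(_ + perMs).getOrElse(arrival)
      emitted :+ math.max(arrival, earliest)
    }
}
```

Three elements arriving at once under a one-per-second policy come out at 0, 1000 and 2000 ms, while an element that arrives after the interval has already elapsed is emitted immediately.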

Also, since you are limiting access to some API, you may want to limit calls to it in a centralized way. Suppose several places in your project call the same external API, but the rate limit applies per source IP, so it should hold across all of them. For that case, consider using a MergeHub.source feeding a single (presumably akka-http) flow; each caller creates and materializes a new graph to make a call.
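MergeHub itself is akka-specific, but the centralization idea can be sketched on the plain JVM: all callers submit work to one shared queue, and a single worker drains it at the allowed rate, so the limit holds application-wide. All names here are hypothetical, and the rate-keeping is deliberately simplistic:

```scala
import java.util.concurrent.ConcurrentLinkedQueue

// One shared limiter for the whole application: many producers, one
// rate-limited consumer. `call` stands in for the external API call.
final class CentralLimiter[A, B](perMs: Long)(call: A => B) {
  private val queue = new ConcurrentLinkedQueue[(A, B => Unit)]()

  // submit from any caller; `onDone` is invoked with the response
  def submit(request: A)(onDone: B => Unit): Unit = queue.add((request, onDone))

  // drain everything currently queued, one call per `perMs` interval
  def drain(): Unit = {
    var item = queue.poll()
    while (item != null) {
      val (req, onDone) = item
      onDone(call(req))
      Thread.sleep(perMs) // enforce the gap between consecutive calls
      item = queue.poll()
    }
  }
}
```

With MergeHub the queue, the single consumer, and the back-pressure all come for free from the materialized stream.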

Answer 1 (score: 1)

Here is what I ended up using:

import scala.concurrent.duration._
import scala.util.{Success, Try}

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse, StatusCodes}
import akka.stream.{ActorMaterializer, FlowShape, ThrottleMode, UniformFanOutShape}
import akka.stream.scaladsl._
import akka.stream.scaladsl.MergePreferred.MergePreferredShape

// Carries the original element together with its pending requests and the
// responses accumulated so far.
case class FlowItem[I](i: I, requests: Seq[HttpRequest], responses: Seq[String]) {
  def withResponse(resp: String) = copy(responses = resp +: responses)
  def extractNextRequest = (requests.head, copy(requests = requests.tail))
}


 def apiFlow[I, O](requestPer: FiniteDuration,
                    buildRequests: I => Seq[HttpRequest],
                    buildOut: (I, Seq[String]) => O
                   )(implicit system: ActorSystem, materializer: ActorMaterializer) = {
    GraphDSL.create() { implicit b =>
      import GraphDSL.Implicits._

      val in: FlowShape[I, FlowItem[I]] =
        b.add(Flow[I].map(i => FlowItem(i, buildRequests(i), Seq.empty)))

      val merge: MergePreferredShape[FlowItem[I]] =
        b.add(MergePreferred[FlowItem[I]](1))

      val throttle: FlowShape[FlowItem[I], FlowItem[I]] =
        b.add(Flow[FlowItem[I]].throttle(1, requestPer, 1, ThrottleMode.shaping))

      val prepareRequest: FlowShape[FlowItem[I], (HttpRequest, FlowItem[I])] =
        b.add(Flow[FlowItem[I]].map(_.extractNextRequest))

      val log =
        b.add(Flow[(HttpRequest, FlowItem[I])].map { r => Console.println(s"request to ${r._1.uri}"); r })

      val pool: FlowShape[(HttpRequest, FlowItem[I]), (Try[HttpResponse], FlowItem[I])] =
        b.add(Http(system).superPool[FlowItem[I]]())

      val transformResponse: FlowShape[(Try[HttpResponse], FlowItem[I]), FlowItem[I]] =
        b.add(Flow[(Try[HttpResponse], FlowItem[I])].mapAsync(1) {
          case (Success(HttpResponse(StatusCodes.OK, _, entity, _)), flowItem) =>
            import system.dispatcher // ExecutionContext for mapping the Future
            entity.toStrict(1.second).map(resp => flowItem.withResponse(resp.data.utf8String))
          // Note: non-OK statuses and failed requests are not matched here,
          // so they will fail the stream with a MatchError.
        })

      val split: UniformFanOutShape[FlowItem[I], FlowItem[I]] =
        b.add(Partition[FlowItem[I]](2, fi => if (fi.requests.isEmpty) 0 else 1))


      val out: FlowShape[FlowItem[I], O] =
        b.add(Flow[FlowItem[I]].map(fi => buildOut(fi.i, fi.responses)))

        in ~> merge ~> throttle ~> prepareRequest ~> log ~> pool ~> transformResponse ~> split ~> out
              merge.preferred   <~                                                       split

      FlowShape(in.in, out.out)
    }
  }

The idea is to pass each element through the throttle as many times as it has requests, storing the remaining (not yet executed) requests alongside the message. The split (Partition) stage checks whether there are more requests left.
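The feedback loop through merge and split can be modeled without akka as a simple recursion: an item keeps cycling through the "throttle, request, response" path until its request list is empty, accumulating one response per request (prepended, as in `withResponse` above). `Item` mirrors `FlowItem` with plain strings, and `fakeCall` is a stand-in for the HTTP pool; both names are illustrative:

```scala
// An akka-free model of the merge/split feedback loop.
case class Item[I](i: I, requests: Seq[String], responses: Seq[String])

def runLoop[I](item: Item[I])(fakeCall: String => String): Item[I] =
  if (item.requests.isEmpty) item // Partition port 0: done, flows downstream
  else { // Partition port 1: one request is consumed, item loops back
    val next = item.copy(
      requests = item.requests.tail,
      responses = fakeCall(item.requests.head) +: item.responses)
    runLoop(next)(fakeCall)
  }
```

This makes the termination argument explicit: every pass through the loop shortens `requests` by one, so each element takes exactly `requests.length` trips through the throttle before reaching the output.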