Doing lower-level operations in an Akka Streams HTTP client

Asked: 2017-07-10 15:19:14

Tags: scala akka akka-stream akka-http

In our application we use Akka Streams to issue HTTP requests as streamed data arrives. We started with the simple approach described in the Akka documentation:

class HttpNotificationFlow {

  private val poolClientFlow =
    Http().cachedHostConnectionPool[SomeData](dest.host, dest.port)

  val hostConnectionFlow: Flow[SomeData, Boolean, NotUsed] =
    Flow[SomeData]
      .map { someData =>
        HttpRequest(
          method = HttpMethods.POST,
          entity = HttpEntity(
            ContentTypes.`application/json`,
            TopLevelNotification(someData).toJson.compactPrint)
        ) -> someData
      }
      .via(poolClientFlow)
      .map {
        case (Success(response), _) =>
          if (response.status == StatusCodes.OK) true
          else throw new ResponseNotOkException(response.status)
        case (Failure(ex), _) =>
          logger.error("Can't notify", ex)
          false
      }
}

We then use hostConnectionFlow in our graph (note that the GraphDSL builder block must return ClosedShape for a RunnableGraph):

RunnableGraph.fromGraph(GraphDSL.create() { implicit builder =>
  import GraphDSL.Implicits._

  val sink: Sink[Boolean, Future[_]] = Sink.foreach(res => logger.info(res.toString))
  kafkaReader.someDataSource ~> NotificationFlow.hostConnectionFlow ~> sink
  ClosedShape
}).run()

The problem is that we now need control at two different levels during HTTP operations:

  1. Connection level: we need to shut down the application as soon as we discover during startup that the connection can't be established. This gives our system fail-fast behavior and keeps it from running with a wrong configuration for the destination host. It would be nice to avoid sending fake requests just to trigger connection establishment.
  2. Request level: we need to measure the latency of individual HTTP requests, i.e. the time between issuing a request and receiving its response.

The current implementation using the Host-Level API allows neither:

  1. Connections are established only when the first request enters the stream, and failures to connect do not shut the application down.
  2. For request latency we can measure the time between dispatching a request into the pool and receiving the response from it, but then the additional time spent on pool operations (such as establishing connections) gets logged as request latency.
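The measurement described in point 2 can be sketched by carrying a start timestamp in the pool's per-request context. This is only a sketch, not the poster's code: `Timed`, `timedPoolFlow`, and `toMillis` are hypothetical names, and the block shares the caveat above, since the clock starts when the request is handed to the pool, so queueing and connection setup count toward the measured latency.

```scala
import akka.NotUsed
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.Materializer
import akka.stream.scaladsl.Flow
import scala.util.Try

// Pure helper, split out so the latency arithmetic is testable in isolation.
def toMillis(nanos: Long): Long = nanos / 1000000L

// Context wrapper that carries the original payload plus a start timestamp.
case class Timed[T](payload: T, startNanos: Long)

// Wraps a cached host connection pool so each response comes back together
// with the elapsed time since the request was handed to the pool. Caveat:
// time spent queueing or opening a connection inside the pool is included.
def timedPoolFlow[T](host: String, port: Int)(
    implicit system: ActorSystem, mat: Materializer
): Flow[(HttpRequest, T), (Try[HttpResponse], T, Long), NotUsed] = {
  val pool = Http().cachedHostConnectionPool[Timed[T]](host, port)
  Flow[(HttpRequest, T)]
    .map { case (req, payload) => req -> Timed(payload, System.nanoTime()) }
    .via(pool)
    .map { case (result, Timed(payload, start)) =>
      (result, payload, toMillis(System.nanoTime() - start))
    }
}
```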

So how can we gain control over these lower-level operations while using Akka HTTP inside a stream with a connection pool?

I've read about the Connection-Level API and the Request-Level API but still have no idea how they can help.

With the Connection-Level API it is still unclear how to implement fail-fast behavior, and we clearly lose the pool implementation.
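One option for the fail-fast requirement, sketched below, sidesteps the HTTP Connection-Level API entirely and instead probes the destination once at startup with the plain TCP stream API, leaving the HTTP pool untouched. `failFastProbe` and `destination` are hypothetical names, and the behavior of materializing the connection with an empty source (connect attempt happens, write side closes immediately) is an assumption worth verifying against your Akka version.

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Keep, Sink, Source, Tcp}
import akka.util.ByteString
import scala.concurrent.Future
import scala.util.{Failure, Success}

// Pure helper, split out so it can be tested without a running ActorSystem.
def destination(host: String, port: Int): String = s"$host:$port"

// Sketch: materializing Tcp().outgoingConnection attempts the TCP connect
// right away, so an empty source suffices -- no fake HTTP request is sent.
// If the connect fails, the materialized Future fails and we terminate.
def failFastProbe(host: String, port: Int)(implicit system: ActorSystem): Unit = {
  implicit val mat = ActorMaterializer()
  import system.dispatcher
  val connected: Future[Tcp.OutgoingConnection] =
    Source.empty[ByteString]
      .viaMat(Tcp().outgoingConnection(host, port))(Keep.right)
      .to(Sink.ignore)
      .run()
  connected.onComplete {
    case Success(_) =>
      system.log.info("Startup probe succeeded for {}", destination(host, port))
    case Failure(ex) =>
      system.log.error(ex, "Cannot connect to " + destination(host, port))
      system.terminate() // fail fast: stop the whole application
  }
}
```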

With the Request-Level API we have no control over the connection (the default cached connection pool is used).
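For completeness, per-request timing with the Request-Level API can be sketched by timing the `Future` that `singleRequest` returns. The helper names (`timedSingleRequest`, `elapsedMillis`) are hypothetical, and the same limitation applies: the request still routes through the default cached pool, so pool time is included in the measurement.

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.Materializer
import scala.concurrent.Future

// Pure helper so the arithmetic can be tested on its own.
def elapsedMillis(startNanos: Long, endNanos: Long): Long =
  (endNanos - startNanos) / 1000000L

// Sketch: issue a request via the request-level API and pair the response
// with the wall-clock time the Future took to complete.
def timedSingleRequest(req: HttpRequest)(
    implicit system: ActorSystem, mat: Materializer
): Future[(HttpResponse, Long)] = {
  import system.dispatcher
  val start = System.nanoTime()
  Http().singleRequest(req).map { resp =>
    (resp, elapsedMillis(start, System.nanoTime()))
  }
}
```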

0 answers:

No answers yet