In our application we use Akka Streams to issue HTTP requests upon receiving streamed data. We started with the simple approach described in the Akka HTTP documentation:
class HttpNotificationFlow {
  // Cached pool of connections to the destination host; each request is
  // paired with its SomeData context so the response can be correlated
  private val poolClientFlow =
    Http().cachedHostConnectionPool[SomeData](dest.host, dest.port)

  val hostConnectionFlow: Flow[SomeData, Boolean, NotUsed] = Flow[SomeData]
    .map[(HttpRequest, SomeData)](someData =>
      HttpRequest(
        method = HttpMethods.POST,
        entity = HttpEntity(
          ContentTypes.`application/json`,
          TopLevelNotification(someData).toJson.compactPrint)) -> someData)
    .via(poolClientFlow)
    .map {
      case (Success(response), _) =>
        if (response.status == StatusCodes.OK) true
        else throw new ResponseNotOkException(response.status)
      case (Failure(ex), _) =>
        logger.error("Can't notify", ex); false
    }
}
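The only knobs we have found so far are the pool settings. A minimal sketch of tuning them (the host, port, and values below are just placeholders, and the context type is simplified to Int):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.settings.ConnectionPoolSettings

implicit val system: ActorSystem = ActorSystem()

// Hypothetical pool tuning: this configures the pool as a whole,
// it does not give us per-request control over retries or connections
val poolSettings = ConnectionPoolSettings(system)
  .withMaxConnections(4)
  .withMaxRetries(0) // disable the pool's automatic retries

val tunedPool =
  Http().cachedHostConnectionPool[Int]("example.com", 80, poolSettings)
```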
We then use hostConnectionFlow in our graph:
RunnableGraph.fromGraph(GraphDSL.create() { implicit builder =>
  import GraphDSL.Implicits._

  val sink: Sink[Boolean, Future[_]] = Sink.foreach(res => logger.info(res.toString))
  kafkaReader.someDataSource ~> notificationFlow.hostConnectionFlow ~> sink

  ClosedShape // the GraphDSL.create block must return the graph's shape
}).run()

(where notificationFlow is an instance of HttpNotificationFlow)
The problem is that we now need control at two different levels during HTTP operations, and the current implementation using the Host-Level API doesn't allow this.
So how can we gain control over these lower-level operations while using Akka HTTP inside a stream with a connection pool?
I've read about the Connection-Level API and the Request-Level API, but I still don't see how they can help.
With the Connection-Level API it is still not clear how to implement fail-fast behavior, and we clearly lose the pool implementation.
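As far as I understand, the Connection-Level API gives us something like the following (a sketch, assuming example.com as the target; we would have to manage pooling ourselves):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem()
implicit val materializer: ActorMaterializer = ActorMaterializer()

// One explicit connection; the materialized Future[OutgoingConnection]
// completes (or fails) when the connection is established, which seems
// to be the only "fail-fast" hook available at this level
val connectionFlow: Flow[HttpRequest, HttpResponse, Future[Http.OutgoingConnection]] =
  Http().outgoingConnection("example.com", 80)

val response: Future[HttpResponse] =
  Source.single(HttpRequest(uri = "/"))
    .via(connectionFlow)
    .runWith(Sink.head)
```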
With the Request-Level API we have no control over the connection (the default cached connection pool is used).
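With the Request-Level API, all we would have is something along these lines (a sketch; example.com and the paths are placeholders, and the pool behind singleRequest is configured globally, not per call site):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source

implicit val system: ActorSystem = ActorSystem()
implicit val materializer: ActorMaterializer = ActorMaterializer()

// Each element becomes an independent request routed through the default
// cached connection pool; we cannot pick or control the connection used
Source(List("/a", "/b"))
  .mapAsync(parallelism = 2)(path =>
    Http().singleRequest(HttpRequest(uri = s"http://example.com$path")))
  .runForeach(response => response.discardEntityBytes())
```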