We have a prototype running Esper, but its performance is quite poor. I suspect this is my fault rather than a problem with Esper itself, so I'm looking for help pinpointing where the performance problem lies.

I'm running a single instance of the Esper service with the memory constraints -Xmx6G -Xms1G (I have tried various combinations of these values). It has 4 CPU cores available. During these tests nothing else is running: only Esper, Kafka, and ZooKeeper.

I'm using Akka Streams to stream events into Esper. The service is quite simple: it streams from Kafka and inserts each event into the Esper runtime, which has 3 EPStatements that have been tested and work correctly. A single listener is attached to all 3 statements; it writes matched events back out to Kafka.
Some things I have tried in order to find where the performance problem lies:

Only the 4th item above produced a noticeable performance benefit.
Below is a sample query we run through Esper. It has been tested and works correctly, and I have read the performance-tuning section of the docs; it all looks fine to me. All my queries follow a similar format:
select * from EsperEvent#time(5 minutes)
match_recognize (
  partition by asset_id
  measures A as event1, B as event2, C as event3
  pattern (A Z* B Z* C)
  interval 10 seconds or terminated
  define
    A as A.eventtype = 13 AND A.win_EventID = "4624" AND A.win_LogonType = "3",
    B as B.eventtype = 13 AND B.win_EventID = "4672",
    C as C.eventtype = 13 AND (C.win_EventID = "4697" OR C.win_EventID = "7045")
)
Some code.

Here is my Akka stream:
kafkaConsumer
  .via(parsing)   // Parse the JSON event into a POJO for Esper. Tried without this step too; no performance impact.
  .via(esperFlow) // mapAsync call to sendEvent(...)
  // Throughput is measured here, via Kafka: the rate at which messages are written to the "esper_flow_through" topic.
  .map(rec => new ProducerRecord[Array[Byte], String]("esper_flow_through", Serialization.write(rec)))
  .runWith(sink)
esperFlow (Parallelism defaults to 4):
val esperFlow = Flow[EsperEvent]
  .mapAsync(Parallelism)(event => Future {
    engine.getEPRuntime.sendEvent(event)
    event
  })
The listener:
override def update(newEvents: Array[EventBean], oldEvents: Array[EventBean], statement: EPStatement, epServiceProvider: EPServiceProvider): Unit = Future {
  logger.info(s"Received Listener updates: Query Name: ${statement.getName} ---- ${newEvents.map(_.getUnderlying)}, $oldEvents")
  // Bump a per-statement match counter.
  statement.getName match {
    case "SERVICE_INSTALL" => serviceInstall.increment(newEvents.length)
    case "ADMIN_GROUP" => adminGroup.increment(newEvents.length)
    case "SMB_SHARE" => smbShare.increment(newEvents.length)
  }
  // Forward every match to the output topic.
  newEvents.map(_.getUnderlying.toString).toList
    .foreach { queryMatch =>
      val record: ProducerRecord[Array[Byte], String] = new ProducerRecord[Array[Byte], String]("esper_output", queryMatch)
      producer.send(record)
    }
}
Performance observations

That rate seems very low, so I assume I'm missing something in the Esper configuration here?

Our target throughput is around 10k events per second. We are still a long way off, and a POC in Spark is getting much closer to that target.
Update:

Following @user650839's comment, I was able to raise throughput to a steady 1k events per second. These two queries produce the same throughput:
select * from EsperEvent(eventtype = 13 and win_EventID in ("4624", "4672", "4697", "7045"))#time(5 minutes)
match_recognize (
  partition by asset_id
  measures A as event1, B as event2, C as event3
  pattern (A B C)
  interval 10 seconds or terminated
  define
    A as A.eventtype = 13 AND A.win_EventID = "4624" AND A.win_LogonType = "3",
    B as B.eventtype = 13 AND B.win_EventID = "4672",
    C as C.eventtype = 13 AND (C.win_EventID = "4697" OR C.win_EventID = "7045"))
create context NetworkLogonThenInstallationOfANewService
  start EsperEvent(eventtype = 13 AND win_EventID = "4624" AND win_LogonType = "3")
  end pattern [
    b=EsperEvent(eventtype = 13 AND win_EventID = "4672") ->
    c=EsperEvent(eventtype = 13 AND (win_EventID = "4697" OR win_EventID = "7045"))
    where timer:within(5 minutes)
  ]

context NetworkLogonThenInstallationOfANewService select * from EsperEvent output when terminated
But 1k events per second is still too slow for our needs.
Answer 0 (score: 1)
The match-recognize definition is not correct. An A event or B event or C event can also be a Z event, because any event matches Z (Z is left undefined). There is therefore a huge number of possible combinations. I think that with just 4 arriving events there are already 1*2*3*4 combinations for match-recognize to keep track of! Match-recognize tracks all possible combinations, and when things match it sorts and ranks the combinations and outputs all/any/some of them. Match-recognize may not be a good choice here; alternatively, define Z as something that does not also match A/B/C.
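For example (a sketch along those lines; the Z condition below is my own, assuming the same event fields as the query above), Z can be pinned down so that it cannot also match A, B, or C:

select * from EsperEvent#time(5 minutes)
match_recognize (
  partition by asset_id
  measures A as event1, B as event2, C as event3
  pattern (A Z* B Z* C)
  interval 10 seconds or terminated
  define
    A as A.eventtype = 13 AND A.win_EventID = "4624" AND A.win_LogonType = "3",
    B as B.eventtype = 13 AND B.win_EventID = "4672",
    C as C.eventtype = 13 AND (C.win_EventID = "4697" OR C.win_EventID = "7045"),
    // Z now matches only events that A/B/C cannot match
    Z as NOT (Z.win_EventID in ("4624", "4672", "4697", "7045"))
)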
Instead of match-recognize, I would use a context that initiates on the A event and terminates on the C event, together with "output when terminated".
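Taken literally, that could look like the sketch below (my own rendering of the suggestion; the context and statement names are made up, and the nested context is my assumption so that windows are still tracked per asset_id):

create context PerAssetLogonWindow
  context ByAsset partition by asset_id from EsperEvent,
  context LogonToService
    start EsperEvent(eventtype = 13 and win_EventID = "4624" and win_LogonType = "3")
    end EsperEvent(eventtype = 13 and (win_EventID = "4697" or win_EventID = "7045"))

context PerAssetLogonWindow select * from EsperEvent output when terminated

A context like this only has to track whether a window is currently open for each asset, rather than every candidate combination of events, which is what makes it cheaper than match-recognize here. The end condition can be tightened to require B before C with an end pattern, as in the update above.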
Also, with the way you have designed the query, the time window retains all events. You can do much better:
select * from EsperEvent(eventtype = 13 and win_EventID in ("4624", "4672", "4697", "7045"))#time(5 minutes)
match_recognize (
  .........
  define
    A as A.win_EventID = "4624" AND A.win_LogonType = "3",
    B as B.win_EventID = "4672",
    C as C.win_EventID = "4697" OR C.win_EventID = "7045"
)
Note that the filter EsperEvent(eventtype=13 ....) discards unwanted events before they ever enter the time window. There is a performance tip in the documentation about removing unneeded events with filter criteria.
Edit: One mistake is measuring IO throughput and Esper throughput as a single metric. Remove the IO: test Esper on its own, using the Esper API and data generated in code. Once you are comfortable with that number, add the IO back in.
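A minimal sketch of such an IO-free harness, assuming the Esper 7.x client API used elsewhere in the question and a map-based event type as a stand-in for the EsperEvent POJO (all names here are illustrative):

import com.espertech.esper.client.{Configuration, EPServiceProviderManager}

object EsperOnlyBench extends App {
  // Declare the event type as a map so the sketch is self-contained;
  // swap in the real EsperEvent POJO class for an accurate test.
  val props = new java.util.HashMap[String, AnyRef]()
  props.put("asset_id", classOf[String])
  props.put("eventtype", classOf[Integer])
  props.put("win_EventID", classOf[String])
  props.put("win_LogonType", classOf[String])

  val config = new Configuration()
  config.addEventType("EsperEvent", props)
  val engine = EPServiceProviderManager.getDefaultProvider(config)

  // Register one of the statements under test.
  engine.getEPAdministrator.createEPL(
    """select * from EsperEvent(eventtype = 13 and win_EventID in ("4624", "4672", "4697", "7045"))#time(5 minutes)""")

  // Generate events in code: no Kafka, no JSON parsing, no producer.
  val n = 1000000
  val start = System.nanoTime()
  for (i <- 0 until n) {
    val event = new java.util.HashMap[String, AnyRef]()
    event.put("asset_id", s"asset-${i % 100}")
    event.put("eventtype", Int.box(13))
    event.put("win_EventID", "4624")
    event.put("win_LogonType", "3")
    engine.getEPRuntime.sendEvent(event, "EsperEvent")
  }
  val perSec = n / ((System.nanoTime() - start) / 1e9)
  println(f"$perSec%.0f events/sec through Esper alone")
}

If this loop alone comfortably exceeds the target rate, the bottleneck is in the Kafka/JSON path rather than in Esper.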