gRPC low level telemetry logging

Time: 2019-04-03 19:23:14

Tags: java python grpc grpc-java grpc-python

I'm trying to measure the latency of my service at a lower level. Poking around, I saw that it is possible to add a StreamTracerFactory to the gRPC server builder via addStreamTracerFactory.

I've done a simple implementation like this and printed the logs:

val server = io.grpc.netty.NettyServerBuilder.forPort(ApplicationConfig.Service.bindPort).addStreamTracerFactory(ServerStreamTracerFactory)....

class Telemetry(fullMethodName: String, headers: Metadata) extends ServerStreamTracer with LazyLogging {
  override def serverCallStarted(callInfo: ServerStreamTracer.ServerCallInfo[_, _]): Unit = {
    logger.info(s"Telemetry '$fullMethodName' '$headers' callinfo:$callInfo")
    super.serverCallStarted(callInfo)
  }

  override def inboundMessage(seqNo: Int): Unit = {
    logger.info(s"inboundMessage $seqNo")
    super.inboundMessage(seqNo)
  }

  override def inboundMessageRead(seqNo: Int, optionalWireSize: Long, optionalUncompressedSize: Long): Unit = {
    logger.info(s"inboundMessageRead $seqNo $optionalWireSize $optionalUncompressedSize")
    super.inboundMessageRead(seqNo, optionalWireSize, optionalUncompressedSize)
  }

  override def outboundMessage(seqNo: Int): Unit = {
    logger.info(s"outboundMessage $seqNo")
    super.outboundMessage(seqNo)
  }

  override def outboundMessageSent(seqNo: Int, optionalWireSize: Long, optionalUncompressedSize: Long): Unit = {
    logger.info(s"outboundMessageSent $seqNo $optionalWireSize $optionalUncompressedSize")
    super.outboundMessageSent(seqNo, optionalWireSize, optionalUncompressedSize)
  }

  override def streamClosed(status: Status): Unit = {
    logger.info(s"streamClosed $status")
    super.streamClosed(status)
  }
}

object ServerStreamTracerFactory extends Factory with LazyLogging {
  logger.info("called")

  override def newServerStreamTracer(fullMethodName: String, headers: Metadata): ServerStreamTracer = {
    logger.info(s"called with $fullMethodName $headers")
    new Telemetry(fullMethodName, headers)
  }
}

I'm running a simple gRPC client in a loop and examining the output of the server stream tracer.

I see that the "lifecycle" of the logs repeats itself. Here it is for one iteration (but it spews out exactly the same thing again and again):

22:15:06 INFO [grpc-default-worker-ELG-3-2] [newServerStreamTracer:38] [ServerStreamTracerFactory$] called with com.dy.affinity.service.AffinityService/getAffinities Metadata(content-type=application/grpc,user-agent=grpc-python/1.15.0 grpc-c/6.0.0 (osx; chttp2; glider),grpc-accept-encoding=identity,deflate,gzip,accept-encoding=identity,gzip)
22:15:06 INFO [grpc-default-executor-0] [serverCallStarted:8] [Telemetry] Telemetry 'com.dy.affinity.service.AffinityService/getAffinities' 'Metadata(content-type=application/grpc,user-agent=grpc-python/1.15.0 grpc-c/6.0.0 (osx; chttp2; glider),grpc-accept-encoding=identity,deflate,gzip,accept-encoding=identity,gzip)' callinfo:io.grpc.internal.ServerCallInfoImpl@5badffd8
22:15:06 INFO [grpc-default-worker-ELG-3-2] [inboundMessage:13] [Telemetry] inboundMessage 0
22:15:06 INFO [grpc-default-worker-ELG-3-2] [inboundMessageRead:17] [Telemetry] inboundMessageRead 0 19 -1
22:15:06 INFO [pool-1-thread-5] [outboundMessage:21] [Telemetry] outboundMessage 0
22:15:06 INFO [pool-1-thread-5] [outboundMessageSent:25] [Telemetry] outboundMessageSent 0 0 0
22:15:06 INFO [grpc-default-worker-ELG-3-2] [streamClosed:29] [Telemetry] streamClosed Status{code=OK, description=null, cause=null}

Just by looking at these logs, there are some things I don't quite understand:

  1. Why is a new stream created for every request? I thought the gRPC client was supposed to re-use the connection. "streamClosed" should not be called, right?
  2. If the stream is being re-used, how come I see that the inboundMessage (and outboundMessage) number is always "0"? (Also, when I start multiple clients in parallel, it is still always 0.) In what case should the message number not be 0?
  3. If the stream is not being re-used, how should I configure the client differently to re-use the connection?

1 Answer:

Answer 0 (score: 0):

In gRPC, one HTTP2 stream is created for each RPC (and if retry or hedging is enabled, there can be more than one stream per RPC). HTTP2 streams are multiplexed on a single connection, and opening and closing streams is very cheap. So it is the connection that is being re-used, not the stream.
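This also answers question 3: as long as the client creates one channel and keeps calling through it, the connection is already being re-used; each call just opens a new, cheap HTTP2 stream on that connection, which is why the tracer logs a fresh stream and a streamClosed for every request. Below is a minimal grpc-python sketch of that pattern. The generated module names (affinity_pb2, affinity_pb2_grpc), the request message name, and the server address are assumptions for illustration, since the .proto and the client code are not shown in the question.

import grpc

# Hypothetical modules generated by protoc from the AffinityService .proto;
# the real names depend on the proto file name and package.
import affinity_pb2
import affinity_pb2_grpc

# One channel, created once and re-used: the underlying TCP/HTTP2
# connection is shared by all calls made through it.
channel = grpc.insecure_channel("localhost:50051")
stub = affinity_pb2_grpc.AffinityServiceStub(channel)

# Every iteration re-uses the same connection; each call is only a new
# HTTP2 stream, so the server-side tracer will log one stream lifecycle
# per request even though nothing is being reconnected.
for _ in range(100):
    response = stub.getAffinities(affinity_pb2.GetAffinitiesRequest())

channel.close()

The expensive anti-pattern would be creating a new channel per request; re-creating the stub per call is harmless, but the channel should live for the lifetime of the client.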

The seqNo you get from the tracer methods is the sequence number of the messages on that particular stream, and it starts from 0. It looks like you are doing a unary RPC, which sends one request and gets one response, and then the stream is closed. What you are seeing is completely normal.
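As for when the message number would not be 0: only streaming RPCs put more than one message on a stream, so a unary call like getAffinities will always report seqNo 0. The sketch below assumes a hypothetical client-streaming method (streamAffinities) and request message, invented purely for illustration; it is not part of the service in the question.

import grpc

# Hypothetical generated modules, as in the previous sketch.
import affinity_pb2
import affinity_pb2_grpc

channel = grpc.insecure_channel("localhost:50051")
stub = affinity_pb2_grpc.AffinityServiceStub(channel)

def request_iterator():
    # Three request messages sent on the same HTTP2 stream. The server-side
    # tracer would then report inboundMessage/inboundMessageRead with
    # seqNo 0, 1 and 2 before streamClosed, instead of always 0.
    for i in range(3):
        yield affinity_pb2.AffinityRequest(user_id=str(i))

# Hypothetical client-streaming rpc: a stream of requests, one response.
response = stub.streamAffinities(request_iterator())

channel.close()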