What is AWSRequestMetricsFullSupport, and how do I turn it off?

Date: 2015-09-04 20:52:31

Tags: amazon-web-services amazon-s3 apache-spark amazon-emr

I am trying to save some data from a Spark DataFrame to an S3 bucket. It is straightforward:

dataframe.saveAsParquetFile("s3://kirk/my_file.parquet")

The data is saved successfully, but the UI stays busy for a long time. I get thousands of lines like these:

2015-09-04 20:48:19,591 INFO  [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[200], ServiceName=[Amazon S3], AWSRequestID=[5C3211750F4FF5AB], ServiceEndpoint=[https://kirk.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[63.827], HttpRequestTime=[62.919], HttpClientReceiveResponseTime=[61.678], RequestSigningTime=[0.05], ResponseProcessingTime=[0.812], HttpClientSendRequestTime=[0.038],
2015-09-04 20:48:19,610 INFO  [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[204], ServiceName=[Amazon S3], AWSRequestID=[709DA41540539FE0], ServiceEndpoint=[https://kirk.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[18.064], HttpRequestTime=[17.959], HttpClientReceiveResponseTime=[16.703], RequestSigningTime=[0.06], ResponseProcessingTime=[0.003], HttpClientSendRequestTime=[0.046],
2015-09-04 20:48:19,664 INFO  [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[204], ServiceName=[Amazon S3], AWSRequestID=[1B1EB812E7982C7A], ServiceEndpoint=[https://kirk.s3.amazonaws.com], HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[54.36], HttpRequestTime=[54.26], HttpClientReceiveResponseTime=[53.006], RequestSigningTime=[0.057], ResponseProcessingTime=[0.002], HttpClientSendRequestTime=[0.034],
2015-09-04 20:48:19,675 INFO  [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[404], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: AF6F960F3B2BF3AB), S3 Extended Request ID: CLs9xY8HAxbEAKEJC4LS1SgpqDcnHeaGocAbdsmYKwGttS64oVjFXJOe314vmb9q], ServiceName=[Amazon S3], AWSErrorCode=[404 Not Found], AWSRequestID=[AF6F960F3B2BF3AB], ServiceEndpoint=[https://kirk.s3.amazonaws.com], Exception=1, HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[10.111], HttpRequestTime=[10.009], HttpClientReceiveResponseTime=[8.758], RequestSigningTime=[0.043], HttpClientSendRequestTime=[0.044],
2015-09-04 20:48:19,685 INFO  [main] amazonaws.latency (AWSRequestMetricsFullSupport.java:log(203)) - StatusCode=[404], Exception=[com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: F2198ACEB4B2CE72), S3 Extended Request ID: J9oWD8ncn6WgfUhHA1yqrBfzFC+N533oD/DK90eiSvQrpGH4OJUc3riG2R4oS1NU], ServiceName=[Amazon S3], AWSErrorCode=[404 Not Found], AWSRequestID=[F2198ACEB4B2CE72], ServiceEndpoint=[https://kirk.s3.amazonaws.com], Exception=1, HttpClientPoolLeasedCount=0, RequestCount=1, HttpClientPoolPendingCount=0, HttpClientPoolAvailableCount=1, ClientExecuteTime=[9.879], HttpRequestTime=[9.776], HttpClientReceiveResponseTime=[8.537], RequestSigningTime=[0.05], HttpClientSendRequestTime=[0.033],

I can understand that some users might want to log the latency of their S3 operations, but is there a way to disable all of the monitoring and logging done by AWSRequestMetricsFullSupport?

When I check the Spark UI, it tells me the job completed relatively quickly, but the console keeps filling up with these messages for a long time afterwards.

3 answers:

Answer 0 (score: 1)

The respective AWS SDK for Java source comment reads:

/**
 * Start an event which will be timed. [...]
 * 
 * This feature is enabled if the system property
 * "com.amazonaws.sdk.enableRuntimeProfiling" is set, or if a
 * {@link RequestMetricCollector} is in use either at the request, web service
 * client, or AWS SDK level.
 * 
 * @param eventName
 *            - The name of the event to start
 * 
 * @see AwsSdkMetrics
 */

As further outlined in the referenced AwsSdkMetrics Java docs, you can control this via a system property (collection is disabled by default):

"The default metric collection of the Java AWS SDK is disabled by default. To enable it, simply specify the system property "com.amazonaws.sdk.enableDefaultMetrics" when starting up the JVM. When the system property is specified, a default metric collector will be started at the AWS SDK level. The default implementation uploads the request/response metrics captured to Amazon CloudWatch using AWS credentials obtained via the DefaultAWSCredentialsProviderChain."
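
For illustration, this is how that property would be passed to the Spark driver JVM; a minimal sketch assuming spark-submit and a hypothetical job jar. Note that the flag enables collection, so for silence make sure nothing in your launch options sets it:

# The flag below ENABLES SDK-level metric collection; omit it (and verify
# nothing else sets it) to keep the default collector switched off.
spark-submit \
  --driver-java-options "-Dcom.amazonaws.sdk.enableDefaultMetrics" \
  my_job.jar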

This default can apparently be overridden by a hard-wired RequestMetricCollector at the request, web service client, or AWS SDK level, which in turn might require configuration adjustments in the client/framework at hand (e.g. Spark):

"Clients who need to fully customize the metric collection can implement the SPI MetricCollector, and then replace the default AWS SDK implementation of the collector via setMetricCollector(MetricCollector)."
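
For example, here is a sketch of replacing the registered collector with the SDK's no-op instance. This assumes the AWS SDK for Java 1.x, where MetricCollector.NONE is the built-in do-nothing collector; whether this sticks depends on what the surrounding framework wires in afterwards:

import com.amazonaws.metrics.AwsSdkMetrics;
import com.amazonaws.metrics.MetricCollector;

public class DisableSdkMetrics {
    public static void main(String[] args) {
        // Swap whatever collector is registered at the AWS SDK level
        // for the no-op implementation, so no request metrics are gathered.
        AwsSdkMetrics.setMetricCollector(MetricCollector.NONE);
    }
}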

So far, the documentation for these features seems somewhat sparse; I know of two related blog posts on the topic.

Answer 1 (score: 1)

Silencing these logs on release-label EMR proved to be a real challenge. There was "an issue with Spark Log4j-based logging in YARN containers" which was fixed in release emr-4.7.2. A working solution is to add this JSON to the cluster configuration:

[
  {
    "Classification": "hadoop-log4j",
    "Properties": {
      "log4j.logger.com.amazon.ws.emr.hadoop.fs": "ERROR",
      "log4j.logger.com.amazonaws.latency": "ERROR"
    },
    "Configurations": []
  }
]
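
These classifications take effect at cluster creation time; for instance, a sketch assuming the AWS CLI, with the JSON above saved as configurations.json (release label, instance type, and count are placeholders):

aws emr create-cluster \
  --release-label emr-4.7.2 \
  --applications Name=Spark \
  --configurations file://./configurations.json \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --use-default-roles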

And on releases before emr-4.7.2, also this JSON, which discards the "buggy" log4j options that Spark uses by default:

[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.extraJavaOptions": "-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=512M -XX:OnOutOfMemoryError='kill -9 %p'"
    },
    "Configurations": []
  }
]

Answer 2 (score: 0)

The best solution I found was to configure the Java logging (i.e. switch it off) by passing a log4j configuration file to the Spark context:

--driver-java-options "-Dlog4j.configuration=/home/user/log4j.properties"

where log4j.properties is a log4j configuration file that disables INFO-level messages.
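
A minimal sketch of such a file, assuming the log4j 1.x properties format that Spark used at the time; the com.amazonaws.latency logger name matches the one in the output above:

# Send everything to the console at INFO by default...
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# ...but raise the noisy AWS SDK request-metrics logger above INFO.
log4j.logger.com.amazonaws.latency=ERROR
log4j.logger.com.amazonaws=WARN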