I want to create a test to make sure that my class wrapping the AWS S3 client issues the correct number of client requests.
class Wrapper {
    // fields such as client, secret and region omitted
    private void buildClient() {
        this.client = AmazonS3ClientBuilder.standard()
                .withCredentials(this.secret)
                .withRegion(this.region)
                .build();
    }

    public void doSomething() {
        while (checkSomething()) {
            client.doSomething();
            client.doSomething();
        }
    }
}
I want to do something like this:
class WrapperTest {
    public void testDoSomething() {
        wrapper.doSomething();
        assertEquals(3, numberOfHttpRequests);
    }
}
For testing purposes I can always mock the client object, but I'm also considering collecting this data in production for performance analysis (so collecting the number of bytes could be just as useful as collecting the number of HTTP requests itself).
So far, reading the Java docs, I've found: 1) ProgressEvents, 2) Logging and 3) AWSRequestMetrics. But I'm not sure which of these is better suited for collecting the number of requests, or how to configure them programmatically.
Answer 0 (score: 0):
For the unit test, you definitely want to use Mockito, ScalaMock, or any other mocking API.
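As an illustration, a minimal Mockito sketch could look like the following. It assumes the Wrapper can be handed a pre-built AmazonS3 instance (a constructor or setter the original buildClient() does not offer yet) and that doSomething() ultimately delegates to a concrete call such as putObject(...); both of those are my assumptions, not part of the original code.

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.PutObjectRequest;
import org.junit.Test;

public class WrapperTest {

    @Test
    public void doSomethingIssuesExpectedNumberOfClientCalls() {
        // Mocked client: every invocation is recorded, no real HTTP traffic is produced.
        AmazonS3 s3 = mock(AmazonS3.class);

        // Hypothetical constructor that injects the client instead of building it internally.
        Wrapper wrapper = new Wrapper(s3);
        wrapper.doSomething();

        // Verify the wrapper issued exactly three calls to the client.
        verify(s3, times(3)).putObject(any(PutObjectRequest.class));
    }
}

Injecting the client this way is what makes the call count observable in a unit test; the log and metrics options below are better suited for production measurements.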
Regarding metrics for the number of requests:
1) I'd recommend collecting the logs and shipping them to Elasticsearch (this is a very common setup), where you can aggregate on specific fields.
For example, in src/main/resources/log4j.properties:
log4j.rootLogger=INFO, file, consoleLogs
# Direct log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=myapp.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
log4j.appender.consoleLogs=org.apache.log4j.ConsoleAppender
log4j.appender.consoleLogs.layout=org.apache.log4j.PatternLayout
log4j.appender.consoleLogs.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
log4j.logger.com.amazonaws.request=DEBUG
The AWS SDK's log4j output then looks like this:
2017-06-01 11:56:58 DEBUG request:1137 - Sending Request: PUT https://samsa-repo.s3.amazonaws.com /sendme.log Headers: (User-Agent: aws-sdk-java/1.11.109 Mac_OS_X/10.11.6 Java_HotSpot(TM)_64-Bit_Server_VM/25.111-b14/1.8.0_111 scala/2.11.8, amz-sdk-invocation-id: 9179e5c2-fee3-4e6e-abb9-b50f882f1966, Content-Length: 9, x-amz-storage-class: REDUCED_REDUNDANCY, Content-MD5: /UERdk1lrFHXgNJHTSd3QA==, Content-Type: application/octet-stream, )
2017-06-01 11:56:58 DEBUG request:87 - Received successful response: 200, AWS Request ID: 3695D599CB1FD794
And for an error response:
2017-06-01 13:58:24 DEBUG request:1572 - Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: The Content-MD5 you specified did not match what we received. (Service: Amazon S3; Status Code: 400; Error Code: BadDigest; Request ID: 684584BD135900F3), S3 Extended Request ID: Y1NowPaA/mhydTWaDBupS7o7CA/PkliiVKzmDrDQwENIOdrg049h8BZ+I6Pi1GC8TZqBq1AJGJg=
Since logs are easier to query in JSON format, you can use the following Maven dependency to emit the log4j output as JSON:
<dependency>
    <groupId>net.logstash.log4j</groupId>
    <artifactId>jsonevent-layout</artifactId>
    <version>1.7</version>
</dependency>
Then change your log layout to net.logstash.log4j.JSONEventLayoutV1:
log4j.rootLogger=INFO, file, consoleLogs
# Direct log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=myapp.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=net.logstash.log4j.JSONEventLayoutV1
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
log4j.appender.consoleLogs=org.apache.log4j.ConsoleAppender
log4j.appender.consoleLogs.layout=net.logstash.log4j.JSONEventLayoutV1
log4j.appender.consoleLogs.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
log4j.logger.com.amazonaws.request=DEBUG
The logs then look like this:
{
"@timestamp": "2017-06-01T21:05:37.204Z",
"source_host": "M00974000.prayagupd.net",
"file": "AmazonHttpClient.java",
"method": "executeOneRequest",
"level": "DEBUG",
"line_number": "1137",
"thread_name": "ScalaTest-run-running-PublishToSimpleStorageServiceSpecs",
"@version": 1,
"logger_name": "com.amazonaws.request",
"message": "Sending Request: HEAD https://samsa-repo.s3.amazonaws.com / Headers: (User-Agent: aws-sdk-java/1.11.109 Mac_OS_X/10.11.6 Java_HotSpot(TM)_64-Bit_Server_VM/25.111-b14/1.8.0_111 scala/2.11.8, amz-sdk-invocation-id: 39fd8121-b40d-cb48-a6ea-65cf580f569f, Content-Type: application/octet-stream, ) ",
"class": "com.amazonaws.http.AmazonHttpClient$RequestExecutor",
"mdc": {}
},
{
"@timestamp": "2017-06-01T21:05:38.337Z",
"source_host": "M00974000.prayagupd.net",
"file": "AwsResponseHandlerAdapter.java",
"method": "handle",
"level": "DEBUG",
"line_number": "87",
"thread_name": "ScalaTest-run-running-PublishToSimpleStorageServiceSpecs",
"@version": 1,
"logger_name": "com.amazonaws.request",
"message": "Received successful response: 200, AWS Request ID: null",
"class": "com.amazonaws.http.response.AwsResponseHandlerAdapter",
"mdc": {}
}
Once the logs have been shipped to Elasticsearch with a forwarder such as Filebeat, you can aggregate and search the logs/requests on the value Sending Request.
If a Filebeat forwarder / Elasticsearch / Kibana dashboard is overkill, you may prefer to forward and aggregate the logs through AWS CloudWatch instead.
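If you also want to count requests programmatically inside the application (closer to the AWSRequestMetrics option mentioned in the question), one possible sketch is to register a custom RequestMetricCollector on the client builder. The class name CountingMetricCollector and the buildCountingClient helper are my own illustration, not something from the SDK docs:

import java.util.concurrent.atomic.AtomicLong;

import com.amazonaws.Request;
import com.amazonaws.Response;
import com.amazonaws.metrics.RequestMetricCollector;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class CountingMetricCollector extends RequestMetricCollector {

    private final AtomicLong requestCount = new AtomicLong();

    @Override
    public void collectMetrics(Request<?> request, Response<?> response) {
        // The SDK calls this for each request it executes, so a simple counter is enough here.
        requestCount.incrementAndGet();
    }

    public long getRequestCount() {
        return requestCount.get();
    }

    // Example wiring -- inside buildClient() you could register the collector like this.
    public static AmazonS3 buildCountingClient(CountingMetricCollector collector) {
        return AmazonS3ClientBuilder.standard()
                .withMetricsCollector(collector)
                .build();
    }
}

This keeps the counter available in production without depending on log shipping, and the same callback receives the Request object, so it is also a natural place to derive other numbers such as payload sizes.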