Spark 2.0 S3 metadata loading hangs when reading multiple DataFrames

Asked: 2016-08-09 20:20:32

Tags: apache-spark amazon-s3 avro

We are currently evaluating an upgrade from Spark 1.6 to Spark 2.0, but we have hit a very strange bug that is blocking the migration.

One of our requirements is to read multiple datasets from S3 and combine them. Loading 50 datasets works fine; however, on loading the 51st dataset, everything hangs while looking up metadata. This is not intermittent, and it happens every time.

The data format is Avro containers, read with spark-avro 3.0.0 (a sketch of the loading pattern is shown below).
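A minimal sketch of the loading pattern described above, to make the reproduction concrete. The bucket name, dataset paths, number of datasets, and the final union step are hypothetical and only for illustration; the spark-avro 3.0.0 format name "com.databricks.spark.avro" is the documented one.

import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder()
  .appName("multi-dataset-load")
  .getOrCreate()

// Hypothetical list of S3 prefixes, one per dataset.
val paths: Seq[String] = (1 to 60).map(i => s"s3://my-bucket/datasets/part_$i")

// Each load() triggers S3 metadata lookups; in our case the 51st call
// never returns and the driver sits in getObjectMetadata (see the thread
// dump below).
val frames: Seq[DataFrame] =
  paths.map(p => spark.read.format("com.databricks.spark.avro").load(p))

// Combine all datasets into a single DataFrame.
val combined = frames.reduce(_ union _)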

Does anyone have an answer for this?

  • This is not related to the socket timeout issue; none of the socket threads are blocked.
<<main thread dump>>
java.lang.Thread.sleep(Native Method)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doPauseBeforeRetry(AmazonHttpClient.java:1475)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.pauseBeforeRetry(AmazonHttpClient.java:1439)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:794)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3826)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1015)
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:991)
com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:212)
sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
com.sun.proxy.$Proxy36.retrieveMetadata(Unknown Source)
com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:780)
org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1428)
com.amazon.ws.emr.hadoop.fs.EmrFileSystem.exists(EmrFileSystem.java:313)
org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:289)
org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:324)
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:132)

1 Answer:

Answer 0 (score: 0)

It appears that spark-avro exhausts the connection pool by not releasing connections.

https://github.com/databricks/spark-avro/issues/156
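If the hang is indeed the EMRFS S3 connection pool filling up with unreleased connections, one possible stopgap is to enlarge that pool until the leak is fixed upstream. The property name fs.s3.maxConnections is assumed here to be the relevant EMRFS setting; verify it against your EMR release before relying on it. This is a sketch, not a confirmed fix.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("multi-dataset-load")
  // Assumed EMRFS property: raises the S3 client connection pool so that
  // leaked connections do not exhaust it after ~50 dataset loads.
  .config("spark.hadoop.fs.s3.maxConnections", "500")
  .getOrCreate()

The real remedy is to pick up a spark-avro release that closes its S3 input streams once the linked issue is resolved; raising the pool size only delays the point at which the driver hangs.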