Spark: java.io.NotSerializableException: com.amazonaws.services.s3.AmazonS3Client

Date: 2019-05-15 10:53:05

Tags: scala amazon-web-services apache-spark amazon-s3

I am trying to read a large number of large files from S3, which takes considerable time if done as a Dataframe function. So, following this post and the related gist, I am trying to use an RDD to read the S3 objects in parallel, as below:

def dfFromS3Objects(s3: AmazonS3, bucket: String, prefix: String, pageLength: Int = 1000) = {
    import java.io.InputStream

    import com.amazonaws.services.s3._
    import model._
    import spark.sqlContext.implicits._

    import scala.collection.JavaConversions._
    import scala.io.Source

    val request = new ListObjectsRequest()
    request.setBucketName(bucket)
    request.setPrefix(prefix)
    request.setMaxKeys(pageLength)

    val objs: ObjectListing = s3.listObjects(request) // Note that this method returns truncated data if longer than the "pageLength" above. You might need to deal with that.

    spark.sparkContext.parallelize(objs.getObjectSummaries.map(_.getKey).toList)
      .flatMap { key => Source.fromInputStream(s3.getObject(bucket, key).getObjectContent: InputStream).getLines }.toDF()
  }

which, when tested, ends up with:
Caused by: java.io.NotSerializableException: com.amazonaws.services.s3.AmazonS3Client
Serialization stack:
    - object not serializable (class: com.amazonaws.services.s3.AmazonS3Client, value: com.amazonaws.services.s3.AmazonS3Client@35c8be21)
    - field (class: de.smava.data.bards.anonymize.HistoricalBardAnonymization$$anonfun$dfFromS3Objects$2, name: s3$1, type: interface com.amazonaws.services.s3.AmazonS3)
    - object (class de.smava.data.bards.anonymize.HistoricalBardAnonymization$$anonfun$dfFromS3Objects$2, <function1>)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:342)
    ... 63 more

I understand that the AmazonS3 object I am supplying needs to be shipped to the executors, and hence needs to be serializable, but this is from a sample snippet, meaning someone got it working. I need help figuring out what I am missing here.

1 Answer:

Answer 0 (score: 1):

According to the gist, s3 is defined as a method that will create a new client for every call. This is not advisable. One way around the problem is to use mapPartitions:

spark
  .sparkContext
  .parallelize(objs.getObjectSummaries.map(_.getKey).toList)
  .mapPartitions { it =>
    val s3 = ... // init the client here
    it.flatMap { key => Source.fromInputStream(s3.getObject(bucket, key).getObjectContent: InputStream).getLines }
  }
  .toDF
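
For concreteness, here is a minimal sketch of what the "init the client here" step could look like with the AWS SDK for Java v1. The AmazonS3ClientBuilder call (default credentials and region provider chain) is an assumption, not something the answer prescribes, and objs, bucket and spark are reused from the question's snippet:

import java.io.InputStream

import com.amazonaws.services.s3.{AmazonS3, AmazonS3ClientBuilder}

import scala.collection.JavaConversions._
import scala.io.Source
import spark.implicits._

spark
  .sparkContext
  .parallelize(objs.getObjectSummaries.map(_.getKey).toList)
  .mapPartitions { it =>
    // Built on the executor, once per partition, so it is never serialized from the driver.
    // Assumed builder call; credentials/region come from the default provider chain.
    val s3: AmazonS3 = AmazonS3ClientBuilder.standard().build()
    it.flatMap { key =>
      Source.fromInputStream(s3.getObject(bucket, key).getObjectContent: InputStream).getLines
    }
  }
  .toDF()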

This will still create multiple clients per JVM, but possibly far fewer than the version that creates one per object. If you want to reuse the client between threads inside a JVM, you could, for example, wrap it in a top-level object

object Foo {
  val s3 = ...
}

and use a static configuration for the client.
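
A possible concrete version of that object, again assuming the AWS SDK v1 builder (the lazy initialization and builder call are illustrative, not mandated by the answer):

import com.amazonaws.services.s3.{AmazonS3, AmazonS3ClientBuilder}

object Foo {
  // Initialized lazily, once per JVM (i.e. once per executor), and shared by all
  // tasks running in that JVM; configuration comes from the default provider chain.
  lazy val s3: AmazonS3 = AmazonS3ClientBuilder.standard().build()
}

Inside mapPartitions the tasks would then refer to Foo.s3 instead of building a new client per partition.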