Here is the code snippet:
val client = AmazonDynamoDBClientBuilder.standard
  .withRegion(Regions.the_region)
  .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("access_key", "secret_key")))
  .build()
val dynamoDB = new DynamoDB(client)
val table = dynamoDB.getTable("tbl_name")

def putItem(email: String, name: String): Unit = {
  val item = new Item()
    .withPrimaryKey("email", email)
    .withNumber("ts", System.currentTimeMillis)
    .withString("name", name)
  table.putItem(item)
}
spark.sql("""
select
email,
name
from db.hive_table_name
""").rdd.repartition(40).map(row => putItem(row.getString(0), row.getString(1))).collect()
I want to write each record into a DynamoDB table using the AWS Java SDK, but it fails with the following error:
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2094)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:370)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:369)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.map(RDD.scala:369)
How can I adjust the code so that the DynamoDB and Table objects are created per partition, so the write can take advantage of the Spark job's parallelism? Thanks!
Answer 0 (score: 1)
Instead of map and collect, I would use foreachPartition. The DynamoDB client and Table are not serializable, so they cannot be shipped to the executors inside the closure; creating them inside foreachPartition gives each partition its own connection:
spark.sql(query).rdd.repartition(40).foreachPartition { iter =>
  // Create the client and table inside the partition so nothing non-serializable is captured by the closure
  val client = AmazonDynamoDBClientBuilder.standard.withRegion(Regions.the_region)
    .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("access_key", "secret_key"))).build()
  val dynamoDB = new DynamoDB(client)
  val table = dynamoDB.getTable("tbl_name")
  // Build the Item here and call table.putItem directly; calling the driver-side putItem helper
  // would still capture the driver's non-serializable table
  iter.foreach { row =>
    table.putItem(new Item().withPrimaryKey("email", row.getString(0))
      .withNumber("ts", System.currentTimeMillis).withString("name", row.getString(1)))
  }
}
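
If per-row PutItem calls turn out to be too slow, a further refinement is to group each partition's rows and write them with BatchWriteItem, which accepts up to 25 items per request, using the document API's TableWriteItems, batchWriteItem and batchWriteItemUnprocessed calls. This is a minimal, untested sketch, reusing the same placeholder region, credentials, table name and column order as above:

import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.regions.Regions
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.document.{DynamoDB, Item, TableWriteItems}

spark.sql(query).rdd.repartition(40).foreachPartition { iter =>
  val client = AmazonDynamoDBClientBuilder.standard.withRegion(Regions.the_region)
    .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("access_key", "secret_key"))).build()
  val dynamoDB = new DynamoDB(client)

  // BatchWriteItem accepts at most 25 items per request
  iter.grouped(25).foreach { rows =>
    val items = rows.map { row =>
      new Item().withPrimaryKey("email", row.getString(0))
        .withNumber("ts", System.currentTimeMillis).withString("name", row.getString(1))
    }
    var outcome = dynamoDB.batchWriteItem(new TableWriteItems("tbl_name").withItemsToPut(items: _*))
    // DynamoDB may leave some items unprocessed (e.g. when throttled); keep retrying them
    while (!outcome.getUnprocessedItems.isEmpty) {
      outcome = dynamoDB.batchWriteItemUnprocessed(outcome.getUnprocessedItems)
    }
  }
  client.shutdown()
}

Note that a single BatchWriteItem request cannot contain two items with the same primary key, so deduplicate the emails first if db.hive_table_name may contain duplicates.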