Spark 2.2.0 - How to write/read a DataFrame to/from DynamoDB

Asked: 2017-12-08 21:48:13

Tags: scala apache-spark amazon-dynamodb amazon-emr

I would like my Spark application to read a table from DynamoDB, do stuff, then write the result to DynamoDB.

Read the table into a DataFrame

Right now, I can read the table from DynamoDB into Spark as a hadoopRDD and convert it to a DataFrame. However, I have to use a regular expression to extract the value from the AttributeValue. Is there a better/more elegant way? I couldn't find anything in the AWS API.

package main.scala.util

import org.apache.spark.sql.SparkSession
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.rdd.RDD
import scala.util.matching.Regex
import java.util.HashMap

import com.amazonaws.services.dynamodbv2.model.AttributeValue
import org.apache.hadoop.io.Text
import org.apache.hadoop.dynamodb.DynamoDBItemWritable
/* Importing DynamoDBInputFormat and DynamoDBOutputFormat */
import org.apache.hadoop.dynamodb.read.DynamoDBInputFormat
import org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.io.LongWritable

object Tester {

  // Example serialized value: {S: 298905396168806365,}
  def extractValue: (String => String) = (aws: String) => {
    val pat_value = "\\s(.*),".r
    pat_value.findFirstMatchIn(aws) match {
      case Some(number) => number.group(1).toString
      case None         => ""
    }
  }

  def main(args: Array[String]) {
    val spark = SparkSession.builder().getOrCreate()
    val sparkContext = spark.sparkContext

    import spark.implicits._

    // UDF to extract the value from an AttributeValue
    val col_extractValue = udf(extractValue)

    // Configure the connection to DynamoDB
    val jobConf_add = new JobConf(sparkContext.hadoopConfiguration)
    jobConf_add.set("dynamodb.input.tableName", "MyTable")
    jobConf_add.set("dynamodb.output.tableName", "MyTable")
    jobConf_add.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")
    jobConf_add.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat")

    // RDD[(org.apache.hadoop.io.Text, org.apache.hadoop.dynamodb.DynamoDBItemWritable)]
    val hadooprdd_add = sparkContext.hadoopRDD(jobConf_add, classOf[DynamoDBInputFormat], classOf[Text], classOf[DynamoDBItemWritable])

    // Convert the HadoopRDD to an RDD of tuples
    val rdd_add: RDD[(String, String)] = hadooprdd_add.map {
      case (text, dbwritable) => (dbwritable.getItem().get("PIN").toString(), dbwritable.getItem().get("Address").toString())
    }

    // Convert the RDD to a DataFrame and extract the values from the AttributeValue
    val df_add = rdd_add.toDF()
      .withColumn("PIN", col_extractValue($"_1"))
      .withColumn("Address", col_extractValue($"_2"))
      .select("PIN", "Address")
  }
}
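For what it's worth, one way to drop the regex entirely is to read the typed value straight off the AttributeValue with its typed getters (getS for strings, getN for numbers, and so on). A sketch, assuming PIN and Address are both stored as string (S) attributes:

```scala
// Inside main(), after building hadooprdd_add: use AttributeValue.getS()
// instead of toString() plus a regex. getS returns null when the attribute
// is not S-typed, so this assumes both fields really are strings.
val rdd_typed: RDD[(String, String)] = hadooprdd_add.map {
  case (_, dbwritable) =>
    val item = dbwritable.getItem()
    (item.get("PIN").getS(), item.get("Address").getS())
}
val df_typed = rdd_typed.toDF("PIN", "Address")
```

This makes the extractValue UDF unnecessary for string attributes, though any non-S attribute would need the matching getter instead.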

Write the DataFrame to DynamoDB

Many answers on Stack Overflow and elsewhere only point to the blog post and the emr-dynamodb-hadoop GitHub repository. None of those resources actually demonstrate how to write to DynamoDB.

I tried converting my DataFrame to an RDD[Row] without success.

df_add.rdd.saveAsHadoopDataset(jobConf_add)

What are the steps to write this DataFrame to DynamoDB? (Bonus points if you tell me how to control overwrite vs. putItem ;)

Note: df_add has the same schema as MyTable in DynamoDB.

EDIT: I followed the suggestion from this answer, which points to this post on Using Spark SQL for ETL:

// Format the table for DynamoDB
val output_rdd = df_add.as[(String, String)].rdd.map(a => {
  val ddbMap = new HashMap[String, AttributeValue]()

  // Field PIN
  val PINValue = new AttributeValue() // New AttributeValue
  PINValue.setS(a._1)                 // Set the value of the attribute as a String (first element of the tuple)
  ddbMap.put("PIN", PINValue)         // Add to the HashMap

  // Field Address
  val AddValue = new AttributeValue() // New AttributeValue
  AddValue.setS(a._2)                 // Set the value of the attribute as a String
  ddbMap.put("Address", AddValue)     // Add to the HashMap

  val item = new DynamoDBItemWritable()
  item.setItem(ddbMap)

  (new Text(""), item)
})

output_rdd.saveAsHadoopDataset(jobConf_add)

However, now I get java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.hadoop.io.Text even though I followed the documentation... Do you have any suggestions?

编辑2 :在Using Spark SQL for ETL上仔细阅读这篇文章:

  

获得DataFrame后,执行转换以使RDD与DynamoDB自定义输出格式知道如何编写的类型相匹配。自定义输出格式需要包含Text和DynamoDBItemWritable类型的元组。

Taking this into account, the code below is exactly what the blog post suggests, except that I cast output_df to an RDD, since saveAsHadoopDataset doesn't work otherwise. And now, I get Exception in thread "main" scala.reflect.internal.Symbols$CyclicReference: illegal cyclic reference involving object InterfaceAudience. I am at the end of my rope!

// Format the table for DynamoDB
val output_df = df_add.map(a => {
  val ddbMap = new HashMap[String, AttributeValue]()

  // Field PIN
  val PINValue = new AttributeValue()  // New AttributeValue
  PINValue.setS(a.get(0).toString())   // Set the value of the attribute as a String
  ddbMap.put("PIN", PINValue)          // Add to the HashMap

  // Field Address
  val AddValue = new AttributeValue()  // New AttributeValue
  AddValue.setS(a.get(1).toString())   // Set the value of the attribute as a String
  ddbMap.put("Address", AddValue)      // Add to the HashMap

  val item = new DynamoDBItemWritable()
  item.setItem(ddbMap)

  (new Text(""), item)
})

output_df.rdd.saveAsHadoopDataset(jobConf_add)

3 Answers:

Answer 0 (score: 5):

I was following that "Using Spark SQL for ETL" link and ran into the same "illegal cyclic reference" exception. The solution for that exception is quite simple (but it cost me two days to figure out), as below. The key point is to use the map function on the RDD of the DataFrame, not on the DataFrame itself.

val ddbConf = new JobConf(spark.sparkContext.hadoopConfiguration)
ddbConf.set("dynamodb.output.tableName", "<myTableName>")
ddbConf.set("dynamodb.throughput.write.percent", "1.5")
ddbConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat")
ddbConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")


val df_ddb =  spark.read.option("header","true").parquet("<myInputFile>")
val schema_ddb = df_ddb.dtypes

var ddbInsertFormattedRDD = df_ddb.rdd.map(a => {
  val ddbMap = new HashMap[String, AttributeValue]()

  for (i <- 0 until schema_ddb.length) {
    val value = a.get(i)
    if (value != null) {
      val att = new AttributeValue()
      att.setS(value.toString)
      ddbMap.put(schema_ddb(i)._1, att)
    }
  }

  val item = new DynamoDBItemWritable()
  item.setItem(ddbMap)

  (new Text(""), item)
})

ddbInsertFormattedRDD.saveAsHadoopDataset(ddbConf)
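The map above stores every column as a string via setS. If the target table mixes attribute types, one hedged extension is to branch on the type names that DataFrame.dtypes reports; setN and setBOOL below are the AttributeValue setters from the AWS SDK for Java v1 (numbers travel to DynamoDB as strings through setN):

```scala
// Sketch: pick the AttributeValue setter from the Spark SQL type name.
// Numeric types go through setN, booleans through setBOOL, and everything
// else falls back to setS. Assumes the type names reported by dtypes.
def toAttributeValue(value: Any, sparkType: String): AttributeValue = {
  val att = new AttributeValue()
  sparkType match {
    case "IntegerType" | "LongType" | "DoubleType" | "FloatType" =>
      att.setN(value.toString)
    case "BooleanType" =>
      att.setBOOL(value.asInstanceOf[Boolean])
    case _ =>
      att.setS(value.toString)
  }
  att
}

// Usage inside the map above, replacing the setS-only branch:
//   ddbMap.put(schema_ddb(i)._1, toAttributeValue(value, schema_ddb(i)._2))
```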

Answer 1 (score: 4):

We created a custom DynamoDB data source for Spark:

https://github.com/audienceproject/spark-dynamodb

It has many elegant features:

  • Distributed, parallel scan with lazy evaluation
  • Throughput control by rate limiting to a target fraction of the provisioned table/index capacity
  • Schema discovery to suit your needs
    • Dynamic inference
    • Static analysis of case classes
  • Column and filter pushdown
  • Global secondary index support
  • Write support

I think it would definitely fit your use case. We would be very happy if you could check it out and give us feedback.
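For reference, reading and writing with this library looks roughly like the sketch below; the implicits import and the dynamodb reader/writer methods reflect the repository's README at the time, so check the README for the current API:

```scala
import com.audienceproject.spark.dynamodb.implicits._

// Read a DynamoDB table into a DataFrame; the schema is inferred
// (or taken from a case class via static analysis).
val dynamoDf = spark.read.dynamodb("SomeTableName")
dynamoDf.show(100)

// Write a DataFrame back to an existing DynamoDB table.
dynamoDf.write.dynamodb("SomeOtherTableName")
```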

Answer 2 (score: 0):

Here is a simpler working example.

For example, writing to DynamoDB from a Kinesis stream using a Hadoop RDD:

https://github.com/kali786516/Spark2StructuredStreaming/blob/master/src/main/scala/com/dataframe/part11/kinesis/consumer/KinesisSaveAsHadoopDataSet/TransactionConsumerDstreamToDynamoDBHadoopDataSet.scala

And reading from DynamoDB using a Hadoop RDD and Spark SQL, without a regex:

val ddbConf = new JobConf(spark.sparkContext.hadoopConfiguration)
//ddbConf.set("dynamodb.output.tableName", "student")
ddbConf.set("dynamodb.input.tableName", "student")
ddbConf.set("dynamodb.throughput.write.percent", "1.5")
ddbConf.set("dynamodb.endpoint", "dynamodb.us-east-1.amazonaws.com")
ddbConf.set("dynamodb.regionid", "us-east-1")
ddbConf.set("dynamodb.servicename", "dynamodb")
ddbConf.set("dynamodb.throughput.read", "1")
ddbConf.set("dynamodb.throughput.read.percent", "1")
ddbConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat")
ddbConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")
//ddbConf.set("dynamodb.awsAccessKeyId", credentials.getAWSAccessKeyId)
//ddbConf.set("dynamodb.awsSecretAccessKey", credentials.getAWSSecretKey)


val data = spark.sparkContext.hadoopRDD(ddbConf, classOf[DynamoDBInputFormat], classOf[Text], classOf[DynamoDBItemWritable])

val simple2: RDD[(String)] = data.map { case (text, dbwritable) => (dbwritable.toString)}

spark.read.json(simple2).registerTempTable("gooddata")

spark.sql("select replace(replace(split(cast(address as string),',')[0],']',''),'[','') as housenumber from gooddata").show(false)
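The replace/split chain in that SQL can be checked in plain Scala; a sketch with a made-up serialized address (the real column value comes from the item's JSON-like toString form):

```scala
// Mimics: replace(replace(split(cast(address as string), ',')[0], ']', ''), '[', '')
// applied to a hypothetical serialized address array.
val address = "[123 Main St, Springfield]"
val housenumber = address.split(",")(0).replace("]", "").replace("[", "")
println(housenumber) // 123 Main St
```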