Saving data to DynamoDB using Apache Spark

Asked: 2016-03-09 20:31:20

Tags: apache-spark amazon-dynamodb apache-spark-sql amazon-emr spark-dataframe

I have an application where:

1. I read a JSON file from S3 into a DataFrame using SqlContext.read.json.
2. I then perform some transformations on the DataFrame.
3. Finally, I want to save the records to DynamoDB, using one of the record values as the key and the rest of the JSON as the item's attributes/columns.

I am trying something like this:

JobConf jobConf = new JobConf(sc.hadoopConfiguration());
jobConf.set("dynamodb.servicename", "dynamodb");
jobConf.set("dynamodb.input.tableName", "my-dynamo-table");   // Pointing to DynamoDB table
jobConf.set("dynamodb.endpoint", "dynamodb.us-east-1.amazonaws.com");
jobConf.set("dynamodb.regionid", "us-east-1");
jobConf.set("dynamodb.throughput.read", "1");
jobConf.set("dynamodb.throughput.read.percent", "1");
jobConf.set("dynamodb.version", "2011-12-05");

jobConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat");
jobConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat");

DataFrame df = sqlContext.read().json("s3n://mybucket/abc.json");
RDD<String> jsonRDD = df.toJSON();
JavaRDD<String> jsonJavaRDD = jsonRDD.toJavaRDD();
PairFunction<String, Text, DynamoDBItemWritable> keyData = new PairFunction<String, Text, DynamoDBItemWritable>() {
    public Tuple2<Text, DynamoDBItemWritable> call(String row) {
        DynamoDBItemWritable writeable = new DynamoDBItemWritable();
        try {
            System.out.println("JSON : " + row);
            JSONObject jsonObject = new JSONObject(row);

            System.out.println("JSON Object: " + jsonObject);

            Map<String, AttributeValue> attributes = new HashMap<String, AttributeValue>();
            AttributeValue attributeValue = new AttributeValue();
            attributeValue.setS(row);
            attributes.put("values", attributeValue);

            AttributeValue attributeKeyValue = new AttributeValue();
            attributeKeyValue.setS(jsonObject.getString("external_id"));
            attributes.put("primary_key", attributeKeyValue);

            AttributeValue attributeSecValue = new AttributeValue();
            attributeSecValue.setS(jsonObject.getString("123434335"));
            attributes.put("creation_date", attributeSecValue);
            writeable.setItem(attributes);
        } catch (Exception e) {
            // Swallowing the exception hides bad records; at minimum log it
            e.printStackTrace();
        }
        return new Tuple2<>(new Text(row), writeable);
    }
};

JavaPairRDD<Text, DynamoDBItemWritable> pairs = jsonJavaRDD
        .mapToPair(keyData);

// Debug only: collectAsMap() pulls every record back to the driver
Map<Text, DynamoDBItemWritable> map = pairs.collectAsMap();
System.out.println("Results : " + map);
pairs.saveAsHadoopDataset(jobConf);    

However, I don't see any data being written to DynamoDB, and I don't get any error messages either.

1 Answer:

Answer 0 (score: 5):

I'm not sure, but you seem to be making this more complicated than it needs to be.

I have successfully written an RDD to DynamoDB using the following approach:

val ddbInsertFormattedRDD = inputRDD.map { case (skey, svalue) =>
    val ddbMap = new util.HashMap[String, AttributeValue]()

    val key = new AttributeValue()
    key.setS(skey.toString)
    ddbMap.put("DynamoDbKey", key)


    val value = new AttributeValue()
    value.setS(svalue.toString)
    ddbMap.put("DynamoDbValue", value)  // must use a different attribute name than the key

    val item = new DynamoDBItemWritable()
    item.setItem(ddbMap)

    (new Text(""), item)
}

val ddbConf = new JobConf(sc.hadoopConfiguration)
ddbConf.set("dynamodb.output.tableName", "my-dynamo-table")
ddbConf.set("dynamodb.throughput.write.percent", "0.5")
ddbConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat")
ddbConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")
ddbInsertFormattedRDD.saveAsHadoopDataset(ddbConf)
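
For the DataFrame case in your question, a minimal, untested sketch of the same pattern might look like this (it assumes the attribute names primary_key, values, and external_id from your code, the ddbConf above, and that org.json is on the classpath as in your snippet):

// Sketch: convert the DataFrame's JSON rows to (Text, DynamoDBItemWritable)
// pairs and write them with the same ddbConf as above.
import org.apache.hadoop.io.Text
import org.apache.hadoop.dynamodb.DynamoDBItemWritable
import com.amazonaws.services.dynamodbv2.model.AttributeValue
import scala.collection.JavaConverters._

val ddbRDD = df.toJSON.map { row =>
  val json = new org.json.JSONObject(row)
  val item = new DynamoDBItemWritable()
  item.setItem(Map(
    "primary_key" -> new AttributeValue().withS(json.getString("external_id")),
    "values"      -> new AttributeValue().withS(row)
  ).asJava)
  (new Text(""), item)  // the Text key is just a placeholder, as above
}
ddbRDD.saveAsHadoopDataset(ddbConf)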

Also, have you checked that you have ramped up your table's capacity correctly?
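
As a quick sanity check on capacity, something like this should work with the AWS SDK that the connector already depends on (a sketch; the table name and region are taken from your question):

// Read back the table's provisioned throughput to confirm the write capacity
import com.amazonaws.regions.{Region, Regions}
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient

val client = new AmazonDynamoDBClient()
client.setRegion(Region.getRegion(Regions.US_EAST_1))
val throughput = client.describeTable("my-dynamo-table").getTable.getProvisionedThroughput
println(s"Provisioned WCU: ${throughput.getWriteCapacityUnits}")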