How can we convert a HadoopRDD result into Parquet format?

Asked: 2016-06-20 05:19:55

Tags: hadoop apache-spark amazon-dynamodb rdd parquet

I am trying to read a DynamoDB table using Apache Spark.

Here is my implementation:

In the Spark shell:

spark-shell --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar
import org.apache.hadoop.io.Text
import org.apache.hadoop.dynamodb.DynamoDBItemWritable

/* Importing DynamoDBInputFormat and DynamoDBOutputFormat */ 
import org.apache.hadoop.dynamodb.read.DynamoDBInputFormat 
import org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat 
import org.apache.hadoop.mapred.JobConf 
import org.apache.hadoop.io.LongWritable   
var jobConf = new JobConf(sc.hadoopConfiguration) 
jobConf.set("dynamodb.servicename", "dynamodb") 
jobConf.set("dynamodb.input.tableName", "myDynamoDBTable")

// Point the connector at the DynamoDB endpoint and region
jobConf.set("dynamodb.endpoint", "dynamodb.us-east-1.amazonaws.com") 
jobConf.set("dynamodb.regionid", "us-east-1") jobConf.set("dynamodb.throughput.read", "1")
jobConf.set("dynamodb.throughput.read.percent", "1")
jobConf.set("dynamodb.version", "2011-12-05")  
jobConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")
jobConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat")  
var orders = sc.hadoopRDD(jobConf, classOf[DynamoDBInputFormat], classOf[Text], classOf[DynamoDBItemWritable])

The result of the read ends up in the orders variable; a quick way to inspect it is sketched below.
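
As a sanity check before converting anything (a sketch only; note that even this small read scans the table and consumes read throughput):

// orders is an RDD[(Text, DynamoDBItemWritable)]; inspect the first key/value pair
val (firstKey, firstItem) = orders.first()
println(firstItem.getItem) // java.util.Map[String, AttributeValue] of the item's attributes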

How can I convert this result into a Parquet file or format?

Update: I found this code, which accesses and converts DynamoDB data: https://github.com/onzocom/spark-dynamodb/blob/master/src/main/scala/com/onzo/spark/dynamodb/DynamoDbRelation.scala

1 Answer:

Answer 0 (score: 1)

A DataFrame can be saved as a Parquet file, but an RDD cannot. This is because a Parquet file requires a schema: an RDD is not required to have one, but a DataFrame must.
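
A minimal sketch of that conversion, continuing the spark-shell session from the question (Spark 1.6-era API, where sqlContext is predefined in the shell). The attribute names "orderId" and "amount" and the output path are hypothetical placeholders; substitute your table's actual attributes:

// Flatten each DynamoDBItemWritable into a case class so Spark can infer a schema
case class Order(orderId: String, amount: Double)

val orderRows = orders.map { case (_, item) =>
  val attrs = item.getItem // java.util.Map[String, AttributeValue]
  Order(attrs.get("orderId").getS, attrs.get("amount").getN.toDouble) // hypothetical attributes
}

// toDF() is provided by the SQLContext implicits available in spark-shell
import sqlContext.implicits._
val ordersDF = orderRows.toDF()

// A DataFrame carries a schema, so it can be written straight to Parquet
ordersDF.write.parquet("s3://my-bucket/orders-parquet/") // hypothetical output path

If the items do not share a fixed set of attributes, a common alternative is to map each item to a JSON string and let sqlContext.read.json infer the schema before writing Parquet.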