I am trying to read a file from HDFS (in this case S3) into Spark as an RDD. The file is in SequenceFile format, but I cannot decode its contents into a String. I have the following code:
package com.spark.example.ExampleSpark;

import java.util.List;
import scala.Tuple2;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

public class RawEventDump
{
    public static void main( String[] args )
    {
        SparkConf conf = new SparkConf().setAppName("atlas_raw_events").setMaster("local[2]");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        JavaPairRDD<String, Byte> file = jsc.sequenceFile("s3n://key_id:secret_key@<file>", String.class, Byte.class);
        List<String> values = file.map(
            new Function<Tuple2<String, Byte>, String>() {
                public String call(Tuple2 row) {
                    return "Value: " + row._2.toString() + "\n";
                }
            }).collect();
        System.out.println(values);
    }
}
But I get the following output:
Value: 7b 22 65 76 65 6e ...
, Value: 7b 22 65 76 65 6e 74 22 3a ...
, Value: 7b 22 65 76 65 6...
...
How do I properly read the contents of this file in Spark?
Answer 0 (score: 4)
Sequence files store Hadoop Writable types such as Text, BytesWritable, LongWritable, etc., so the RDD type should be JavaPairRDD&lt;LongWritable, BytesWritable&gt;, not JavaPairRDD&lt;String, Byte&gt;.

Then, to convert each value to a String, call org.apache.hadoop.io.Text.decode(row._2.getBytes()).
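Here is a minimal sketch of the corrected job, assuming the keys are LongWritable and the values are BytesWritable (the actual Writable types depend on how the file was written; if unsure, inspect it with hadoop fs -text):

import java.util.List;
import scala.Tuple2;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class RawEventDump
{
    public static void main( String[] args )
    {
        SparkConf conf = new SparkConf().setAppName("atlas_raw_events").setMaster("local[2]");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        // Read with the Hadoop Writable types the file was actually written with.
        JavaPairRDD<LongWritable, BytesWritable> file =
            jsc.sequenceFile("s3n://key_id:secret_key@<file>", LongWritable.class, BytesWritable.class);

        List<String> values = file.map(
            new Function<Tuple2<LongWritable, BytesWritable>, String>() {
                public String call(Tuple2<LongWritable, BytesWritable> row) throws Exception {
                    // Decode the raw UTF-8 bytes into a String. Pass getLength()
                    // because getBytes() returns the backing buffer, which may be
                    // padded beyond the valid data.
                    BytesWritable bw = row._2;
                    return "Value: " + Text.decode(bw.getBytes(), 0, bw.getLength()) + "\n";
                }
            }).collect();
        System.out.println(values);
    }
}

This also explains the output you saw: BytesWritable.toString() renders each byte as two hex digits, and 7b 22 65 76 65 6e 74 22 3a is the UTF-8 encoding of {"event": — decoding the bytes instead of calling toString() yields the readable text.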