I'm trying to create a Spark JavaRDD using newAPIHadoopFile and FixedLengthInputFormat. Here is my code snippet:
Configuration config = new Configuration();
// Each record in the .idx files has a fixed length of JPEG_INDEX_SIZE bytes
config.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, JPEG_INDEX_SIZE);
config.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
// Glob pattern matching all .idx files under /A/B/C on the default filesystem
String fileFilter = config.get("fs.defaultFS") + "/A/B/C/*.idx";
JavaPairRDD<LongWritable, BytesWritable> inputRDD =
    sparkContext.newAPIHadoopFile(fileFilter, FixedLengthInputFormat.class,
        LongWritable.class, BytesWritable.class, config);
At this point I get the following exception:
Error executing mapreduce job: com.fasterxml.jackson.databind.JsonMappingException: Infinite recursion (StackOverflowError)
Any idea what I'm doing wrong? I'm new to Spark. David