Bypassing org.apache.hadoop.mapred.InvalidInputException: Input Pattern s3n://[...] matches 0 files

Date: 2014-05-21 13:00:22

Tags: hadoop amazon-s3 apache-spark

This is a question I already asked on the Spark users mailing list; I hope to have more success here.

I'm not sure it is directly related to Spark, although Spark has something to do with the fact that I can't easily work around this problem.

I'm trying to read some files from S3 using various patterns. My problem is that some of those patterns may match nothing, and when that happens I get the following exception:

org.apache.hadoop.mapred.InvalidInputException: Input Pattern s3n://bucket/mypattern matches 0 files
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:140)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
    at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
    at org.apache.spark.rdd.FlatMappedRDD.getPartitions(FlatMappedRDD.scala:30)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
    at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:52)
    at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:52)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at org.apache.spark.rdd.UnionRDD.getPartitions(UnionRDD.scala:52)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
    at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:58)
    at org.apache.spark.api.java.JavaPairRDD.reduceByKey(JavaPairRDD.scala:335)
    ... 2 more

I would like a way to ignore missing files and simply do nothing in that case. The problem, IMO, is that I don't know whether a pattern will match anything until it is actually executed, and Spark only starts processing the data when an action occurs (the reduceByKey part here). So I can't just catch the error somewhere and let things carry on.
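(A tiny illustration of that laziness, assuming an existing JavaSparkContext named sc; the bucket path is just a placeholder:)

// Building the RDD does not touch S3 yet, so nothing fails here even if the pattern matches nothing.
JavaRDD<String> lines = sc.textFile("s3n://bucket/someMissingPattern/*");

// Only when an action runs does Hadoop list the inputs; this is where
// org.apache.hadoop.mapred.InvalidInputException surfaces for an empty pattern.
long count = lines.count();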

One solution would be to force Spark to process each path individually, but that would probably cost a lot in speed and/or memory, so I'm looking for another, efficient option.
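(For reference, a rough sketch of what that per-path workaround could look like; the class and method names are made up, and the take(1) per pattern is exactly the extra cost I would like to avoid:)

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class PerPathWorkaround {
    // Builds one RDD per pattern, drops the patterns that match no files,
    // and unions whatever is left.
    public static JavaRDD<String> textFilesIgnoringMissing(JavaSparkContext sc, List<String> patterns) {
        List<JavaRDD<String>> parts = new ArrayList<JavaRDD<String>>();
        for (String pattern : patterns) {
            JavaRDD<String> rdd = sc.textFile(pattern);
            try {
                rdd.take(1); // forces an input listing; throws if the pattern matches nothing
                parts.add(rdd);
            } catch (Exception e) {
                // the pattern matched no files (or the listing failed): skip it
            }
        }
        JavaRDD<String> result = sc.parallelize(new ArrayList<String>()); // empty fallback
        for (JavaRDD<String> part : parts) {
            result = result.union(part);
        }
        return result;
    }
}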

I'm using Spark 0.9.1. Thanks.

2 answers:

Answer 0 (score: 4)

OK, after digging a bit into Spark, and thanks to someone on the Spark users list who pointed me in the right direction, I think I've got it:

sc.newAPIHadoopFile("s3n://missingPattern/*", EmptiableTextInputFormat.class, LongWritable.class, Text.class, sc.hadoopConfiguration())
    .map(new Function<Tuple2<LongWritable, Text>, String>() {
        @Override
        public String call(Tuple2<LongWritable, Text> arg0) throws Exception {
            return arg0._2.toString();
        }
    })
    .count();

And the EmptiableTextInputFormat that does the magic:

import java.io.IOException;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.InvalidInputException;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class EmptiableTextInputFormat extends TextInputFormat {
    @Override
    public List<InputSplit> getSplits(JobContext arg0) throws IOException {
        try {
            return super.getSplits(arg0);
        } catch (InvalidInputException e) {
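            // A pattern matched no files: report no input splits instead of failing the whole job.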
            return Collections.<InputSplit> emptyList();
        }
    }
}

Eventually, one could check the message of the InvalidInputException for more precision.
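For instance, something like this (an untested sketch that reuses the imports of the class above; the string check relies on the exact wording of the message in the stack trace):

@Override
public List<InputSplit> getSplits(JobContext arg0) throws IOException {
    try {
        return super.getSplits(arg0);
    } catch (InvalidInputException e) {
        // Swallow only the "pattern matched nothing" case and rethrow everything else.
        if (e.getMessage() != null && e.getMessage().contains("matches 0 files")) {
            return Collections.<InputSplit> emptyList();
        }
        throw e;
    }
}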

Answer 1 (score: 2)

For anyone who wants a quick hack, here is an example:

import scala.util.Try

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

def wholeTextFilesIgnoreErrors(path: String, sc: SparkContext): RDD[(String, String)] = {
  // TODO This is a bit hacky, probably ought to work out a better way using the lower-level Hadoop API
  sc.wholeTextFiles(path.split(",").filter(subPath => Try(sc.textFile(subPath).take(1)).isSuccess).mkString(","))
}
#endif