Skip the first few rows in Spark

Date: 2017-03-22 15:45:46

Tags: apache-spark apache-spark-sql spark-dataframe

I have Spark 2.0 code that reads .gz (text) files and writes them to a HIVE table.

How can I ignore the first two lines of every file? I just want to skip the first two lines.

   SparkSession spark = SparkSession
            .builder()
            .master("local")
            .appName("SparkSessionFiles")
            .config("spark.some.config.option", "some-value")
            .enableHiveSupport()
            .getOrCreate();

  JavaRDD<mySchema> peopleRDD = spark.read()
      .textFile("file:///app/home/emm/zipfiles/myzips/")
      .javaRDD()
      .map(new Function<String, mySchema>()
        {
            @Override
            public mySchema call(String line) throws Exception
            {
                String[] parts = line.split(";");
                mySchema mySchema = new mySchema();

                mySchema.setCFIELD1(parts[0]);
                mySchema.setCFIELD2(parts[1]);
                mySchema.setCFIELD3(parts[2]);
                mySchema.setCFIELD4(parts[3]);
                mySchema.setCFIELD5(parts[4]);

                return mySchema;
            }
        });

 // Apply a schema to an RDD of JavaBeans to get a DataFrame
    Dataset<Row> myDF = spark.createDataFrame(peopleRDD, mySchema.class);

    myDF.createOrReplaceTempView("myView");

    spark.sql("INSERT INTO myHIVEtable SELECT * from myView");

UPDATE: Modified code

Lambdas don't work in my Eclipse, so I used regular Java syntax instead. I'm now getting an exception.

 .....
  Function2 removeHeader= new Function2<Integer, Iterator<String>, Iterator<String>>(){
        public Iterator<String> call(Integer ind, Iterator<String> iterator) throws Exception {
            System.out.println("ind="+ind);
            if((ind==0) && iterator.hasNext()){
                iterator.next();
                iterator.next();
                return iterator;
            }else
                return iterator;
        }
    };

JavaRDD<mySchema> peopleRDD = spark.read() 
  .textFile(path) //file:///app/home/emm/zipfiles/myzips/
  .javaRDD()
  .mapPartitionsWithIndex(removeHeader,false)
  .map(new Function<String, mySchema>()
    {
    ........


java.util.NoSuchElementException
        at java.util.LinkedList.removeFirst(LinkedList.java:268)
        at java.util.LinkedList.remove(LinkedList.java:683)
        at org.apache.spark.sql.execution.BufferedRowIterator.next(BufferedRowIterator.java:49)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.next(WholeStageCodegenExec.scala:374)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.next(WholeStageCodegenExec.scala:368)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:31)
        at com.comcast.emm.vodip.SparkSessionFiles.SparkSessionFiles$1.call(SparkSessionFiles.java:2480)
        at com.comcast.emm.vodip.SparkSessionFiles.SparkSessionFiles$1.call(SparkSessionFiles.java:2476)

1 Answer:

Answer 0 (score: 1):

You can do it like this:

 JavaRDD<mySchema> peopleRDD = spark.read()
  .textFile("file:///app/home/emm/zipfiles/myzips/")
  .javaRDD()
  .mapPartitionsWithIndex((index, iter) -> {
      if (index == 0 && iter.hasNext()) {
          iter.next();
          if (iter.hasNext()) {
              iter.next();
          }
      }
      return iter;
  }, true);
  ...

In Scala, the syntax is simpler. For example:

    rdd.mapPartitionsWithIndex { (idx, iter) => if (idx == 0) iter.drop(2) else iter }

EDIT:

I modified the code above to avoid the exception: the second iter.next() is now guarded by its own hasNext() check.

This code only removes the first two lines of the RDD as a whole, not of each file.

If you want to remove the first two lines of each file, I suggest building one RDD per file, applying .mapPartitionsWithIndex(...) to each of them, and then doing a union of all the RDDs, as sketched below.
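
A minimal sketch of that per-file approach, assuming the same local directory as in the question; the java.io.File listing and the pairwise JavaRDD.union(...) calls are illustrative choices, not taken from the original post:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.spark.api.java.JavaRDD;

    // Build one RDD per file, drop that file's first two lines, then union everything.
    File dir = new File("/app/home/emm/zipfiles/myzips/");   // assumed local directory
    List<JavaRDD<String>> perFile = new ArrayList<>();
    for (File f : dir.listFiles()) {
        JavaRDD<String> lines = spark.read()
            .textFile("file://" + f.getAbsolutePath())
            .javaRDD()
            .mapPartitionsWithIndex((index, iter) -> {
                if (index == 0) {                    // first partition of this single file
                    if (iter.hasNext()) iter.next(); // skip header line 1
                    if (iter.hasNext()) iter.next(); // skip header line 2
                }
                return iter;
            }, true);
        perFile.add(lines);
    }

    // Union the per-file RDDs back into one RDD with all headers removed.
    JavaRDD<String> allLines = perFile.get(0);
    for (int i = 1; i < perFile.size(); i++) {
        allLines = allLines.union(perFile.get(i));
    }

The question's .map(new Function<String, mySchema>() { ... }) step can then be applied to allLines before building the DataFrame and writing to the HIVE table.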