Writing HDFS output files with Scala

Date: 2016-05-13 18:13:46

Tags: scala apache-spark hdfs

I am trying to write an HDFS output file using Scala, and I get the following error:

Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:315)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:305)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1893)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:869)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:868)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.RDD.foreach(RDD.scala:868)
Caused by: java.io.NotSerializableException: java.io.PrintWriter
Serialization stack:

For every 23 input lines I need to write one line to the output file.

Source code:

package com.mycode.logs

import java.io.PrintWriter

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.spark.deploy.SparkHadoopUtil
import org.apache.spark.sql._
import org.apache.spark.sql.hive.HiveContext

/**
 * @author RondenaR
 * 
 */
object NormalizeMSLogs{

  def main(args: Array[String]){
    processMsLogs("/user/temporary/*file*")
  }

  def processMsLogs(path: String){
    System.out.println("INFO: ****************** started ******************")

    // **** SetMaster is Local only to test *****
    // Set context
    val sparkConf = new SparkConf().setAppName("tmp-logs").setMaster("local")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sc)
    val hiveContext = new HiveContext(sc)

    // Set HDFS
    System.setProperty("HADOOP_USER_NAME", "hdfs")
    val hdfsconf = SparkHadoopUtil.get.newConfiguration(sc.getConf)
    hdfsconf.set("fs.defaultFS", "hdfs://192.168.248.130:8020")
    val hdfs = FileSystem.get(hdfsconf)

    val output = hdfs.create(new Path("hdfs://192.168.248.130:8020/tmp/mySample.txt"))
    val writer = new PrintWriter(output)

    val sourcePath = new Path(path)
    var count: Int = 0
    var lineF: String = ""

    hdfs.globStatus( sourcePath ).foreach{ fileStatus =>
      val filePathName = fileStatus.getPath().toString()
      val fileName = fileStatus.getPath().getName()

      val hdfsfileIn = sc.textFile(filePathName)
      val msNode = fileName.substring(1, fileName.indexOf("es"))

      System.out.println("filePathName: " + filePathName)
      System.out.println("fileName: " + fileName)
      System.out.println("hdfsfileIn: " + filePathName)
      System.out.println("msNode: " + msNode)

      // Note: this loop is an RDD.foreach, so its closure is shipped to the
      // executors; it captures `writer` (a java.io.PrintWriter), which is what
      // triggers the "Task not serializable" exception above.
      for(line <- hdfsfileIn){
        //System.out.println("line = " + line)
        count += 1

        if(count != 23){
          lineF = lineF + line + ", "
        }

        if(count == 23){
          lineF = lineF + line + ", " + msNode
          System.out.println(lineF)
          writer.write(lineF) 
          writer.write("\n")
          count = 0
          lineF = ""
        }
      } // end for loop in file
    } // end foreach loop
    writer.close()
    System.out.println("INFO: ******************ended ******************")
    sc.stop()
  }
}

1 Answer:

Answer 0 (score: 1):

It is not just that the PrintWriter object writer is not serializable: you also cannot put the SparkContext (sc) inside a foreach. It is a driver-side construct, and sending it over the wire to the workers makes no sense.
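As a concrete illustration (a minimal sketch, not code from the question: it reuses the asker's writer, count, lineF and msNode and assumes the 23-line grouping stays unchanged), the simplest fix is to keep the PrintWriter on the driver and pull the lines back with RDD.toLocalIterator, so no Spark closure ever captures the writer:

// Iterate on the driver: toLocalIterator streams one partition at a time,
// so `writer`, `count`, `lineF` and `msNode` never leave the driver JVM.
val hdfsfileIn = sc.textFile(filePathName)

hdfsfileIn.toLocalIterator.foreach { line =>
  count += 1
  if (count != 23) {
    lineF = lineF + line + ", "
  } else {
    writer.write(lineF + line + ", " + msNode + "\n")
    count = 0
    lineF = ""
  }
}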

You should spend some time thinking about which kinds of objects make sense to send across the network. Pointers, streams, and handles make no sense; structs, strings, and primitives do make sense to include in a closure (or to broadcast).
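If the writing really must happen on the executors instead, a different sketch (again an assumption, not part of this answer: the per-partition file names and the reuse of the question's namenode URI are mine) is to build the FileSystem and the PrintWriter inside foreachPartition, so the closure captures only serializable values:

hdfsfileIn.foreachPartition { lines =>
  // Runs on the executor; nothing non-serializable crosses the wire.
  val conf = new org.apache.hadoop.conf.Configuration()
  conf.set("fs.defaultFS", "hdfs://192.168.248.130:8020")
  val fs = org.apache.hadoop.fs.FileSystem.get(conf)
  // One output file per partition to avoid concurrent writes to a single path.
  val out = fs.create(new org.apache.hadoop.fs.Path(
    s"/tmp/mySample-${java.util.UUID.randomUUID}.txt"))
  val writer = new java.io.PrintWriter(out)
  try lines.foreach(line => writer.println(line))
  finally writer.close()
}

Strings such as msNode are fine to capture in such a closure; handles like the FileSystem and the PrintWriter are not, which is why they are constructed inside it.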