How do I read a text file line by line in Scala (Spark), split it on a delimiter, and store the values in separate columns?

Asked: 2017-09-21 13:44:21

Tags: scala apache-spark

I am new to Scala.

My requirement is to read the file line by line, split each line on a specific delimiter, and extract the values into the corresponding columns of separate files.

Below is a sample of my input data:

ABC Log

Aug 10 14:36:52 127.0.0.1 CEF:0|McAfee|ePolicy Orchestrator|IFSSLCRT0.5.0.5/epo4.0|2410|DeploymentTask|High  eventId=34 externalId=23
Aug 10 15:45:56 127.0.0.1 CEF:0|McAfee|ePolicy Orchestrator|IFSSLCRT0.5.0.5/epo4.0|2890|DeploymentTask|Medium eventId=888 externalId=7788
Aug 10 16:40:59 127.0.0.1 CEF:0|NV|ePolicy Orchestrator|IFSSLCRT0.5.0.5/epo4.0|2990|DeploymentTask|Low eventId=989 externalId=0004


XYZ Log

Aug 15 14:32:15 142.101.36.118 cef[10612]: CEF:0|fire|cc|3.5.1|FireEye Acquisition Started
Aug 16 16:45:10 142.101.36.189 cef[10612]: CEF:0|cold|dd|3.5.4|FireEye Acquisition Started
Aug 18 19:50:20 142.101.36.190 cef[10612]: CEF:0|fire|ee|3.5.6|FireEye Acquisition Started

In the data above, I need to read the first section, under the 'ABC Log' header, extract the values from each line, and place them under the corresponding columns. Most of the column names are hardcoded, but the last columns I need to extract by splitting on "=", i.e. eventId=34 externalId=23 => col=eventId value=34 and col=externalId value=23 (see the sketch after the column list below).

Column names 

date time ip_address col1 col2 col3 col4 col5
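For the trailing key=value pairs, a minimal standalone sketch of the "=" split described above (the tail string is copied from the sample data; the names are illustrative):

// Split a key=value tail such as "eventId=34 externalId=23" into
// (column, value) pairs: first on whitespace, then on the first "=".
val tail = "eventId=34 externalId=23"
val pairs = tail.split("\\s+").map { kv =>
  val Array(col, value) = kv.split("=", 2)
  (col, value)
}
// pairs: Array((eventId,34), (externalId,23))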

I want the output to look like this:

This is for the first section, 'ABC Log'; it should go into its own file, and the same applies to the other sections.

date    time      ip_address  col1   col2    col3                  col4                    col5  col6            col7
Aug 10  14:36:52  127.0.0.1   CEF:0  McAfee  ePolicy Orchestrator  IFSSLCRT0.5.0.5/epo4.0  2410  DeploymentTask  High
Aug 10  15:45:56  127.0.0.1   CEF:0  McAfee  ePolicy Orchestrator  IFSSLCRT0.5.0.5/epo4.0  2890  DeploymentTask  Medium

Here is the code I have been trying:

package AV_POC_Parsing

import org.apache.spark.{SparkConf, SparkContext}

object LogParsing {

  def main(args: Array[String]): Unit = {

    // Create a Spark context with a Spark configuration.
    val sc = new SparkContext(new SparkConf().setAppName("AV_Log_Processing").setMaster("local[*]"))

    // Read the text file into a Spark RDD.
    val textFile = sc.textFile("input.txt")

    // Split each line on single spaces: splitRdd is an RDD[Array[String]].
    val splitRdd = textFile.map(line => line.split(" "))

    // Print the split values.
    splitRdd.foreach { x => x.foreach { y => println(y) } }

    // How do I store the split values in separate columns and write them to a file?
  }
}

How do I split on two delimiters in Scala?
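(For reference, Scala's String.split takes a regular expression, so two delimiters can go into a single character class; a minimal sketch on a fragment of the sample data:)

// Split on either a space or a pipe via a regex character class.
val parts = "CEF:0|McAfee|ePolicy Orchestrator".split("[ |]")
// parts: Array(CEF:0, McAfee, ePolicy, Orchestrator)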

Thanks.

1 Answer:

Answer 0 (score: 2)

Maybe this will help you.

import org.apache.spark.{SparkConf, SparkContext}

object DataFilter {

  def main(args: Array[String]): Unit = {

    // Create a Spark context with a Spark configuration.
    val sc = new SparkContext(new SparkConf().setAppName("AV_Log_Processing").setMaster("local[*]"))

    // Read the text file into a Spark RDD.
    val textFile = sc.textFile("input.txt")

    // Split each line on both delimiters (space and pipe), keep "Aug 10"
    // together as one date field, and rejoin the last ten fields with tabs.
    val splitRdd = textFile.map { s =>
      val a = s.split("[ |]")
      val date = Array(a(0) + " " + a(1))
      (date ++ a.takeRight(10)).mkString("\t")
    }
    // splitRdd is an RDD[String]: one tab-separated row per input line.

    // Print the rows.
    splitRdd.foreach(println)
  }
}
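To also write the tab-separated rows to a file, one option is RDD.saveAsTextFile; a minimal sketch, assuming the splitRdd above (the output path is illustrative):

// Spark writes a directory of part-files; coalesce(1) forces a single
// part-file, which is fine for a sample this small.
splitRdd.coalesce(1).saveAsTextFile("output/abc_log")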