Parsing log files that contain text between square brackets, using Spark

Asked: 2015-01-19 17:49:59

Tags: python hadoop apache-spark

I am trying to use Spark and Python to parse log files stored in HDFS in which every field is enclosed in [].

e.g. [abcd] [cdef] [...] [....]

How can I use the split function for this purpose? This is what I have so far:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="Log.py")
sqlContext = SQLContext(sc)
lines = sc.textFile("/user/abcd/abcd.log.................")
parts = lines.map(lambda l: l.split(" "))
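
For illustration, with the split above every field keeps its brackets (a minimal check on the example line):

>>> "[abcd] [cdef]".split(" ")
['[abcd]', '[cdef]']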

Any pointers on how to use the split function here would be helpful.


EDIT

I applied the changes, but now I get the following error. Any suggestions?

IndexError: list index out of range
        org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:124)
        org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
        org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:260)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:744)
Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
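
The failing records are not printed in the trace, but a line shaped like the example above (bracketed fields separated by spaces) reproduces the error locally. This is an assumption about the input, not taken from the actual data:

>>> parts = "[abcd] [cdef]"[1:-1].split("][")
>>> parts
['abcd] [cdef']
>>> parts[1]
Traceback (most recent call last):
  ...
IndexError: list index out of range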

2 Answers:

Answer 0 (score: 1)

parts = lines.map(lambda l: l[1:-1].split("]["))
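
A minimal sketch of what this expression produces, assuming the bracketed fields abut each other directly; the space-separated variant from the question's example is shown for comparison:

>>> "[abcd][cdef]"[1:-1].split("][")     # fields abut directly: brackets are stripped cleanly
['abcd', 'cdef']
>>> "[abcd] [cdef]"[1:-1].split("] [")   # fields separated by a space, as in the question's example
['abcd', 'cdef']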

Answer 1 (score: 0)

  • Sample code:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext, StructType, StructField, StringType

    sc = SparkContext(appName="SampleLogAnalysis.py")
    sqlContext = SQLContext(sc)
    lines = sc.textFile("/user/root/abc.log.14")
    parts = lines.map(lambda l: l[1:-1].split("]["))
    people = parts.map(lambda p: (p[0], p[1]))

    # the two extracted fields are named col1 and col2
    schemaString = "col1 col2"
    fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
    schema = StructType(fields)
    schemaPeople = sqlContext.applySchema(people, schema)
    schemaPeople.registerTempTable("people")

    # query one of the columns defined in the schema
    results = sqlContext.sql("SELECT col1 FROM people")
    names = results.map(lambda p: "Name: " + p.col1)
    for name in names.collect():
        print name
  • Sample log file:

    [29 Dec 2014 12:42:46,354] [Thread-4] [DEBUG] [root] [taskname-1] Thread-4: Reusing connection
    [29 Dec 2014 12:42:46,362] [Thread-2] [DEBUG] [root] [tasknam-2] Thread-2: Writing remote call header...
    [29 Dec 2014 12:42:46,353] [Thread-9] [DEBUG] [root] [taskname-1] Thread-9: Writing remote call header...
    [29 Dec 2014 12:42:46,368] [Thread-2] [DEBUG] [root] [taskname-1] Thread-2: Getting output stream
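
A possible way to make the parsing robust for lines like these, reusing the lines RDD from the sample code above, is to pull every bracketed field out with a regular expression and drop lines that yield too few fields. This is only a sketch; the regex and the parse_line helper are assumptions, not part of the original code:

import re

# capture the text inside every [...] pair on a line
def parse_line(line):
    return re.findall(r"\[([^\]]*)\]", line)

parts = lines.map(parse_line)
# keep only lines with at least two bracketed fields, so p[0]/p[1] cannot
# raise "list index out of range"
people = parts.filter(lambda p: len(p) >= 2).map(lambda p: (p[0], p[1]))

The free text after the last closing bracket (e.g. Thread-4: Reusing connection) is not captured by this pattern and would need to be extracted separately if it matters.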