How to create a dataframe from a specific part of a text log file using pyspark

Time: 2018-05-09 06:03:21

Tags: python apache-spark pyspark spark-dataframe

I am new to pyspark... I have a large log file that contains data like the following:

sfdfd
fsdfsdffdhfgjgfjkyklhljk,erygrt,tegtyryu ,.
sgsgggggfsdf

==========================================  
Roll Name   class  
==========================================  
1     avb    wer21g2
------------------------------------------  

===========================================  
empcode   Emnname   Dept   Address   
===========================================  
12d      sf        sdf22    dghsjf  
asf2    asdfw2     df21df   fsfsfg  
dsf21   sdf2       df2      sdgfsgf  
------------------------------------------- 

Now I want to split this file into multiple RDDs / DataFrames using Spark and Python (PySpark). I can do this in Scala using newAPIHadoopFile, and now I want to do it in PySpark. Can anyone help me with this?

Expected output is:

Roll Name class  
1   avb   wer21g2  


empcode   Emnname   Dept   Address  
12d      sf        sdf22    dghsjf  
asf2    asdfw2     df21df   fsfsfg  
dsf21   sdf2       df2      sdgfsgf  

Here is the code I have tried:

from pyspark.sql.types import StringType

# path points to the log file; findStr holds the header line of the block I want,
# EndStr holds the line that terminates that block.
with open(path) as f:
    out = []
    for line in f:
        if line.rstrip() == findStr:
            tmp = []
            tmp.append(line)
            for line in f:
                if line.rstrip() == EndStr:
                    out.append(tmp)
                    break
                tmp.append(line)

SMN_df = spark.createDataFrame(tmp, StringType()).show(truncate=False)
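
For context, createDataFrame(tmp, StringType()) turns every captured line into a single string column, so the result is one "value" column holding the raw text rather than the multi-column table shown above. A minimal sketch of what that call produces, using one hard-coded sample line instead of my real file (it reuses the spark session and the StringType import from the snippet above):

# Each raw line becomes one row in a single string column named "value".
sample = ["1     avb    wer21g2"]
spark.createDataFrame(sample, StringType()).show(truncate=False)
# The columns Roll / Name / class are not separated, which is why the
# output does not match the expected table.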

I am able to create a dataframe, but I am not getting the expected output. Can anyone help me?

Please see the attached screenshot of the dataset for more details.

1 Answer:

Answer 0: (score: 0)

from pyspark.sql import SparkSession
import re


spark = (SparkSession.builder
         .config("spark.sql.warehouse.dir", "file://C:/temp")
         .appName("SparkSQL")
         .getOrCreate())

path = "C:/Users/Rudrashis/Desktop/test2.txt"   # input log file
Txtpath = "L:/SparkScala/test.csv"              # intermediate CSV read back by Spark
# EndStr / FilterStr must exactly match the separator lines in the log file (after stripping)
EndStr = "---------------------------------"
FilterStr = "================================="


def prepareDataset(Findstr):
    """Collect the block that starts at the header line Findstr and ends at EndStr,
    replacing runs of whitespace with commas so each line becomes a CSV row."""
    with open(path) as f:
        for line in f:
            if line.rstrip() == Findstr:
                tmp = [re.sub(r"\s+", ",", line.strip())]   # header row
                for line in f:
                    if line.rstrip() == EndStr:             # end of the block
                        break
                    tmp.append(re.sub(r"\s+", ",", line.strip()))
                return tmp


def Makesv(Lstcommon):
    """Write the collected rows to the intermediate CSV file that Spark reads back."""
    with open(Txtpath, "w") as outfile:
        for entries in map(str.strip, Lstcommon):
            outfile.write(entries + "\n")


### For 1st block ################
# The string passed here must exactly match the header line in the log file
LstStudent = prepareDataset("Roll  Name  Class")
LstStudent = list(filter(lambda a: a != FilterStr, LstStudent))   # drop the "====" separator
Makesv(LstStudent)

Student_DF = (spark.read.format('com.databricks.spark.csv')
              .options(header="true", inferschema="true")
              .load(Txtpath))
Student_DF.show(truncate=False)
######### end 1st block ####

##### 2nd block start ####
LstEmp = prepareDataset("empcode   Emnname   Dept   Address")
LstEmp = list(filter(lambda a: a != FilterStr, LstEmp))
Makesv(LstEmp)

Emp_DF = (spark.read.format('com.databricks.spark.csv')
          .options(header="true", inferschema="true")
          .load(Txtpath))
Emp_DF.show(truncate=False)

##### end of 2nd block #####
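
If you want to skip the intermediate CSV file, the DataFrame can also be built directly from the parsed rows. A minimal sketch, assuming the first element returned by prepareDataset is the comma-joined header line (the toDataFrame helper name below is hypothetical):

def toDataFrame(rows):
    # rows[0] is the comma-joined header; the remaining entries are the data lines
    header = rows[0].split(",")
    data = [r.split(",") for r in rows[1:]]
    # With a list of column names as the schema, all columns come back as strings
    return spark.createDataFrame(data, schema=header)

Student_DF2 = toDataFrame(LstStudent)
Student_DF2.show(truncate=False)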