I'm building a parser that takes a raw text file of "key"="value" pairs and writes it out to a tabular/.csv structure using PySpark.
Where I'm stuck: inside my function I can access each record's keys and values to construct a csv_row, and even check each key against the list of expected keys (col_list), but when I call that function (processRecord) from a lambda I can't figure out how to append each csv_row to the global list l_of_l, which is meant to hold the final list of .csv rows.
How do I iterate over every record of the RDD in key/value form and parse it into .csv format? As you can see, my final list of lists (l_of_l) ends up empty, even though I can print each row inside the loop... frustrating.
Any suggestions appreciated!
Raw text structure (foo.log):
"A"="foo","B"="bar","C"="baz"
"A"="oof","B"="rab","C"="zab"
"A"="aaa","B"="bbb","C"="zzz"
Approach so far:
from pyspark import SparkContext
from pyspark import SQLContext
from pyspark.sql import Row
sc=SparkContext('local','foobar')
sql = SQLContext(sc)
# Read raw text to RDD
lines=sc.textFile('foo.log')
records=lines.map(lambda x: x.replace('"', '').split(","))
print 'Records pre-transform:\n'
print records.take(100)
print '------------------------------\n'
def processRecord(record, col_list):
    csv_row=[]
    for idx, val in enumerate(record):
        key, value = val.split('=')
        if(key==col_list[idx]):
            # print 'Col name match'
            # print value
            csv_row.append(value)
        else:
            csv_row.append(None)
            print 'Key-to-Column Mismatch, dropping value.'
    print csv_row
    global l_of_l
    l_of_l.append(csv_row)
l_of_l=[]
colList=['A', 'B', 'C']
records.foreach(lambda x: processRecord(x, col_list=colList))
print 'Final list of lists:\n'
print l_of_l
Output:
Records pre-transform:
[[u'A=foo', u'B=bar', u'C=baz'], [u'A=oof', u'B=rab', u'C=zab'], [u'A=aaa', u'B=bbb', u'C=zzz']]
------------------------------
[u'foo', u'bar', u'baz']
[u'oof', u'rab', u'zab']
[u'aaa', u'bbb', u'zzz']
Final list of lists:
[]
Answer 0 (score: 1)
Try this function:
def processRecord(record, col_list):
    csv_row=list()
    for idx, val in enumerate(record):
        key, value = val.split('=')
        if(key==col_list[idx]):
            # print 'Col name match'
            # print value
            csv_row.append(value)
        else:
            csv_row.append(None)
            # print 'Key-to-Column Mismatch, dropping value.'
    return csv_row
Then, instead of mutating a driver-side global from foreach (that closure runs on the executors, so the driver's l_of_l never gets updated), map the records and collect the result back to the driver:
colList=['A', 'B', 'C']
l_of_l = records.map(lambda x: processRecord(x, col_list=colList)).collect()
print 'Final list of lists:\n'
print l_of_l
should give:
Final list of lists:
[[u'foo', u'bar', u'baz'], [u'oof', u'rab', u'zab'], [u'aaa', u'bbb', u'zzz']]
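If the end goal really is a tabular/.csv output, a possible next step is to skip collect() entirely and build a DataFrame from the parsed RDD. This is only a sketch: it assumes Spark 2.x, where DataFrameWriter.csv is available (on 1.x you would need the spark-csv package instead), and the output directory name foo_out is made up.

parsed = records.map(lambda x: processRecord(x, col_list=colList))
df = sql.createDataFrame(parsed, colList)  # column names taken from colList, types inferred
df.show()
# Write the rows out as CSV files under the (hypothetical) directory foo_out
df.write.csv('foo_out', header=True)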