I want to be able to select multiple columns of an RDD while applying a transformation to one of the values. I am able to:
- select specific columns
- apply a transformation to one of the columns

I am unable to apply both at the same time.

1) Select specific columns
from pyspark import SparkContext

logFile = "/FileStore/tables/tendulkar.csv"
rdd = sc.textFile(logFile)  # sc is the pre-created SparkContext on Databricks
rdd.map(lambda line: (line.split(",")[0],
                      line.split(",")[1],
                      line.split(",")[2])).take(4)
[('Runs', 'Mins', 'BF'),
('15', '28', '24'),
('DNB', '-', '-'),
('59', '254', '172')]
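As a side note, splitting each line once and slicing is equivalent and avoids calling split three times per line; a minimal sketch, assuming the same rdd as above:

rdd.map(lambda line: tuple(line.split(",")[:3])).take(4)  # split once, keep the first 3 fields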
2) Apply a transformation to the first column
df = (rdd.map(lambda line: line.split(",")[0])   # keep only the first column
         .filter(lambda x: x != "DNB")
         .filter(lambda x: x != "TDNB")
         .filter(lambda x: x != "absent")
         .map(lambda x: x.replace("*", "")))     # strip the "*" marker
df.take(4)
['Runs', '15', '59', '8']
I tried to do both together as follows:
rdd.map(lambda line: ((line.split(",")[0]).filter(lambda x: x != "DNB"),
                      line.split(",")[1], line.split(",")[2])).count()
I get the following error:
Py4JJavaError Traceback (most recent call last)
<command-2766458519992264> in <module>()
10 .map(lambda x: x.replace("*","")))
11
---> 12 rdd.map(lambda line: ( (line.split(",")[0]).filter(lambda x:x!="DNB"),line.split(",")[1],line.split(",")[2])).count()
/databricks/spark/python/pyspark/rdd.py in count(self)
1067 3
1068 """
-> 1069 return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
1070
1071 def stats(self):
Please help.

Regards,
Ganesh
Answer 0 (score: 1)
Your attempt fails because line.split(",")[0] is a plain Python string, and filter is a method of RDDs, not of strings. Just apply the filter after a map in which you select all the columns you need:
rdd.map(lambda line: line.split(",")[:3]) \
.filter(lambda x: x[0] not in ["DNB", "TDNB", "absent"])
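For completeness, a minimal sketch combining the column selection, the filter, and the "*" replacement from the question into one pipeline (cleaned is just an illustrative name, assuming the same rdd):

cleaned = (rdd.map(lambda line: line.split(",")[:3])                    # select the first 3 columns
              .filter(lambda x: x[0] not in ["DNB", "TDNB", "absent"])  # drop unwanted rows
              .map(lambda x: (x[0].replace("*", ""), x[1], x[2])))      # transform only column 0
cleaned.take(4)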