I have a list of files in an HDFS directory. I want to iterate over the files in that HDFS directory in PySpark, store each file path in a variable, and use that variable for further processing. I am getting the error below.
py4j.protocol.Py4JError: An error occurred while calling z:org.apache.spark.api.python.PythonUtils.toSeq. Trace:
py4j.Py4JException: Method toSeq([class org.apache.hadoop.fs.Path]) does not exist
InputDir = "/Data/Ready/ARRAY_COUNTERS"
# Input HDFS directory.
hadoop = sc._jvm.org.apache.hadoop
fs = hadoop.fs.FileSystem
conf = hadoop.conf.Configuration()
path = hadoop.fs.Path(InputDir)
for f in fs.get(conf).listStatus(path):
    Filename = f.getPath()
    df = spark.read.csv(Filename, header=True)
    # I am getting the above error while reading this file.
Answer 0 (score: 1)
Regarding these two lines:
Filename = f.getPath()
df = spark.read.csv(Filename,header=True)
getPath() does not return a string; it returns a Java Path object. Also, f may be a directory, so to make sure you are not trying to load a directory, you can add a check with f.isFile():
if f.isFile():
    Filename = f.getPath()
    df = spark.read.csv(str(Filename), header=True)
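For completeness, here is how the full loop from the question would look with those fixes applied (a sketch, assuming the usual sc SparkContext and spark SparkSession; str() on the Java Path calls its toString(), which yields the full file URI that spark.read.csv accepts):

hadoop = sc._jvm.org.apache.hadoop
fs = hadoop.fs.FileSystem
conf = hadoop.conf.Configuration()
path = hadoop.fs.Path("/Data/Ready/ARRAY_COUNTERS")

for f in fs.get(conf).listStatus(path):
    if f.isFile():
        # getPath() returns a Java Path object, not a string;
        # str() converts it to the full file URI
        Filename = f.getPath()
        df = spark.read.csv(str(Filename), header=True)
        # ... further processing of df here ...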
An alternative that worked for me is:
if f.isFile():
    Filename = f.getPath()
    df = sc.textFile(str(Filename), 500).map(lambda x: x.split(", "))  # or any other separator; returns an RDD
    headers = df.first()  # to infer the schema - you can then convert it to a PySpark dataframe with specific column types
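From there, one way to get a DataFrame is to drop the header row and reuse it as column names (a minimal sketch, assuming every column can stay a string; cast columns afterwards as needed):

data = df.filter(lambda row: row != headers)  # drop the header row
sdf = data.toDF(headers)                      # header values become column names
sdf.printSchema()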