PySpark on Windows (upgraded from 1.6 to 2.0.2): sqlContext.read.format fails

Date: 2017-08-26 13:25:08

Tags: pyspark spark-dataframe

The following lines worked fine in 1.6 but fail in 2.0.2. Any idea what the problem might be?

file_name = "D:/ProgramFiles/spark-2.0.2-bin-hadoop2.3/data/mllib/sample_linear_regression_data.txt"
df_train = sqlContext.read.format("libsvm").load(file_name)

The error is:

  File "<ipython-input-4-e5510d6d3d6a>", line 1, in <module>
    df_train = sqlContext.read.format("libsvm").load("../data/mllib/sample_linear_regression_data.txt")

  File "D:\ProgramFiles\spark-2.0.2-bin-hadoop2.3\python\lib\pyspark.zip\pyspark\sql\readwriter.py", line 147, in load
    return self._df(self._jreader.load(path))

  File "D:\ProgramFiles\spark-2.0.2-bin-hadoop2.3\python\lib\py4j-0.10.3-src.zip\py4j\java_gateway.py", line 1133, in __call__
    answer, self.gateway_client, self.target_id, self.name)

  File "D:\ProgramFiles\spark-2.0.2-bin-hadoop2.3\python\lib\pyspark.zip\pyspark\sql\utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)

IllegalArgumentException: 'Can not create a Path from an empty string'

1 answer:

Answer 0 (score: 0)

This is probably due to this bug, which has since been fixed: https://github.com/apache/spark/pull/11775

It raised this 'empty string' error instead of one about an invalid path.

You are using a relative path, which gets resolved against a default directory that may have changed in your Spark 2 installation. Try setting the HADOOP_CONF_DIR environment variable, or use an absolute path instead of a relative one. For a local path, prefix it with file:///.
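
For example, a minimal sketch of the absolute-path suggestion, reusing the path and the sqlContext from the question (the .show() call is just a quick check and is not required):

# Point Spark explicitly at the local file with an absolute file:/// URI
file_name = "file:///D:/ProgramFiles/spark-2.0.2-bin-hadoop2.3/data/mllib/sample_linear_regression_data.txt"
df_train = sqlContext.read.format("libsvm").load(file_name)
df_train.show(5)  # verify the DataFrame was actually created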