I tried to load a local file, like this:
File = sc.textFile('file:///D:/Python/files/tit.csv')
File.count()
Full traceback:
IllegalArgumentException Traceback (most recent call last)
<ipython-input-72-a84ae28a29dc> in <module>()
----> 1 File.count()
/databricks/spark/python/pyspark/rdd.pyc in count(self)
1002 3
1003 """
-> 1004 return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
1005
1006 def stats(self):
/databricks/spark/python/pyspark/rdd.pyc in sum(self)
993 6.0
994 """
--> 995 return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
996
997 def count(self):
/databricks/spark/python/pyspark/rdd.pyc in fold(self, zeroValue, op)
867 # zeroValue provided to each partition is unique from the one provided
868 # to the final reduce call
--> 869 vals = self.mapPartitions(func).collect()
870 return reduce(op, vals, zeroValue)
871
/databricks/spark/python/pyspark/rdd.pyc in collect(self)
769 """
770 with SCCallSiteSync(self.context) as css:
--> 771 port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
772 return list(_load_from_socket(port, self._jrdd_deserializer))
773
/databricks/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
811 answer = self.gateway_client.send_command(command)
812 return_value = get_return_value(
--> 813 answer, self.gateway_client, self.target_id, self.name)
814
815 for temp_arg in temp_args:
/databricks/spark/python/pyspark/sql/utils.pyc in deco(*a, **kw)
51 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
52 if s.startswith('java.lang.IllegalArgumentException: '):
---> 53 raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
54 raise
55 return deco
IllegalArgumentException: u'java.net.URISyntaxException: Expected scheme-specific part at index 2: D:'
What's wrong? What I'm doing is quite ordinary; see, for example, load a local file to spark using sc.textFile() or How to load local file in sc.textFile, instead of HDFS. Those questions are about Scala, but if I'm not mistaken the same should apply to Python.
But:
val File = 'D:\\\Python\\files\\tit.csv'
SyntaxError: invalid syntax
File "<ipython-input-132-2a3878e0290d>", line 1
val File = 'D:\\\Python\\files\\tit.csv'
^
SyntaxError: invalid syntax
Answer 0 (score: 2)
Update: there seem to be known issues in Hadoop:
filenames with ':' colon throws java.lang.IllegalArgumentException
https://issues.apache.org/jira/browse/HDFS-13
and
Path should handle all characters
https://issues.apache.org/jira/browse/HADOOP-3257
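The underlying trigger appears to be that a Windows drive letter such as D: looks like a URI scheme to a URI parser, which is why java.net.URI complains that it "Expected scheme-specific part" right after the colon. As a rough illustration of the same ambiguity, here is Python's own URL parser applied to the path (a demonstration of the parsing problem only, not Spark code):

# Python 2 shown to match the .pyc traceback above.
# On Python 3, use: from urllib.parse import urlparse
from urlparse import urlparse

# The drive letter is consumed as the URI scheme, leaving a path that
# no longer refers to the D: drive at all.
print(urlparse('D:/Python/files/tit.csv'))
# ParseResult(scheme='d', netloc='', path='/Python/files/tit.csv', ...)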
在这个Q&amp; A中,有人设法用spark 2.0来克服它
Spark 2.0: Relative path in absolute URI (spark-warehouse)
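For reference, the workaround described in that Q&A amounts to giving Spark 2.0 an explicit, well-formed file: URI for its warehouse directory when the session is built. A minimal sketch, assuming the Spark 2.0 SparkSession API; the C:/tmp/spark-warehouse path is purely illustrative:

from pyspark.sql import SparkSession

# Configure spark.sql.warehouse.dir explicitly so Spark 2.0 does not
# derive a malformed URI from the Windows working directory.
# The warehouse path below is only an example.
spark = (SparkSession.builder
         .appName("local-csv-test")
         .config("spark.sql.warehouse.dir", "file:///C:/tmp/spark-warehouse")
         .getOrCreate())

rdd = spark.sparkContext.textFile("file:///D:/Python/files/tit.csv")
print(rdd.count())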
There are several issues in the question:
1) Accessing a local file on Windows from Python
File = sc.textFile('file:///D:/Python/files/tit.csv')
File.count()
Could you please try:
import os
inputfile = sc.textFile(os.path.normpath("file://D:/Python/files/tit.csv"))
inputfile.count()
os.path.normpath(path)
Normalize a pathname by collapsing redundant separators and up-level references so that A//B, A/B/, A/./B and A/foo/../B all become A/B. This string manipulation may change the meaning of a path that contains symbolic links. On Windows, it converts forward slashes to backward slashes. To normalize case, use normcase().
https://docs.python.org/2/library/os.path.html#os.path.normpath
The output is:
>>> os.path.normpath("file://D:/Python/files/tit.csv")
'file:\\D:\\Python\\files\\tit.csv'
2) Scala code tested in Python:
val File = 'D:\\\Python\\files\\tit.csv'
SyntaxError: invalid syntax
This code does not run in Python because it is Scala code.
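For comparison, the equivalent assignment in Python simply drops the val keyword; in a normal (non-raw) string literal each backslash must also be doubled. A small sketch (the variable name is arbitrary):

# Python equivalent of the Scala-style assignment above.
file_path = 'D:\\Python\\files\\tit.csv'   # escaped backslashes
file_path = r'D:\Python\files\tit.csv'     # or, equivalently, a raw string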
Answer 1 (score: 0)
I tried this:
import os
os.path.normpath("file:///D:/Python/files/tit.csv")
Out[131]: 'file:/D:/Python/files/tit.csv'
Then:
inputfile = sc.textFile(os.path.normpath("file:/D:/Python/files/tit.csv"))
inputfile.count()
IllegalArgumentException: u'java.net.URISyntaxException: Expected scheme-specific part at index 2: D:'
If I do it like this:
inputfile = sc.textFile(os.path.normpath("file:\\D:\\Python\\files\\tit.csv"))
inputfile.count()
IllegalArgumentException: u'java.net.URISyntaxException: Relative path in absolute URI: file:%5CD:%5CPython%5Cfiles%5Ctit.csv'
And like this:
os.path.normcase("file:///D:/Python/files/tit.csv")
Out[136]: 'file:///D:/Python/files/tit.csv'
inputfile = sc.textFile(os.path.normpath("file:///D:/Python/files/tit.csv"))
inputfile.count()
IllegalArgumentException: u'java.net.URISyntaxException: Expected scheme-specific part at index 2: D:'