I saved these two files at the following paths:
C:\code\sample1\main.py:

def method():
    return "this is sample method 1"
C:\code\sample2\main.py:

def method():
    return "this is sample method 2"
Then I run:
from pyspark import SparkContext
from pyspark.sql import SparkSession
sc = SparkContext()
spark = SparkSession(sc)
sc.addPyFile("~/code/sample1/main.py")
main1 = __import__("main")
print(main1.method()) # this is sample method 1
sc.addPyFile("~/code/sample2/main.py") # Error
The error is:
Py4JJavaError: An error occurred while calling o21.addFile.
: org.apache.spark.SparkException: File C:\Users\hans.yulian\AppData\Local\Temp\spark-5da165cf-410f-4576-8124-0ab23aba6aa3\userFiles-25a7ca23-84fb-42b7-95d9-206867fb9dfd\main.py exists and does not match contents of /C:/Users/hans.yulian/Documents/spark-test/main2/main.py

This means there is already a "main.py" file in Spark's temp folder whose contents differ. I would like to know whether there is any workaround for this case; for my part, I have the following constraints:
Answer 0 (score: 4)
Technically speaking, you can work around this by setting spark.files.overwrite to "true":
from pyspark import SparkConf, SparkContext
sc = SparkContext(conf=SparkConf().set("spark.files.overwrite", "true"))
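As an aside (not part of the original answer), the same option can be passed through the SparkSession builder instead of a raw SparkConf; a minimal sketch, assuming Spark 2.x where SparkSession.builder is available:

from pyspark.sql import SparkSession

# Hypothetical equivalent setup: the builder forwards the config entry
# to the underlying SparkContext just like SparkConf().set(...) does.
spark = (SparkSession.builder
         .config("spark.files.overwrite", "true")
         .getOrCreate())
sc = spark.sparkContext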
In simple cases this will give the correct result:
def f(*_):
    from main import method
    return [method()]
sc.addFile("/path/to/sample1/main.py")
sc.parallelize([], 3).mapPartitions(f).collect()
['this is sample method 1',
'this is sample method 1',
'this is sample method 1']
sc.addFile("/path/to/sample2/main.py")
sc.parallelize([], 3).mapPartitions(f).collect()
['this is sample method 2',
'this is sample method 2',
'this is sample method 2']
However, it is not reliable in practice, even if you reload the module on each access, and it will make your application hard to reason about. Since Spark may implicitly cache certain objects, or transparently restart Python workers, you can easily end up in a situation where different nodes see different states of the sources.
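For completeness, a minimal sketch of the per-access reload mentioned above, using the standard-library importlib; as noted, even this does not make the approach dependable:

import importlib

def f(*_):
    # Force a fresh import of the shipped module on every call; this still
    # races against cached objects and transparently restarted workers.
    import main
    importlib.reload(main)
    return [main.method()]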