PySpark's addPyFile method makes SparkContext None

Date: 2015-08-29 09:37:28

Tags: python apache-spark pyspark

I've been trying to do this. In the PySpark shell, I get the SparkContext as sc. But when I use sc's addPyFile method, the "SparkContext" it gives me back is:

None

What's going wrong?

1 Answer:

Answer 0 (score: 5):

Here is the source code to pyspark's (v1.1.1) addPyFile, quoted at the end of this answer. (The source link for 1.4.1 in the official pyspark documentation was broken at the time I wrote this.)

It returns None because there is no return statement. See also: in Python, if a function doesn't have a return statement, what does it return?
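As a quick illustration of that point (plain Python, nothing PySpark-specific; the function name is made up for the example), any function whose body never reaches a return statement implicitly returns None:

def no_return(x):
    # does some work but never executes a `return` statement
    y = x * 2

print(no_return(21))            # prints: None
print(no_return(21) is None)    # prints: True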

So if you do sc2 = sc.addPyFile("mymodule.py"), of course sc2 will be None, because .addPyFile() doesn't return anything!

Instead, just call sc.addPyFile("mymodule.py") and keep using sc as your SparkContext (see the short usage sketch after the quoted source below).

def addPyFile(self, path):
    """
    Add a .py or .zip dependency for all tasks to be executed on this
    SparkContext in the future.  The C{path} passed can be either a local
    file, a file in HDFS (or other Hadoop-supported filesystems), or an
    HTTP, HTTPS or FTP URI.
    """
    self.addFile(path)
    (dirname, filename) = os.path.split(path)  # dirname may be directory or HDFS/S3 prefix

    if filename.endswith('.zip') or filename.endswith('.ZIP') or filename.endswith('.egg'):
        self._python_includes.append(filename)
        # for tests in local mode
        sys.path.append(os.path.join(SparkFiles.getRootDirectory(), filename))