Pyspark UDF broadcast variable undefined only when imported from a separate script

Date: 2017-03-07 14:03:17

Tags: apache-spark pyspark nameerror udf spark-submit

Below are two minimal working example scripts, both of which call a UDF in pyspark. The UDF depends on a broadcast dictionary, which it uses to map a column to a new column. The complete working example that produces the correct output is:

# default_sparkjob.py

from pyspark.sql.types import *
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext, DataFrame
import pyspark.sql.functions as F

def _transform_df(sc, df):
    global mapping
    mapping = {1:'First', 2:'Second', 3:'Third'}
    mapping = sc.broadcast(mapping)

    udf_implement_map = F.udf(_implement_map, StringType())
    df = df.withColumn('Mapped', udf_implement_map('A'))
    return df

def _implement_map(column):
    return mapping.value[column]

if __name__ == "__main__":

    #_____________________________________________________________________________
    sc = SparkContext()
    sqlContext = SQLContext(sc)
    #_____________________________________________________________________________

    import pandas as pd
    pd_df = pd.DataFrame.from_dict( {'A':[1,2,3], 'B':['a','b','c']} )
    sp_df = sqlContext.createDataFrame(pd_df)

    sp_df = _transform_df(sc, sp_df)
    sp_df.show()

# OUTPUT:
#+---+---+------+
#|  A|  B|Mapped|
#+---+---+------+
#|  1|  a| First|
#|  2|  b|Second|
#|  3|  c| Third|
#+---+---+------+

However, if the function is imported and used from a separate script, it reports that mapping is not defined:

# calling_sparkjob.py

if __name__ == "__main__":

    #_____________________________________________________________________________
    from pyspark.sql.types import *
    from pyspark import SparkContext, SparkConf
    from pyspark.sql import SQLContext, DataFrame
    import pyspark.sql.functions as F

    sc = SparkContext(pyFiles=['default_sparkjob.py'])
    sqlContext = SQLContext(sc)
    #_____________________________________________________________________________

    from default_sparkjob import _transform_df
    import pandas as pd
    pd_df = pd.DataFrame.from_dict( {'A':[1,2,3], 'B':['a','b','c']} )
    sp_df = sqlContext.createDataFrame(pd_df)

    sp_df = _transform_df(sc, sp_df)
    sp_df.show()

    # File "default_sparkjob.py", line 17, in _implement_map
    # return mapping.value[column]
    # NameError: global name 'mapping' is not defined

Can anyone explain why this happens? This is currently a major blocker in the real version of this code, which imports many functions that rely on UDFs from external files. Is there a namespace issue I'm not understanding?

Many thanks.

1 Answer:

Answer 0 (score: 1)

I had the same problem: once the function was imported from another file, the program raised the error.

I don't know whether you have found a solution by now, but I found a good workaround.

You can convert the dict to a string, add that string to the dataframe as a new column with F.lit(str), and then, inside the udf, use ast.literal_eval to convert the string back to a dict and use it there.

Maybe looking at the code will make it clearer:

# default_sparkjob.py

import ast

from pyspark.sql.types import *
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext, DataFrame
import pyspark.sql.functions as F

def _transform_df(sc, df):
    # global mapping
    mapping = {1:'First', 2:'Second', 3:'Third'}
    # mapping = sc.broadcast(mapping)
    df = df.withColumn('mapping_config', F.lit(str(mapping)))

    udf_implement_map = F.udf(_implement_map, StringType())
    df = df.withColumn('Mapped', udf_implement_map('A', 'mapping_config'))
    return df

def _implement_map(column, mapping_config):
    mapping_ = ast.literal_eval(mapping_config)
    return mapping_[column]

Then running your calling_sparkjob.py unchanged gives the correct result:

+---+---+--------------------+------+
|  A|  B|      mapping_config|Mapped|
+---+---+--------------------+------+
|  1|  a|{1: 'First', 2: '...| First|
|  2|  b|{1: 'First', 2: '...|Second|
|  3|  c|{1: 'First', 2: '...| Third|
+---+---+--------------------+------+
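
A note on the "why", as far as I understand it: when _implement_map is defined in the script you run directly, PySpark's pickler serializes the function from __main__ by value, together with the module-level mapping it references, so the workers see it. When the function lives in an imported module, it is pickled by reference instead, and each worker re-imports default_sparkjob, where mapping was never assigned (the global assignment only ever happened in the driver's copy of the module). If you want to keep an actual broadcast variable, an alternative is to capture it in a closure instead of a global. A minimal sketch, untested against your exact setup:

# default_sparkjob.py (closure-based sketch)

from pyspark.sql.types import StringType
import pyspark.sql.functions as F

def _transform_df(sc, df):
    mapping = {1: 'First', 2: 'Second', 3: 'Third'}
    mapping_bc = sc.broadcast(mapping)

    # The lambda captures mapping_bc in its closure, so the broadcast
    # handle is serialized along with the function and resolves on the
    # workers even when this module is imported from another script.
    udf_implement_map = F.udf(lambda col: mapping_bc.value[col], StringType())
    return df.withColumn('Mapped', udf_implement_map('A'))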