PySpark: Unsupported literal type class java.util.ArrayList

Date: 2018-01-13 17:05:47

Tags: python-3.x apache-spark pyspark spark-dataframe pyspark-sql

I am using Python 3 with Spark (2.2.0). I want to apply my UDF to a specified list of strings.

df = ['Apps A','Chrome', 'BBM', 'Apps B', 'Skype']

def calc_app(app, app_list):
    browser_list = ['Chrome', 'Firefox', 'Opera']
    chat_list = ['WhatsApp', 'BBM', 'Skype']
    sum = 0
    for data in app:
        name = data['name']
        if name in app_list:
            sum += 1
    return sum

calc_appUDF = udf(calc_app)
df = df.withColumn('app_browser', calc_appUDF(df['apps'], browser_list))
df = df.withColumn('app_chat', calc_appUDF(df['apps'], chat_list))

But it fails with: 'Unsupported literal type class java.util.ArrayList'

1 Answer:

Answer 0 (score: 0)

If I understood your requirement correctly, then you should try this:

from pyspark.sql.functions import udf, col

#sample data
df_list = ['Apps A','Chrome', 'BBM', 'Apps B', 'Skype']
df = sqlContext.createDataFrame([(l,) for l in df_list], ['apps'])
df.show()

#some lists definition
browser_list = ['Chrome', 'Firefox', 'Opera']
chat_list = ['WhatsApp', 'BBM', 'Skype']

#udf definition
def calc_app(app, app_list):
    if app in app_list:
        return 1
    else:
        return 0

def calc_appUDF(app_list):
    return udf(lambda l: calc_app(l, app_list))

#add new columns
df = df.withColumn('app_browser', calc_appUDF(browser_list)(col('apps')))
df = df.withColumn('app_chat', calc_appUDF(chat_list)(col('apps')))
df.show()

Sample input:

+------+
|  apps|
+------+
|Apps A|
|Chrome|
|   BBM|
|Apps B|
| Skype|
+------+

The output is:

+------+-----------+--------+
|  apps|app_browser|app_chat|
+------+-----------+--------+
|Apps A|          0|       0|
|Chrome|          1|       0|
|   BBM|          0|       1|
|Apps B|          0|       0|
| Skype|          0|       1|
+------+-----------+--------+
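The reason the wrapper in the answer works, while passing `browser_list` directly as a UDF argument does not, is that a plain Python list handed to a UDF call is treated as a column literal, which Spark cannot build from a `java.util.ArrayList`. A list captured in a Python closure, by contrast, never crosses into Spark at all. Here is a minimal plain-Python sketch of that closure pattern (no Spark required; `make_checker` is a hypothetical name used only for illustration):

```python
def make_checker(app_list):
    # app_list is captured in the Python closure, so it is never
    # shipped to Spark as a column literal -- only the resulting
    # one-argument function would be wrapped in a UDF.
    def check(app):
        return 1 if app in app_list else 0
    return check

# One curried checker per category, mirroring calc_appUDF above.
check_browser = make_checker(['Chrome', 'Firefox', 'Opera'])
check_chat = make_checker(['WhatsApp', 'BBM', 'Skype'])

print(check_browser('Chrome'))  # 1
print(check_browser('BBM'))     # 0
print(check_chat('Skype'))      # 1
```

As a side note, this particular membership test does not need a UDF at all: `Column.isin` accepts a plain Python list, so something like `df.withColumn('app_browser', col('apps').isin(browser_list).cast('int'))` should produce the same 0/1 flags using only built-in column operations.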