pyspark Column is not iterable when using withColumn

Asked: 2019-10-30 22:07:29

Tags: apache-spark pyspark apache-spark-sql

Why am I getting a "Column is not iterable" error with this pyspark code?

cost_allocation_df = cost_allocation_df.withColumn(
    'resource_tags_user_engagement',
    f.when(
        (f.col('line_item_usage_account_id') == '123456789101', '1098765432101') &
        (f.col('resource_tags_user_engagement') == '') |
        (f.col('resource_tags_user_engagement').isNull()) |
        (f.col('resource_tags_user_engagement').rlike('^[a-zA-Z]')),
        '10546656565'
    ).otherwise(f.col('resource_tags_user_engagement'))
)

1 Answer:

Answer 0 (score: 0)

You are comparing a column directly against two values at once, which will not work. You have to use lit() to turn each value into a column and compare against them one at a time.

Try changing your code to:

cost_allocation_df = cost_allocation_df.withColumn(
    'resource_tags_user_engagement',
    f.when(
        ((f.col('line_item_usage_account_id') == f.lit('123456789101')) |
         (f.col('line_item_usage_account_id') == f.lit('1098765432101'))) &
        (f.col('resource_tags_user_engagement') == f.lit('')) |
        (f.col('resource_tags_user_engagement').isNull()) |
        (f.col('resource_tags_user_engagement').rlike('^[a-zA-Z]')),
        '10546656565'
    ).otherwise(f.col('resource_tags_user_engagement'))
)
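
To see why this version is accepted, here is a minimal, self-contained sketch on a made-up two-row DataFrame. The column names match the post, but the sample values, the SparkSession setup, and the simplified condition are assumptions added for illustration only:

from pyspark.sql import SparkSession
import pyspark.sql.functions as f

spark = SparkSession.builder.master('local[1]').appName('lit-demo').getOrCreate()

# Toy data (assumed values): first row should be overwritten, second kept.
df = spark.createDataFrame(
    [('123456789101', ''), ('999999999999', 'abc123')],
    ['line_item_usage_account_id', 'resource_tags_user_engagement'],
)

# Each account id is wrapped in f.lit(), so every comparison yields a Column
# and the whole when(...) expression stays a single Column that withColumn accepts.
df = df.withColumn(
    'resource_tags_user_engagement',
    f.when(
        ((f.col('line_item_usage_account_id') == f.lit('123456789101')) |
         (f.col('line_item_usage_account_id') == f.lit('1098765432101'))) &
        (f.col('resource_tags_user_engagement') == f.lit('')),
        '10546656565',
    ).otherwise(f.col('resource_tags_user_engagement')),
)

df.show()

If the goal is only to match either of the two account ids, Column.isin('123456789101', '1098765432101') is a common alternative to chaining == comparisons, but the lit() form above follows the answer as written.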