How to compute WoE efficiently in PySpark

Date: 2018-06-01 08:10:53

Tags: python-3.x apache-spark pyspark-sql apache-spark-ml

I want to compute the Weight of Evidence (WoE) of a feature column against a binary target column. Is there a way to do this efficiently in Spark?

As of now, Spark does not have any built-in API for computing WoE.
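For context, for each bucket of the feature the pipeline below computes

WoE(bucket) = ln( (non-events in bucket / total non-events) / (events in bucket / total events) )

i.e. ln(Prop_0 / Prop_1), with the undefined case (a bucket missing one of the two classes) mapped to 0.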

I built it out of a series of Spark SQL queries, as shown below (here `item` holds a single column name; the whole block runs once per column inside a for loop):

# 'a' is the bucketed source DataFrame, registered earlier as a temp view.
# Step 1: count rows per (feature value, target) pair.
new_df = spark.sql('Select `' + item + '`, `' + target_col + '`, count(*) as Counts from a group by `'
                   + item + '`, `' + target_col + '` order by `' + item + '`, `' + target_col + '`')
new_df.registerTempTable('b')

# Step 2: split the counts into one column per target class.
new_df2 = spark.sql('Select `' + item + '`, ' +
                    'case when `' + target_col + '` == 0 then Counts else 0 end as Count_0, ' +
                    'case when `' + target_col + '` == 1 then Counts else 0 end as Count_1 ' +
                    'from b')

spark.catalog.dropTempView('b')
new_df2.registerTempTable('c')

# Step 3: aggregate to one row per feature value.
new_df3 = spark.sql('SELECT `' + item + '`, SUM(Count_0) AS Count_0, ' +
                    'SUM(Count_1) AS Count_1 FROM c GROUP BY `' + item + '`')

spark.catalog.dropTempView('c')
new_df3.registerTempTable('d')

# Step 4: turn the per-bucket counts into proportions of the class totals.
new_df4 = spark.sql(
    'Select `' + item + '` as bucketed_col_of_source, Count_0/(select sum(d.Count_0) as sum from d) as Prop_0, ' +
    'Count_1/(select sum(d.Count_1) as sum from d) as Prop_1 from d')

spark.catalog.dropTempView('d')
new_df4.registerTempTable('e')

# Step 5: WoE = ln(Prop_0 / Prop_1), with the undefined (NULL) case mapped to 0.
new_df5 = spark.sql(
    'Select *, case when log(e.Prop_0/e.Prop_1) IS NULL then 0 else log(e.Prop_0/e.Prop_1) end as WoE from e')

spark.catalog.dropTempView('e')
new_df5.registerTempTable('WoE_table')

# Step 6: join the WoE values back onto the source data.
joined_Train_DF = spark.sql('Select bucketed.*, WoE_table.WoE as `' + item +
                            '_WoE` from a bucketed inner join WoE_table on bucketed.`' + item +
                            '` = WoE_table.bucketed_col_of_source')

It works as expected, but it is not efficient. On a dataset of only about 50,000 rows it fails with Java heap space and GC overhead limit exceeded errors.

Can someone help? WoE is a common requirement when doing feature engineering on a dataset.
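For reference, I suspect the same computation could be expressed as a single aggregation plus a broadcast join with the DataFrame API, avoiding the per-column temp views and the correlated subqueries. Here is a minimal sketch of that idea, which I have not verified at scale (it assumes the bucketed source DataFrame is available as a variable df, with item and target_col holding the column names as above):

from pyspark.sql import functions as F

# One aggregation per feature column: count events and non-events per bucket.
counts = (df.groupBy(item)
            .agg(F.sum(F.when(F.col(target_col) == 0, 1).otherwise(0)).alias('Count_0'),
                 F.sum(F.when(F.col(target_col) == 1, 1).otherwise(0)).alias('Count_1')))

# Grand totals, collected once to the driver instead of correlated subqueries.
totals = counts.agg(F.sum('Count_0').alias('t0'), F.sum('Count_1').alias('t1')).first()

# Per-bucket proportions and WoE = ln(Prop_0 / Prop_1), NULL mapped to 0.
woe = (counts
       .withColumn('Prop_0', F.col('Count_0') / F.lit(totals['t0']))
       .withColumn('Prop_1', F.col('Count_1') / F.lit(totals['t1']))
       .withColumn('WoE', F.coalesce(F.log(F.col('Prop_0') / F.col('Prop_1')), F.lit(0.0))))

# The WoE table is small (one row per bucket), so a broadcast join back
# onto the source data avoids a large shuffle.
joined = (df.join(F.broadcast(woe.select(item, 'WoE')), on=item, how='inner')
            .withColumnRenamed('WoE', item + '_WoE'))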

0 Answers:

No answers yet.