pyspark.ml pipelines: are custom transformers necessary for basic preprocessing tasks?

Asked: 2018-04-09 13:38:27

Tags: python apache-spark machine-learning pyspark data-science

Getting started with pyspark.ml and the Pipeline API, I find myself writing custom transformers for typical preprocessing tasks so that I can use them in a pipeline. Examples:

from pyspark.ml import Pipeline, Transformer


class CustomTransformer(Transformer):
    # lazy workaround - a transformer needs to have these attributes
    _defaultParamMap = dict()
    _paramMap = dict()
    _params = dict()

class ColumnSelector(CustomTransformer):
    """Transformer that selects a subset of columns,
    to be used as a pipeline stage."""

    def __init__(self, columns):
        self.columns = columns

    def _transform(self, data):
        return data.select(self.columns)


class ColumnRenamer(CustomTransformer):
    """Transformer that renames one column."""

    def __init__(self, rename):
        self.rename = rename

    def _transform(self, data):
        (colNameBefore, colNameAfter) = self.rename
        return data.withColumnRenamed(colNameBefore, colNameAfter)


class NaDropper(CustomTransformer):
    """
    Drops rows with at least one null (missing) element
    """

    def __init__(self, cols=None):
        self.cols = cols

    def _transform(self, data):
        return data.dropna(subset=self.cols)


class ColumnCaster(CustomTransformer):
    """Transformer that casts one column to a given type."""

    def __init__(self, col, toType):
        self.col = col
        self.toType = toType

    def _transform(self, data):
        return data.withColumn(self.col, data[self.col].cast(self.toType))

They work, but I am wondering: is this a pattern or an antipattern - are such transformers a good way to work with the Pipeline API? Is it necessary to implement them, or is equivalent functionality provided somewhere else?

1 Answer:

Answer 0 (score: 2)

I think this is primarily opinion-based, although the approach does look unnecessarily verbose, and Python Transformers don't integrate well with the rest of the Pipeline API.

It is worth pointing out, though, that everything you have here can be easily implemented with SQLTransformer. For example:

from pyspark.ml.feature import SQLTransformer

def column_selector(columns):
    return SQLTransformer(
        statement="SELECT {} FROM __THIS__".format(", ".join(columns))
    )

def na_dropper(columns):
    return SQLTransformer(
        statement="SELECT * FROM __THIS__ WHERE {}".format(
            " AND ".join(["{} IS NOT NULL".format(x) for x in columns])
        )
    )
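Since a SQLTransformer only carries a SQL string built around the `__THIS__` placeholder, the statements these helpers produce can be sanity-checked without a Spark session. The sketch below (the helper names are illustrative, not pyspark API) repeats the same string construction:

```python
def select_statement(columns):
    # mirrors column_selector: SELECT <cols> FROM the __THIS__ placeholder table
    return "SELECT {} FROM __THIS__".format(", ".join(columns))

def na_drop_statement(columns):
    # mirrors na_dropper: keep only rows where every listed column is non-null
    return "SELECT * FROM __THIS__ WHERE {}".format(
        " AND ".join("{} IS NOT NULL".format(x) for x in columns)
    )

print(select_statement(["a", "b"]))
# SELECT a, b FROM __THIS__
print(na_drop_statement(["a", "b"]))
# SELECT * FROM __THIS__ WHERE a IS NOT NULL AND b IS NOT NULL
```

At pipeline-fit time, Spark substitutes the input DataFrame for `__THIS__` and runs the statement as a SQL query.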

With a little effort you can even use SQLAlchemy with a Hive dialect to avoid handwritten SQL.