I found the same discussion in the comments section of Create a custom Transformer in PySpark ML, but there is no clear answer there. There is also an unresolved JIRA that corresponds to this: https://issues.apache.org/jira/browse/SPARK-17025.
Given that the PySpark ML pipeline provides no option for saving a custom transformer written in Python, what other options are there to get it done? How can I implement a _to_java method in my Python class that returns a compatible Java object?
Answer 0 (score: 8)
I am not sure this is the best approach, but I too needed the ability to save custom Estimators, Transformers and Models that I had created in PySpark, and also to support their use in the Pipeline API with persistence. Custom PySpark Estimators, Transformers and Models can be created and used in the Pipeline API, but they cannot be saved. This poses a problem in production when the model training takes longer than the event prediction cycle.
In general, PySpark Estimators, Transformers and Models are just wrappers around their Java or Scala equivalents, and the PySpark wrappers simply marshal the parameters to and from Java via py4j. Any persistence of the model is then done on the Java side. Because of this current structure, custom PySpark Estimators, Transformers and Models are limited to living in the Python world only.
In a previous attempt I was able to save a single PySpark model using pickle/dill serialization. This worked well, but still did not allow saving or loading from within the Pipeline API. Another SO post, however, pointed me to the OneVsRest classifier and its _to_java and _from_java methods, which do all the heavy lifting on the PySpark side. After looking at them I figured that, if there were a way to store a pickle dump inside an already made and supported savable Java object, then it should be possible to save a custom PySpark Estimator, Transformer and Model with the Pipeline API.
To that end, I found StopWordsRemover to be the ideal object to hijack, because it has an attribute, stopwords, that is a list of strings. The dill.dumps method returns the pickled representation of an object as a string. The plan was to turn that string into a list and then set the stopwords parameter of a StopWordsRemover to this list. Although it is a list of strings, I found that some characters would not marshal to the Java object, so the characters are converted to integers and the integers to strings. This all works great for saving a single instance, and also for saving within a Pipeline, because the Pipeline dutifully calls the _to_java method of my Python class (we are still on the PySpark side, so this works). Getting back from Java to PySpark, however, is not handled by the Pipeline API.
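In isolation, that encoding round trip looks roughly like this (a minimal Python 2 sketch, independent of Spark; the object and variable names are illustrative only):

import dill

# A stand-in for the custom transformer instance (illustrative only).
obj = {'anything': 'picklable'}

# Encode: dill/pickle bytes -> list of decimal strings, which marshal cleanly through py4j.
dmp = dill.dumps(obj)
encoded = [str(ord(c)) for c in dmp]  # in Python 2, iterating a str yields 1-char strings

# Decode: decimal strings -> characters -> original pickle -> original object.
decoded = ''.join(chr(int(d)) for d in encoded)
assert dill.loads(decoded) == obj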
Because I am hiding my Python object inside a StopWordsRemover instance, the Pipeline, when coming back to PySpark, knows nothing about my hidden class object; it only knows that it has a StopWordsRemover instance. Ideally it would be great to subclass Pipeline and PipelineModel, but that brings us back to trying to serialize a Python object. To get around this, I created a PysparkPipelineWrapper that takes a Pipeline or PipelineModel and just scans the stages, looking for a coded ID in the stopwords list (remember, that list is just the pickled bytes of my Python object) that tells it to unwrap the list back into my instance and store it in the stage it came from. The code below shows how this all works.
For any custom PySpark Estimator, Transformer or Model, just inherit from Identifiable, PysparkReaderWriter, MLReadable and MLWritable. Then, when loading a Pipeline or PipelineModel, pass it through PysparkPipelineWrapper.unwrap(pipeline).
This method does not address using the PySpark code from Java or Scala, but at least we can save and load custom PySpark Estimators, Transformers and Models and work with the Pipeline API.
import dill
from pyspark.ml import Transformer, Pipeline, PipelineModel
from pyspark.ml.param import Param, Params
from pyspark.ml.util import Identifiable, MLReadable, MLWritable, JavaMLReader, JavaMLWriter
from pyspark.ml.feature import StopWordsRemover
from pyspark.ml.wrapper import JavaParams
from pyspark.context import SparkContext
from pyspark.sql import Row
class PysparkObjId(object):
    """
    A class to specify constants used to identify and set up Python
    Estimators, Transformers and Models so they can be serialized on their
    own and from within a Pipeline or PipelineModel.
    """
    def __init__(self):
        super(PysparkObjId, self).__init__()

    @staticmethod
    def _getPyObjId():
        return '4c1740b00d3c4ff6806a1402321572cb'

    @staticmethod
    def _getCarrierClass(javaName=False):
        return 'org.apache.spark.ml.feature.StopWordsRemover' if javaName else StopWordsRemover
class PysparkPipelineWrapper(object):
    """
    A class to facilitate converting the stages of a Pipeline or PipelineModel
    that were saved from PysparkReaderWriter.
    """
    def __init__(self):
        super(PysparkPipelineWrapper, self).__init__()

    @staticmethod
    def unwrap(pipeline):
        if not (isinstance(pipeline, Pipeline) or isinstance(pipeline, PipelineModel)):
            raise TypeError("Cannot recognize a pipeline of type %s." % type(pipeline))

        stages = pipeline.getStages() if isinstance(pipeline, Pipeline) else pipeline.stages
        for i, stage in enumerate(stages):
            if (isinstance(stage, Pipeline) or isinstance(stage, PipelineModel)):
                stages[i] = PysparkPipelineWrapper.unwrap(stage)
            if isinstance(stage, PysparkObjId._getCarrierClass()) and stage.getStopWords()[-1] == PysparkObjId._getPyObjId():
                swords = stage.getStopWords()[:-1] # strip the id
                lst = [chr(int(d)) for d in swords]
                dmp = ''.join(lst)
                py_obj = dill.loads(dmp)
                stages[i] = py_obj

        if isinstance(pipeline, Pipeline):
            pipeline.setStages(stages)
        else:
            pipeline.stages = stages
        return pipeline
class PysparkReaderWriter(object):
    """
    A mixin class so custom pyspark Estimators, Transformers and Models may
    support saving and loading directly or be saved within a Pipeline or PipelineModel.
    """
    def __init__(self):
        super(PysparkReaderWriter, self).__init__()

    def write(self):
        """Returns an MLWriter instance for this ML instance."""
        return JavaMLWriter(self)

    @classmethod
    def read(cls):
        """Returns an MLReader instance for our carrier class."""
        return JavaMLReader(PysparkObjId._getCarrierClass())

    @classmethod
    def load(cls, path):
        """Reads an ML instance from the input path, a shortcut of `read().load(path)`."""
        swr_java_obj = cls.read().load(path)
        return cls._from_java(swr_java_obj)

    @classmethod
    def _from_java(cls, java_obj):
        """
        Get the dummy stopwords that are the characters of the dill dump plus our guid
        and convert, via dill, back to our python instance.
        """
        swords = java_obj.getStopWords()[:-1] # strip the id
        lst = [chr(int(d)) for d in swords] # convert from string integer list to bytes
        dmp = ''.join(lst)
        py_obj = dill.loads(dmp)
        return py_obj

    def _to_java(self):
        """
        Convert this instance to a dill dump, then to a list of strings with the unicode integer values of each character.
        Use this list as a set of dummy stopwords and store in a StopWordsRemover instance.
        :return: Java object equivalent to this instance.
        """
        dmp = dill.dumps(self)
        pylist = [str(ord(d)) for d in dmp] # convert bytes to string integer list
        pylist.append(PysparkObjId._getPyObjId()) # add our id so PysparkPipelineWrapper can id us.
        sc = SparkContext._active_spark_context
        java_class = sc._gateway.jvm.java.lang.String
        java_array = sc._gateway.new_array(java_class, len(pylist))
        for i in xrange(len(pylist)):
            java_array[i] = pylist[i]
        _java_obj = JavaParams._new_java_obj(PysparkObjId._getCarrierClass(javaName=True), self.uid)
        _java_obj.setStopWords(java_array)
        return _java_obj
class HasFake(Params):
    def __init__(self):
        super(HasFake, self).__init__()
        self.fake = Param(self, "fake", "fake param")

    def getFake(self):
        return self.getOrDefault(self.fake)

class MockTransformer(Transformer, HasFake, Identifiable):
    def __init__(self):
        super(MockTransformer, self).__init__()
        self.dataset_count = 0

    def _transform(self, dataset):
        self.dataset_count = dataset.count()
        return dataset

class MyTransformer(MockTransformer, Identifiable, PysparkReaderWriter, MLReadable, MLWritable):
    def __init__(self):
        super(MyTransformer, self).__init__()

def make_a_dataframe(sc):
    df = sc.parallelize([Row(name='Alice', age=5, height=80), Row(name='Alice', age=5, height=80), Row(name='Alice', age=10, height=80)]).toDF()
    return df
def test1():
    trA = MyTransformer()
    trA.dataset_count = 999
    print trA.dataset_count
    trA.save('test.trans')
    trB = MyTransformer.load('test.trans')
    print trB.dataset_count

def test2():
    trA = MyTransformer()
    pipeA = Pipeline(stages=[trA])
    print type(pipeA)
    pipeA.save('testA.pipe')
    pipeAA = PysparkPipelineWrapper.unwrap(Pipeline.load('testA.pipe'))
    stagesAA = pipeAA.getStages()
    trAA = stagesAA[0]
    print trAA.dataset_count

def test3():
    dfA = make_a_dataframe(sc)
    trA = MyTransformer()
    pipeA = Pipeline(stages=[trA]).fit(dfA)
    print type(pipeA)
    pipeA.save('testB.pipe')
    pipeAA = PysparkPipelineWrapper.unwrap(PipelineModel.load('testB.pipe'))
    stagesAA = pipeAA.stages
    trAA = stagesAA[0]
    print trAA.dataset_count
    dfB = pipeAA.transform(dfA)
    dfB.show()
Answer 1 (score: 7)
As of Spark 2.3.0 there is a much, much better way to do this.
Simply extend DefaultParamsWritable and DefaultParamsReadable, and your class will automatically have write and read methods that save your params and that are used by the PipelineModel serialization system.
The docs were not really clear on this, and I had to do a bit of source reading to understand how deserialization works:
- PipelineModel.read instantiates a PipelineModelReader.
- PipelineModelReader loads the metadata and checks whether the language is 'Python'. If it is not, the typical JavaMLReader is used (which is what most of these answers are designed for).
- Otherwise PipelineSharedReadWrite is used, which calls DefaultParamsReader.loadParamsInstance.
- loadParamsInstance finds the class from the saved metadata, instantiates that class and calls .load(path) on it. You can extend DefaultParamsReader and get the DefaultParamsReader.load method automatically; if you really do need to implement specialized deserialization logic, that load method is where I would start.
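If you do end up needing such a hook, a minimal sketch of the idea might look like the following (MyParamsReader and MyReadableMixin are placeholder names, not part of pyspark):

from pyspark.ml.util import DefaultParamsReadable, DefaultParamsReader

class MyParamsReader(DefaultParamsReader):
    def load(self, path):
        # Let DefaultParamsReader rebuild the instance and restore its params first.
        instance = super(MyParamsReader, self).load(path)
        # ...any extra, custom deserialization work would go here...
        return instance

class MyReadableMixin(DefaultParamsReadable):
    @classmethod
    def read(cls):
        # Hand the class to our subclassed reader instead of the stock one.
        return MyParamsReader(cls)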
On the write side:
- PipelineModel.write checks whether all stages are Java (i.e. implement JavaMLWritable). If so, the typical JavaMLWriter is used (again, what most of these answers are designed for).
- Otherwise PipelineWriter is used, which checks that all stages implement MLWritable and calls PipelineSharedReadWrite.saveImpl.
- PipelineSharedReadWrite.saveImpl calls .write().save(path) on each stage.
You can extend DefaultParamsWriter to get the DefaultParamsWritable.write method, which saves the metadata for your class and its params in the right format. If you need to implement custom serialization logic, DefaultParamsWriter is where I would start.
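Likewise, a minimal sketch of hooking in custom serialization on the write side (again with placeholder class names) might be:

from pyspark.ml.util import DefaultParamsWritable, DefaultParamsWriter

class MyParamsWriter(DefaultParamsWriter):
    def saveImpl(self, path):
        # Let DefaultParamsWriter persist the class name and params metadata as usual.
        super(MyParamsWriter, self).saveImpl(path)
        # ...any extra, custom serialization work (e.g. side files under `path`) would go here...

class MyWritableMixin(DefaultParamsWritable):
    def write(self):
        # Return our subclassed writer instead of the stock DefaultParamsWriter.
        return MyParamsWriter(self)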
With that, you end up with a very simple transformer that extends Params, with all of its parameters stored in the typical Params fashion:
from pyspark import keyword_only
from pyspark.ml import Transformer
from pyspark.ml.param.shared import HasOutputCols, Param, Params
from pyspark.ml.util import DefaultParamsReadable, DefaultParamsWritable
from pyspark.sql.functions import lit # for the dummy _transform
class SetValueTransformer(
    Transformer, HasOutputCols, DefaultParamsReadable, DefaultParamsWritable,
):
    value = Param(
        Params._dummy(),
        "value",
        "value to fill",
    )

    @keyword_only
    def __init__(self, outputCols=None, value=0.0):
        super(SetValueTransformer, self).__init__()
        self._setDefault(value=0.0)
        kwargs = self._input_kwargs
        self._set(**kwargs)

    @keyword_only
    def setParams(self, outputCols=None, value=0.0):
        """
        setParams(self, outputCols=None, value=0.0)
        Sets params for this SetValueTransformer.
        """
        kwargs = self._input_kwargs
        return self._set(**kwargs)

    def setValue(self, value):
        """
        Sets the value of :py:attr:`value`.
        """
        return self._set(value=value)

    def getValue(self):
        """
        Gets the value of :py:attr:`value` or its default value.
        """
        return self.getOrDefault(self.value)

    def _transform(self, dataset):
        for col in self.getOutputCols():
            dataset = dataset.withColumn(col, lit(self.getValue()))
        return dataset
Now we can use it:
from pyspark.ml import Pipeline, PipelineModel
svt = SetValueTransformer(outputCols=["a", "b"], value=123.0)
p = Pipeline(stages=[svt])
df = sc.parallelize([(1, None), (2, 1.0), (3, 0.5)]).toDF(["key", "value"])
pm = p.fit(df)
pm.transform(df).show()
pm.write().overwrite().save('/tmp/example_pyspark_pipeline')
pm2 = PipelineModel.load('/tmp/example_pyspark_pipeline')
print('matches?', pm2.stages[0].extractParamMap() == pm.stages[0].extractParamMap())
pm2.transform(df).show()
Result:
+---+-----+-----+-----+
|key|value| a| b|
+---+-----+-----+-----+
| 1| null|123.0|123.0|
| 2| 1.0|123.0|123.0|
| 3| 0.5|123.0|123.0|
+---+-----+-----+-----+
matches? True
+---+-----+-----+-----+
|key|value| a| b|
+---+-----+-----+-----+
| 1| null|123.0|123.0|
| 2| 1.0|123.0|123.0|
| 3| 0.5|123.0|123.0|
+---+-----+-----+-----+
Answer 2 (score: 3)
I could not get @dmbaker's ingenious solution to work with Python 2 on Spark 2.2.0; I kept getting pickling errors. After several blind alleys I got a working solution by modifying his (her?) idea of writing and reading the parameter values as strings directly into and out of StopWordsRemover's stop words.
Here is the base class you need if you want to save and load your own estimators or transformers:
from pyspark import SparkContext
from pyspark.ml.feature import StopWordsRemover
from pyspark.ml.util import Identifiable, MLWritable, JavaMLWriter, MLReadable, JavaMLReader
from pyspark.ml.wrapper import JavaWrapper, JavaParams
class PysparkReaderWriter(Identifiable, MLReadable, MLWritable):
    """
    A base class for custom pyspark Estimators and Models to support saving and loading directly
    or within a Pipeline or PipelineModel.
    """
    def __init__(self):
        super(PysparkReaderWriter, self).__init__()

    @staticmethod
    def _getPyObjIdPrefix():
        return "_ThisIsReallyA_"

    @classmethod
    def _getPyObjId(cls):
        return PysparkReaderWriter._getPyObjIdPrefix() + cls.__name__

    def getParamsAsListOfStrings(self):
        raise NotImplementedError("PysparkReaderWriter.getParamsAsListOfStrings() not implemented for instance: %r" % self)

    def write(self):
        """Returns an MLWriter instance for this ML instance."""
        return JavaMLWriter(self)

    def _to_java(self):
        # Convert all our parameters to strings:
        paramValuesAsStrings = self.getParamsAsListOfStrings()

        # Append our own type-specific id so PysparkPipelineLoader can detect this algorithm when unwrapping us.
        paramValuesAsStrings.append(self._getPyObjId())

        # Convert the parameter values to a Java array:
        sc = SparkContext._active_spark_context
        java_array = JavaWrapper._new_java_array(paramValuesAsStrings, sc._gateway.jvm.java.lang.String)

        # Create a Java (Scala) StopWordsRemover and give it the parameters as its stop words.
        _java_obj = JavaParams._new_java_obj("org.apache.spark.ml.feature.StopWordsRemover", self.uid)
        _java_obj.setStopWords(java_array)
        return _java_obj

    @classmethod
    def _from_java(cls, java_obj):
        # Get the stop words, ignoring the id at the end:
        stopWords = java_obj.getStopWords()[:-1]
        return cls.createAndInitialisePyObj(stopWords)

    @classmethod
    def createAndInitialisePyObj(cls, paramsAsListOfStrings):
        raise NotImplementedError("PysparkReaderWriter.createAndInitialisePyObj() not implemented for type: %r" % cls)

    @classmethod
    def read(cls):
        """Returns an MLReader instance for our carrier class."""
        return JavaMLReader(StopWordsRemover)

    @classmethod
    def load(cls, path):
        """Reads an ML instance from the input path, a shortcut of `read().load(path)`."""
        swr_java_obj = cls.read().load(path)
        return cls._from_java(swr_java_obj)
Your own pyspark algorithms must then inherit from PysparkReaderWriter and override the getParamsAsListOfStrings() method, which saves your parameters to a list of strings. Your algorithm must also override the createAndInitialisePyObj() method, which converts a list of strings back into your parameters. Behind the scenes the parameters are converted to and from the stop words used by StopWordsRemover.
Example estimator with 3 parameters of different types:
from pyspark.ml.param.shared import Param, Params, TypeConverters
from pyspark.ml.base import Estimator

class MyEstimator(Estimator, PysparkReaderWriter):

    def __init__(self):
        super(MyEstimator, self).__init__()

    # 3 sample parameters, deliberately of different types:
    stringParam = Param(Params._dummy(), "stringParam", "A dummy string parameter", typeConverter=TypeConverters.toString)

    def setStringParam(self, value):
        return self._set(stringParam=value)

    def getStringParam(self):
        return self.getOrDefault(self.stringParam)

    listOfStringsParam = Param(Params._dummy(), "listOfStringsParam", "A dummy list of strings.", typeConverter=TypeConverters.toListString)

    def setListOfStringsParam(self, value):
        return self._set(listOfStringsParam=value)

    def getListOfStringsParam(self):
        return self.getOrDefault(self.listOfStringsParam)

    intParam = Param(Params._dummy(), "intParam", "A dummy int parameter.", typeConverter=TypeConverters.toInt)

    def setIntParam(self, value):
        return self._set(intParam=value)

    def getIntParam(self):
        return self.getOrDefault(self.intParam)

    def _fit(self, dataset):
        model = MyModel()
        # Just some changes to verify we can modify the model (and also it's something we can expect to see when restoring it later):
        model.setAnotherStringParam(self.getStringParam() + " World!")
        model.setAnotherListOfStringsParam(self.getListOfStringsParam() + ["E", "F"])
        model.setAnotherIntParam(self.getIntParam() + 10)
        return model

    def getParamsAsListOfStrings(self):
        paramValuesAsStrings = []
        paramValuesAsStrings.append(self.getStringParam()) # Parameter is already a string
        paramValuesAsStrings.append(','.join(self.getListOfStringsParam())) # ...convert from a list of strings
        paramValuesAsStrings.append(str(self.getIntParam())) # ...convert from an int
        return paramValuesAsStrings

    @classmethod
    def createAndInitialisePyObj(cls, paramsAsListOfStrings):
        # Convert back into our parameters. Make sure you do this in the same order you saved them!
        py_obj = cls()
        py_obj.setStringParam(paramsAsListOfStrings[0])
        py_obj.setListOfStringsParam(paramsAsListOfStrings[1].split(","))
        py_obj.setIntParam(int(paramsAsListOfStrings[2]))
        return py_obj
Example model (which is also a Transformer), again with 3 parameters of different types:
from pyspark.ml.base import Model

class MyModel(Model, PysparkReaderWriter):

    def __init__(self):
        super(MyModel, self).__init__()

    # 3 sample parameters, deliberately of different types:
    anotherStringParam = Param(Params._dummy(), "anotherStringParam", "A dummy string parameter", typeConverter=TypeConverters.toString)

    def setAnotherStringParam(self, value):
        return self._set(anotherStringParam=value)

    def getAnotherStringParam(self):
        return self.getOrDefault(self.anotherStringParam)

    anotherListOfStringsParam = Param(Params._dummy(), "anotherListOfStringsParam", "A dummy list of strings.", typeConverter=TypeConverters.toListString)

    def setAnotherListOfStringsParam(self, value):
        return self._set(anotherListOfStringsParam=value)

    def getAnotherListOfStringsParam(self):
        return self.getOrDefault(self.anotherListOfStringsParam)

    anotherIntParam = Param(Params._dummy(), "anotherIntParam", "A dummy int parameter.", typeConverter=TypeConverters.toInt)

    def setAnotherIntParam(self, value):
        return self._set(anotherIntParam=value)

    def getAnotherIntParam(self):
        return self.getOrDefault(self.anotherIntParam)

    def _transform(self, dataset):
        # Dummy transform code:
        return dataset.withColumn('age2', dataset.age + self.getAnotherIntParam())

    def getParamsAsListOfStrings(self):
        paramValuesAsStrings = []
        paramValuesAsStrings.append(self.getAnotherStringParam()) # Parameter is already a string
        paramValuesAsStrings.append(','.join(self.getAnotherListOfStringsParam())) # ...convert from a list of strings
        paramValuesAsStrings.append(str(self.getAnotherIntParam())) # ...convert from an int
        return paramValuesAsStrings

    @classmethod
    def createAndInitialisePyObj(cls, paramsAsListOfStrings):
        # Convert back into our parameters. Make sure you do this in the same order you saved them!
        py_obj = cls()
        py_obj.setAnotherStringParam(paramsAsListOfStrings[0])
        py_obj.setAnotherListOfStringsParam(paramsAsListOfStrings[1].split(","))
        py_obj.setAnotherIntParam(int(paramsAsListOfStrings[2]))
        return py_obj
Below is a sample test case showing how to save and load your model. It is similar for the estimator, so I omit that for brevity.
def createAModel():
    m = MyModel()
    m.setAnotherStringParam("Boo!")
    m.setAnotherListOfStringsParam(["P", "Q", "R"])
    m.setAnotherIntParam(77)
    return m

def testSaveLoadModel():
    modA = createAModel()
    print(modA.explainParams())

    savePath = "/whatever/path/you/want"
    #modA.save(savePath) # Can't overwrite, so...
    modA.write().overwrite().save(savePath)

    modB = MyModel.load(savePath)
    print(modB.explainParams())

testSaveLoadModel()
Output:
anotherIntParam: A dummy int parameter. (current: 77)
anotherListOfStringsParam: A dummy list of strings. (current: ['P', 'Q', 'R'])
anotherStringParam: A dummy string parameter (current: Boo!)
anotherIntParam: A dummy int parameter. (current: 77)
anotherListOfStringsParam: A dummy list of strings. (current: [u'P', u'Q', u'R'])
anotherStringParam: A dummy string parameter (current: Boo!)
Notice how the parameter values come back as unicode strings. This may or may not make a difference to the underlying algorithm you implement in _transform() (or _fit() for an estimator), so be aware of it.
Finally, because the Scala algorithm behind the scenes really is a StopWordsRemover, you need to unwrap it back into your own class when loading a Pipeline or PipelineModel from disk. Here is the utility class that does this unwrapping:
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import StopWordsRemover

class PysparkPipelineLoader(object):
    """
    A class to facilitate converting the stages of a Pipeline or PipelineModel
    that were saved from PysparkReaderWriter.
    """
    def __init__(self):
        super(PysparkPipelineLoader, self).__init__()

    @staticmethod
    def unwrap(thingToUnwrap, customClassList):
        if not (isinstance(thingToUnwrap, Pipeline) or isinstance(thingToUnwrap, PipelineModel)):
            raise TypeError("Cannot recognize an object of type %s." % type(thingToUnwrap))

        stages = thingToUnwrap.getStages() if isinstance(thingToUnwrap, Pipeline) else thingToUnwrap.stages
        for i, stage in enumerate(stages):
            if (isinstance(stage, Pipeline) or isinstance(stage, PipelineModel)):
                stages[i] = PysparkPipelineLoader.unwrap(stage, customClassList)

            if isinstance(stage, StopWordsRemover) and stage.getStopWords()[-1].startswith(PysparkReaderWriter._getPyObjIdPrefix()):
                lastWord = stage.getStopWords()[-1]
                className = lastWord[len(PysparkReaderWriter._getPyObjIdPrefix()):]

                stopWords = stage.getStopWords()[:-1] # Strip the id

                # Create and initialise the appropriate class:
                py_obj = None
                for clazz in customClassList:
                    if clazz.__name__ == className:
                        py_obj = clazz.createAndInitialisePyObj(stopWords)

                if py_obj is None:
                    raise TypeError("I don't know how to create an instance of type: %s" % className)

                stages[i] = py_obj

        if isinstance(thingToUnwrap, Pipeline):
            thingToUnwrap.setStages(stages)
        else:
            # PipelineModel
            thingToUnwrap.stages = stages
        return thingToUnwrap
Test for saving and loading a pipeline:
def testSaveAndLoadUnfittedPipeline():
    estA = createAnEstimator()
    #print(estA.explainParams())
    pipelineA = Pipeline(stages=[estA])
    savePath = "/whatever/path/you/want"
    #pipelineA.save(savePath) # Can't overwrite, so...
    pipelineA.write().overwrite().save(savePath)

    pipelineReloaded = PysparkPipelineLoader.unwrap(Pipeline.load(savePath), [MyEstimator])
    estB = pipelineReloaded.getStages()[0]
    print(estB.explainParams())

testSaveAndLoadUnfittedPipeline()
Output:
intParam: A dummy int parameter. (current: 42)
listOfStringsParam: A dummy list of strings. (current: [u'A', u'B', u'C', u'D'])
stringParam: A dummy string parameter (current: Hello)
Test for saving and loading a pipeline model:
from pyspark.sql import Row

def make_a_dataframe(sc):
    df = sc.parallelize([Row(name='Alice', age=5, height=80), Row(name='Bob', age=7, height=85), Row(name='Chris', age=10, height=90)]).toDF()
    return df

def testSaveAndLoadPipelineModel():
    dfA = make_a_dataframe(sc)
    estA = createAnEstimator()
    #print(estA.explainParams())
    pipelineModelA = Pipeline(stages=[estA]).fit(dfA)
    savePath = "/whatever/path/you/want"
    #pipelineModelA.save(savePath) # Can't overwrite, so...
    pipelineModelA.write().overwrite().save(savePath)

    pipelineModelReloaded = PysparkPipelineLoader.unwrap(PipelineModel.load(savePath), [MyModel])
    modB = pipelineModelReloaded.stages[0]
    print(modB.explainParams())
    dfB = pipelineModelReloaded.transform(dfA)
    dfB.show()

testSaveAndLoadPipelineModel()
Output:
anotherIntParam: A dummy int parameter. (current: 52)
anotherListOfStringsParam: A dummy list of strings. (current: [u'A', u'B', u'C', u'D', u'E', u'F'])
anotherStringParam: A dummy string parameter (current: Hello World!)
+---+------+-----+----+
|age|height| name|age2|
+---+------+-----+----+
| 5| 80|Alice| 57|
| 7| 85| Bob| 59|
| 10| 90|Chris| 62|
+---+------+-----+----+
When unwrapping a pipeline or pipeline model, you have to pass in the list of classes that correspond to your own pyspark algorithms, which are masquerading as StopWordsRemover objects in the saved pipeline or pipeline model. The last stop word in the saved object is used to identify your own class's name, and then createAndInitialisePyObj() is called to create an instance of that class and initialise its parameters from the remaining stop words.
Various refinements could be made, but hopefully this will let you save and load custom estimators and transformers, both inside and outside pipelines, until SPARK-17025 is resolved and available to you.
Answer 3 (score: 2)
Similar to @dmbaker's working answer, I wrapped my custom transformer, called Aggregator, inside a built-in Spark transformer, in this example Binarizer, though I'm sure you could inherit from other transformers too. That allowed my custom transformer to inherit the methods necessary for serialization.
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, Binarizer
from pyspark.ml.regression import LinearRegression

class Aggregator(Binarizer):
    """A huge hack to allow serialization of custom transformer."""

    def transform(self, input_df):
        agg_df = input_df\
            .groupBy('channel_id')\
            .agg({
                'foo': 'avg',
                'bar': 'avg',
            })\
            .withColumnRenamed('avg(foo)', 'avg_foo')\
            .withColumnRenamed('avg(bar)', 'avg_bar')
        return agg_df

# Create pipeline stages.
aggregator = Aggregator()
vector_assembler = VectorAssembler(...)
linear_regression = LinearRegression()

# Create pipeline.
pipeline = Pipeline(stages=[aggregator, vector_assembler, linear_regression])

# Train.
pipeline_model = pipeline.fit(input_df)

# Save model file to S3.
pipeline_model.save('s3n://example')
Answer 4 (score: 0)
@dmbaker's solution did not work for me. I believe that is because of the Python version (2.x vs. 3.x). I made some updates to his solution and now it works on Python 3. My setup is listed below:
class PysparkObjId(object):
    """
    A class to specify constants used to identify and set up python
    Estimators, Transformers and Models so they can be serialized on their
    own and from within a Pipeline or PipelineModel.
    """
    def __init__(self):
        super(PysparkObjId, self).__init__()

    @staticmethod
    def _getPyObjId():
        return '4c1740b00d3c4ff6806a1402321572cb'

    @staticmethod
    def _getCarrierClass(javaName=False):
        return 'org.apache.spark.ml.feature.StopWordsRemover' if javaName else StopWordsRemover
class PysparkPipelineWrapper(object):
    """
    A class to facilitate converting the stages of a Pipeline or PipelineModel
    that were saved from PysparkReaderWriter.
    """
    def __init__(self):
        super(PysparkPipelineWrapper, self).__init__()

    @staticmethod
    def unwrap(pipeline):
        if not (isinstance(pipeline, Pipeline) or isinstance(pipeline, PipelineModel)):
            raise TypeError("Cannot recognize a pipeline of type %s." % type(pipeline))

        stages = pipeline.getStages() if isinstance(pipeline, Pipeline) else pipeline.stages
        for i, stage in enumerate(stages):
            if (isinstance(stage, Pipeline) or isinstance(stage, PipelineModel)):
                stages[i] = PysparkPipelineWrapper.unwrap(stage)
            if isinstance(stage, PysparkObjId._getCarrierClass()) and stage.getStopWords()[-1] == PysparkObjId._getPyObjId():
                swords = stage.getStopWords()[:-1] # strip the id
                # convert stop words to int
                swords = [int(d) for d in swords]
                # get the byte value of all ints
                lst = [x.to_bytes(length=1, byteorder='big') for x in swords] # convert from string integer list to bytes
                # take the first byte and concatenate all the others
                dmp = lst[0]
                for byte_counter in range(1, len(lst)):
                    dmp = dmp + lst[byte_counter]
                py_obj = dill.loads(dmp)
                stages[i] = py_obj

        if isinstance(pipeline, Pipeline):
            pipeline.setStages(stages)
        else:
            pipeline.stages = stages
        return pipeline
class PysparkReaderWriter(object):
    """
    A mixin class so custom pyspark Estimators, Transformers and Models may
    support saving and loading directly or be saved within a Pipeline or PipelineModel.
    """
    def __init__(self):
        super(PysparkReaderWriter, self).__init__()

    def write(self):
        """Returns an MLWriter instance for this ML instance."""
        return JavaMLWriter(self)

    @classmethod
    def read(cls):
        """Returns an MLReader instance for our carrier class."""
        return JavaMLReader(PysparkObjId._getCarrierClass())

    @classmethod
    def load(cls, path):
        """Reads an ML instance from the input path, a shortcut of `read().load(path)`."""
        swr_java_obj = cls.read().load(path)
        return cls._from_java(swr_java_obj)

    @classmethod
    def _from_java(cls, java_obj):
        """
        Get the dummy stopwords that are the characters of the dill dump plus our guid
        and convert, via dill, back to our python instance.
        """
        swords = java_obj.getStopWords()[:-1] # strip the id
        lst = [int(x).to_bytes(length=1, byteorder='big') for x in swords] # convert from string integer list to bytes
        dmp = lst[0]
        for i in range(1, len(lst)):
            dmp = dmp + lst[i]
        py_obj = dill.loads(dmp)
        return py_obj

    def _to_java(self):
        """
        Convert this instance to a dill dump, then to a list of strings with the unicode integer values of each character.
        Use this list as a set of dummy stopwords and store in a StopWordsRemover instance.
        :return: Java object equivalent to this instance.
        """
        dmp = dill.dumps(self)
        pylist = [str(int(d)) for d in dmp] # convert bytes to string integer list
        pylist.append(PysparkObjId._getPyObjId()) # add our id so PysparkPipelineWrapper can id us.
        sc = SparkContext._active_spark_context
        java_class = sc._gateway.jvm.java.lang.String
        java_array = sc._gateway.new_array(java_class, len(pylist))
        for i in range(len(pylist)):
            java_array[i] = pylist[i]
        _java_obj = JavaParams._new_java_obj(PysparkObjId._getCarrierClass(javaName=True), self.uid)
        _java_obj.setStopWords(java_array)
        return _java_obj
class HasFake(Params):
    def __init__(self):
        super(HasFake, self).__init__()
        self.fake = Param(self, "fake", "fake param")

    def getFake(self):
        return self.getOrDefault(self.fake)

class CleanText(Transformer, HasInputCol, HasOutputCol, Identifiable, PysparkReaderWriter, MLReadable, MLWritable):
    @keyword_only
    def __init__(self, inputCol=None, outputCol=None):
        super(CleanText, self).__init__()
        kwargs = self._input_kwargs
        self.setParams(**kwargs)