I am using the MLP multiclass classifier in PySpark (with Spark 1.6.0), roughly following the example here.
Since I am interested in training the model once and then applying the already-trained model to different datasets, I would like to retrieve the neuron weights (as explained here for Python's sklearn with the pickle package).
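To clarify what I mean, this is a minimal sketch of the sklearn workflow I have in mind; the estimator, dataset, and layer sizes are just placeholders for illustration:

    import pickle
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier  # illustrative estimator

    X, y = load_iris(return_X_y=True)
    clf = MLPClassifier(hidden_layer_sizes=(5, 4), max_iter=500).fit(X, y)

    # persist the fitted model once, reload it later for other data
    with open("mlp_model.pkl", "wb") as f:
        pickle.dump(clf, f)
    with open("mlp_model.pkl", "rb") as f:
        clf_loaded = pickle.load(f)

    print(clf_loaded.coefs_)  # per-layer weight matrices are directly accessible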
However, after reading the documentation I could not find a way to get the model's weights and internal parameters.
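For example, I naively expected something along these lines to work, but these attribute names are just guesses and none of them seem to exist on the fitted model in 1.6.0:

    # guessed attribute names; neither seems to exist on the 1.6.0 Python model
    weights = model.weights          # appears to raise AttributeError
    # weights = model.getWeights()   # same, no such method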
In case it helps, my code is:
# Importing PySpark libraries
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext, HiveContext
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
#%% Code start
if __name__ == "__main__":
    conf = SparkConf().setAppName("prueba_features")
    sc = SparkContext(conf=conf)
    hc = HiveContext(sc)
    sqlc = SQLContext(sc)

    # Load the training data in LibSVM format
    data = sqlc.read.format("libsvm")\
        .load("/user/sample_multiclass_classification_data.txt")
    print("\nData set: \n{}".format(data))

    # Split the data into train and test sets
    splits = data.randomSplit([0.6, 0.4], 1234)
    train = splits[0]
    test = splits[1]
    print("\nTraining set: \n{}".format(train))
    print("\nTest set: \n{}".format(test))

    # Specify the layers for the neural network:
    # input layer of size 4 (features), two intermediate layers of size 5 and 4,
    # and an output layer of size 3 (classes)
    layers = [4, 5, 4, 3]

    # Create the trainer and set its parameters
    trainer = MultilayerPerceptronClassifier(maxIter=100, layers=layers,
                                             blockSize=128, seed=1234)
    # Train the model
    model = trainer.fit(train)

    # Evaluate the fitted model on the test set
    result = model.transform(test)
    predictionAndLabels = result.select("prediction", "label")
    evaluator_prec = MulticlassClassificationEvaluator(metricName="precision")
    evaluator_rec = MulticlassClassificationEvaluator(metricName="recall")
    evaluator_f1 = MulticlassClassificationEvaluator(metricName="f1")

    # Print the evaluation results
    print("\nResults: \n{}".format(result))
    print("\nKPIs")
    print("Precision: " + str(evaluator_prec.evaluate(predictionAndLabels)))
    print("Recall: " + str(evaluator_rec.evaluate(predictionAndLabels)))
    print("F1-score: " + str(evaluator_f1.evaluate(predictionAndLabels)))

    # Stop the SparkContext
    sc.stop()
Does anyone know how to do this with the PySpark MLP?
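The only direction I have come up with so far is reaching through the private py4j handle to the underlying Scala model, which, if I read the Scala API correctly, does carry a weights vector. This is an untested sketch that relies on the non-public `_java_obj` attribute, using the `model` fitted in the code above:

    # untested sketch: go through the private py4j handle to the Scala model;
    # _java_obj is an implementation detail, not a public API
    java_model = model._java_obj
    java_weights = java_model.weights()  # assumed: Scala model exposes weights: Vector
    # copy the Scala vector element by element into a Python list
    weights = [java_weights.apply(i) for i in range(java_weights.size())]
    print(len(weights))

I would much rather use a supported API than this kind of workaround, if one exists.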