I am new to PySpark and have Spark 2.3.0 installed on Windows 10. I want to train a linear SVM classifier with cross-validation, but my dataset has 3 classes, so I am trying to apply the One-vs-Rest strategy from Spark ML. Something seems to be wrong with my code, though, because I get an error saying that LinearSVC only supports binary classification.
This is the error that occurs when the "crossval.fit" line is executed while debugging:
pyspark.sql.utils.IllegalArgumentException: u'requirement failed: LinearSVC only supports binary classification. 1 classes detected in LinearSVC_43a48b0b70d59a8cbdb1__labelCol'
Here is my code (I am trying it on a very small dataset with only 10 instances):
from pyspark import SparkContext
sc = SparkContext('local', 'my app')
from pyspark.ml.linalg import Vectors
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
import numpy as np

x_train = np.array([[1,2,3],[5,6,7],[9,10,11],[2,4,5],[2,7,9],[3,7,6],[8,3,6],[5,8,2],[44,11,55],[77,33,22]])
y_train = [1,0,2,1,0,2,1,0,2,1]

# convert the numpy array to a DataFrame of (label, features) rows
df_list = []
i = 0
for element in x_train:  # one row of features at a time
    tup = (y_train[i], Vectors.dense(element))
    i = i + 1
    df_list.append(tup)
Train_sparkframe = sqlContext.createDataFrame(df_list, schema=['label', 'features'])
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.classification import OneVsRest
from pyspark.ml.classification import LinearSVC
LSVC = LinearSVC()
ovr = OneVsRest(classifier=LSVC)
paramGrid = (ParamGridBuilder()
             .addGrid(LSVC.maxIter, [10, 100])
             .addGrid(LSVC.regParam, [0.001, 0.01, 1.0, 10.0])
             .build())
crossval = CrossValidator(estimator=ovr,
                          estimatorParamMaps=paramGrid,
                          evaluator=MulticlassClassificationEvaluator(metricName="f1"),
                          numFolds=2)
cvModel = crossval.fit(Train_sparkframe)
bestModel = cvModel.bestModel
Answer 0 (score: 0)
In this IBM notebook, in a hosted Python 3.5 / Spark 2.3 environment, I was able to reproduce your code without running into the problem: https://eu-gb.dataplatform.cloud.ibm.com/analytics/notebooks/v2/24bb87d9-d28b-433b-b85a-5a86f4d0b56b/view?access_token=3c7bec3ed89bb518357fcce8005874a66a1d65833e997603141632b5cbb484db
Since the cloud environment manages the Spark context for you, I suggest you look into your Spark setup and double-check the column naming.
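A quick way to verify what Spark actually sees in the label column (a minimal sketch; it only assumes the Train_sparkframe from your question and its 'label' column) is:

# inspect the schema and the distinct label values; three classes should show up as 0, 1 and 2
Train_sparkframe.printSchema()
Train_sparkframe.groupBy("label").count().show()

# if the labels come in as strings, cast them to a numeric type before fitting
from pyspark.sql.functions import col
Train_sparkframe = Train_sparkframe.withColumn("label", col("label").cast("double"))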
Answer 1 (score: 0)
As the documentation says:
Note that only LogisticRegression and NaiveBayes are supported now.
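If you switch to one of those base classifiers, a rough sketch of the same cross-validation with LogisticRegression in place of LinearSVC (reusing Train_sparkframe and the grid values from the question, which are placeholders rather than tuned settings) could look like this:

from pyspark.ml.classification import LogisticRegression, OneVsRest
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# One-vs-Rest trains one binary model per class, with LogisticRegression as the base classifier
lr = LogisticRegression()
ovr = OneVsRest(classifier=lr)

# the grid is built over the base classifier's parameters
paramGrid = (ParamGridBuilder()
             .addGrid(lr.maxIter, [10, 100])
             .addGrid(lr.regParam, [0.001, 0.01, 1.0, 10.0])
             .build())

crossval = CrossValidator(estimator=ovr,
                          estimatorParamMaps=paramGrid,
                          evaluator=MulticlassClassificationEvaluator(metricName="f1"),
                          numFolds=2)
cvModel = crossval.fit(Train_sparkframe)
bestModel = cvModel.bestModel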