I'm trying to run cross-validation with a decision tree in Spark using the ML library, but I get this error when calling cv.fit(train_dataset):
pyspark.sql.utils.IllegalArgumentException: u'requirement failed: Invalid initial capacity'
The only hint I've found is that this can happen when the DataFrame is empty, but that is not the case here. This is my code:
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# abalone.data has no header row, so read it with header=None
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data', header=None)
df.columns = ['Sex', 'Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings']
train_dataset = sqlContext.createDataFrame(df)
column_types = train_dataset.dtypes
categoricalCols = []
numericCols = []
# split the columns into categorical (string) and numeric ones
for ct in column_types:
    if ct[1] == 'string':
        categoricalCols += [ct[0]]
    else:
        numericCols += [ct[0]]
stages = []
# index each categorical column into a numeric "<col>Index" column
for categoricalCol in categoricalCols:
    stringIndexer = StringIndexer(inputCol=categoricalCol, outputCol=categoricalCol + "Index")
    stages += [stringIndexer]
# use a list comprehension instead of map() so this also works on Python 3,
# where map() returns an iterator that cannot be concatenated with a list
assemblerInputs = [c + "Index" for c in categoricalCols] + numericCols
assembler = VectorAssembler(inputCols=assemblerInputs, outputCol="features")
stages += [assembler]
labelIndexer = StringIndexer(inputCol='Rings', outputCol='indexedLabel')
stages += [labelIndexer]
dt = DecisionTreeClassifier(labelCol="indexedLabel", featuresCol="features")
evaluator = MulticlassClassificationEvaluator(labelCol='indexedLabel', predictionCol='prediction', metricName='f1')
paramGrid = (ParamGridBuilder()
             .addGrid(dt.maxDepth, [1, 2, 6])
             .addGrid(dt.maxBins, [20, 40])
             .build())
stages += [dt]
pipeline = Pipeline(stages=stages)
cv = CrossValidator(estimator=pipeline, estimatorParamMaps=paramGrid, evaluator=evaluator, numFolds=1)
cvModel = cv.fit(train_dataset)
train_dataset = cvModel.transform(train_dataset)
I'm running Spark locally in standalone mode. What could be wrong?
Thanks!
Answer 0 (score: 1)
So, the problem was setting the numFolds parameter of CrossValidator to 1. If I want to do parameter tuning with a ParamGrid but only a single train/test split, apparently I need to use TrainValidationSplit instead.
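For reference, here is a minimal sketch of what that replacement could look like, reusing the pipeline, paramGrid, and evaluator defined in the question (trainRatio=0.8 is just an illustrative choice; the default is 0.75):

from pyspark.ml.tuning import TrainValidationSplit

# TrainValidationSplit evaluates each parameter combination on a single
# train/validation split, so there is no numFolds parameter to misconfigure
tvs = TrainValidationSplit(estimator=pipeline,
                           estimatorParamMaps=paramGrid,
                           evaluator=evaluator,
                           trainRatio=0.8)  # 80% train, 20% validation
tvsModel = tvs.fit(train_dataset)
train_dataset = tvsModel.transform(train_dataset)

Alternatively, CrossValidator can be kept as long as numFolds is at least 2 (e.g. numFolds=3), since k-fold cross-validation needs at least two folds.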