PySpark and MLlib: Random forest predictions are always 0

Asked: 2018-09-12 21:28:30

Tags: python pyspark apache-spark-mllib random-forest

I am training a random forest classifier with PySpark's MLlib, but every prediction comes back as 0. This is my code:

import pandas as pd
from pyspark.sql import SparkSession
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.tree import RandomForest

# Load the CSV with pandas and convert it to a Spark DataFrame
df = pd.read_csv(r'main.csv', header=0)
spark = SparkSession \
    .builder \
    .master("local") \
    .appName("myapp") \
    .getOrCreate()
s_df = spark.createDataFrame(df)

# First column is the label, the remaining columns are the features
transformed_df = s_df.rdd.map(lambda row: LabeledPoint(row[0], Vectors.dense(row[1:])))

# 70/30 train/test split with a fixed seed
splits = [0.7, 0.3]
training_data, test_data = transformed_df.randomSplit(splits, 100)

# Train a binary random forest classifier
model = RandomForest.trainClassifier(training_data, numClasses=2, categoricalFeaturesInfo={},
                                     numTrees=3, featureSubsetStrategy="auto",
                                     impurity='gini', maxDepth=4, maxBins=32)

predictions = model.predict(test_data.map(lambda x: x.features))
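A quick way to line the predictions up with the true labels is the usual MLlib zip pattern (labels_and_predictions is just an illustrative name, not part of the pipeline above):

# Pair each test example's true label with the model's prediction
labels_and_predictions = test_data.map(lambda lp: lp.label).zip(predictions)
print(labels_and_predictions.take(10))

# Fraction of test examples that were misclassified
test_err = labels_and_predictions.filter(lambda lp: lp[0] != lp[1]).count() / float(test_data.count())
print('Test Error = ' + str(test_err))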

When I print test_data.map(lambda x: x.features), the result is

[DenseVector([1431500000.0, 9.3347, 79.8337, 44.6364, 194.0, 853.0, 196.9998]),
 DenseVector([1431553600.0, 9.5484, 80.7409, 39.5968, 78.0, 923.0, 196.9994]), ...]

so the values inside each DenseVector([...]) are the correct features to predict on.

But the predictions all come out as 0:

[0.0, 0.0, 0.0, 0.0, 0.0, ...]
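A minimal diagnostic sketch (my own addition, assuming the label really is the first CSV column as above) to check whether both splits actually contain both classes:

# Count how many examples of each label (0.0 / 1.0) ended up in each split
print(training_data.map(lambda lp: lp.label).countByValue())
print(test_data.map(lambda lp: lp.label).countByValue())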

0 Answers:

No answers yet.