TypeError: data should be an RDD of LabeledPoint, but got &lt;type 'numpy.ndarray'&gt;

Asked: 2017-12-20 18:32:28

Tags: python numpy apache-spark pyspark

I am getting the error:

TypeError: data should be an RDD of LabeledPoint, but got <type 'numpy.ndarray'>

when executing:

import sys
import numpy as np
from pyspark import SparkConf, SparkContext
from pyspark.mllib.classification import LogisticRegressionWithSGD


conf = (SparkConf().setMaster("local")
.setAppName("Logistic Regression")
.set("spark.executor.memory", "1g"))
sc = SparkContext(conf=conf)


def mapper(line):
    feats = line.strip().split(",") 
    label = feats[len(feats) - 1]       # Last column is the label
    feats = feats[2: len(feats) - 1]    # remove id and type column
    feats.insert(0,label)
    features = [ float(feature) for feature in feats ] # need floats
    return np.array(features)

data = sc.textFile("test.csv")
parsedData = data.map(mapper)

# Train model
model = LogisticRegressionWithSGD.train(parsedData)

The error is raised on the line model = LogisticRegressionWithSGD.train(parsedData).

parsedData should already be an RDD, so I don't understand why I'm getting this error.

GitHub link to the full source code

1 Answer:

Answer 0 (score: 0):


> parsedData should already be an RDD, so I don't understand why I'm getting this error.

The problem is not that parsedData isn't an RDD; the problem is what it stores. As the message says, an RDD[LabeledPoint] is required, but you are passing an RDD[numpy.ndarray]. Have the mapper return a LabeledPoint instead of a raw array:

from pyspark.mllib.regression import LabeledPoint

def mapper(line):
    ...
    return LabeledPoint(label, features)
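To illustrate the fix concretely, here is a minimal sketch of the parsing logic the corrected mapper would use, assuming the same CSV layout as the question (id, type, feature columns, then the label last). The Spark-independent helper below is hypothetical; in the actual job, each (label, features) pair it returns would be wrapped as LabeledPoint(label, features) before training:

```python
# Parse one CSV line into (label, features), assuming the column
# layout from the question: id, type, feature_1..feature_n, label.
def parse_line(line):
    feats = line.strip().split(",")
    label = float(feats[-1])                     # last column is the label
    features = [float(f) for f in feats[2:-1]]   # drop id, type, and label
    return label, features
```

Inside the Spark job, the mapper would then simply do `return LabeledPoint(*parse_line(line))`, which gives LogisticRegressionWithSGD.train the RDD[LabeledPoint] it expects.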