IllegalArgumentException: u'requirement failed' on kmeans.fit

Date: 2017-05-03 09:06:23

Tags: apache-spark dataframe pyspark k-means apache-spark-mllib

Working with Spark in a Zeppelin notebook, I have been running into this error since yesterday. Here is my code:

from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import VectorAssembler

df = sqlContext.table("rfmdata_clust")

k = 4

# Assemble the feature columns into a single 'features' vector column
vecAssembler = VectorAssembler(inputCols=["v1_clust", "v2_clust", "v3_clust"], outputCol="features")
featuresDf = vecAssembler.transform(df)

# Run KMeans
kmeans = KMeans().setInitMode("k-means||").setK(k)
model = kmeans.fit(featuresDf)
resultDf = model.transform(featuresDf)

# KMeans WSSSE
wssse = model.computeCost(featuresDf)
print("Within Set Sum of Squared Errors = " + str(wssse))

Here is the error:

Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-8890997346928959256.py", line 346, in <module>
    raise Exception(traceback.format_exc())
Exception: Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-8890997346928959256.py", line 334, in <module>
    exec(code)
  File "<stdin>", line 8, in <module>
  File "/usr/lib/spark/python/pyspark/ml/base.py", line 64, in fit
    return self._fit(dataset)
  File "/usr/lib/spark/python/pyspark/ml/wrapper.py", line 236, in _fit
    java_model = self._fit_java(dataset)
  File "/usr/lib/spark/python/pyspark/ml/wrapper.py", line 233, in _fit_java
    return self._java_obj.fit(dataset._jdf)
  File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/lib/spark/python/pyspark/sql/utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
IllegalArgumentException: u'requirement failed'

The line throwing the error is kmeans.fit(). I checked the rfmdata_clust dataframe and nothing about it looks unusual.

df.printSchema() gives:

root
 |-- id: string (nullable = true)
 |-- v1_clust: double (nullable = true)
 |-- v2_clust: double (nullable = true)
 |-- v3_clust: double (nullable = true)

featuresDf.printSchema() gives:

root
 |-- id: string (nullable = true)
 |-- v1_clust: double (nullable = true)
 |-- v2_clust: double (nullable = true)
 |-- v3_clust: double (nullable = true)
 |-- features: vector (nullable = true)

Another interesting point is that adding featuresDf = featuresDf.limit(10000) right after the definition of featuresDf makes the code run without errors. Maybe it is related to the size of the data?
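A quick way to tell a size problem apart from a bad-row problem is to fit on a random sample of comparable size instead of the first rows; a minimal sketch, reusing kmeans and featuresDf from above:

# limit(10000) takes the first rows and may simply be skipping a bad
# record further down; a random sample keeps the volume high while
# changing which rows are included
sampledDf = featuresDf.sample(withReplacement=False, fraction=0.5, seed=42)
model = kmeans.fit(sampledDf)

If the random sample fails while limit(10000) succeeds, that points to a specific bad row rather than to the data size.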

1 answer:

Answer 0 (score: 2)

Hopefully this has been solved already; if not, try this:

    df = df.na.fill(1)

This fills every NaN with 1 (you can of course pick any other value); note that the fill needs to happen before vecAssembler.transform so that the features column is rebuilt from the cleaned data. The error is due to the fact that you have NaNs in your feature vector. You may need to import the necessary packages. This should also help.
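To confirm the NaNs before filling them, a minimal check along these lines (feature column names taken from the question; the feature_cols list is just a convenience) counts nulls and NaNs per column:

    from pyspark.sql import functions as F

    feature_cols = ["v1_clust", "v2_clust", "v3_clust"]

    # Count NULL/NaN entries per feature column; any nonzero count here
    # explains the u'requirement failed' raised by kmeans.fit
    df.select([
        F.count(F.when(F.col(c).isNull() | F.isnan(F.col(c)), c)).alias(c)
        for c in feature_cols
    ]).show()

    # Dropping the offending rows is an alternative to filling them;
    # Spark's na functions treat NaN like null for double columns
    clean_df = df.na.drop(subset=feature_cols)

Whether filling with a constant or dropping the rows is the better choice depends on what the missing values mean for the clustering.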

Let me know if this fails.