pyspark-distributed-kmodes lib error

Asked: 2017-05-09 17:28:59

Tags: python-2.7 apache-spark pyspark apache-zeppelin

I am trying to run the pyspark-distributed-kmodes example:

import numpy as np
data = np.random.choice(["a", "b", "c"], (50000, 10))
data2 = np.random.choice(["e", "f", "g"], (50000, 10))
data = list(data) + list(data2)

from random import shuffle
shuffle(data)

# Create a Spark RDD from our sample data and decrease partitions to max_partitions
max_partitions = 32

rdd = sc.parallelize(data)
rdd = rdd.coalesce(max_partitions)

for x in rdd.take(10):
    print x

method = EnsembleKModes(n_clusters, max_iter)
model = method.fit(df.rdd)

print(model.clusters)
print(method.mean_cost)

predictions = method.predictions
datapoints = method.indexed_rdd
combined = datapoints.zip(predictions)
print(combined.take(10))

model.predict(rdd).take(5)

I am using Python 2.7, Apache Zeppelin 0.7.1, and Apache Spark 2.1.0.

Here is the error output:

('Iteration ', 0)

Traceback (most recent call last):
      File "/tmp/zeppelin_pyspark-1298251609305129154.py", line 349, in <module>
        raise Exception(traceback.format_exc())
    Exception: Traceback (most recent call last):
      File "/tmp/zeppelin_pyspark-1298251609305129154.py", line 337, in <module>
        exec(code)
      File "<stdin>", line 13, in <module>
      File "/usr/local/lib/python2.7/dist-packages/pyspark_kmodes/pyspark_kmodes.py", line 430, in fit
        self.n_clusters,self.max_dist_iter)
      File "/usr/local/lib/python2.7/dist-packages/pyspark_kmodes/pyspark_kmodes.py", line 271, in k_modes_partitioned
        clusters = check_for_empty_cluster(clusters, rdd)
      File "/usr/local/lib/python2.7/dist-packages/pyspark_kmodes/pyspark_kmodes.py", line 317, in check_for_empty_cluster
        random_element = random.choice(clusters[biggest_cluster].members)
      File "/usr/lib/python2.7/random.py", line 275, in choice
        return seq[int(self.random() * len(seq))]  # raises IndexError if seq is empty
    IndexError: list index out of range

The RDD used to fit the model is not empty; I have checked that. I suspect a version incompatibility between pyspark-distributed-kmodes and Spark, but I cannot downgrade Spark.
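For context, the `IndexError` at the bottom of the traceback comes from `random.choice` being called on an empty `members` list inside `check_for_empty_cluster`, which can happen even when the input RDD itself is non-empty. A minimal Spark-free sketch of the failing pattern (the function name `pick_replacement` is illustrative, not from the library):

```python
import random

def pick_replacement(members):
    # random.choice raises IndexError on an empty sequence,
    # which is exactly the failure seen in check_for_empty_cluster.
    return random.choice(members)

# A cluster with members works fine:
print(pick_replacement(["a", "b", "c"]))

# An empty members list reproduces the crash from the traceback:
try:
    pick_replacement([])
except IndexError as e:
    print("IndexError:", e)
```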

Any idea how to solve this?

1 answer:

Answer 0 (score: 0)

What is df? This does not look like a Spark error. The code from https://github.com/ThinkBigAnalytics/pyspark-distributed-kmodes works for me under Spark 2.1.0. It even works after I changed these lines from your code:

method = EnsembleKModes(n_clusters, max_iter)
model = method.fit(rdd)
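One plausible difference: `df.rdd` yields pyspark `Row` objects, while the example's `rdd` holds plain sequences of strings, and the two record types need not behave identically inside the clustering code. If you do want to fit from a DataFrame, one hedged workaround is to map each `Row` to a plain list first (e.g. `df.rdd.map(list)`). A Spark-free sketch of that conversion, using `collections.namedtuple` as a stand-in for a `Row` (illustrative only; the real `Row` type may differ in other ways):

```python
from collections import namedtuple

# Stand-in for a DataFrame row; pyspark's Row is also a tuple-like record.
Row = namedtuple("Row", ["c0", "c1", "c2"])
records = [Row("a", "b", "c"), Row("e", "f", "g")]

# Converting each record to a plain list mirrors what
# df.rdd.map(list) would do before calling method.fit(...).
plain = [list(r) for r in records]
print(plain)  # [['a', 'b', 'c'], ['e', 'f', 'g']]
```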