FP-growth - Items in a transaction must be unique

Date: 2016-05-16 08:04:25

Tags: apache-spark pyspark apache-spark-mllib

I am running frequent pattern mining code on my machine. I used FP-growth, but pyspark raises an error and I don't know how to fix it. Could someone who uses pyspark help me?

First I load the data:

data = sc.textFile(somewhere)

This step runs without error. Then:

transactions = data.map(lambda line: line.strip().split(' '))

Next:

model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)

which throws the error:

An error occurred while calling o19.trainFPGrowthModel.:org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1.0 (TID 3, localhost): org.apache.spark.SparkException: Items in a transaction must be unique but got WrappedArray(,  ,  A,  ,  Seq,  0xBB20C554Ack,  0xE6A8BA01Win,  0x7D78TcpLen,  20).

My data looks like this:

 transactions.take(1)

[[u'03/07',
  u' 10',
  u' 22',
  u' 04.439824',
  u' 139',
  u' 1',
  u' 1',
  u' spp_sdf',
  u' SDFCombinationAlert',
  u' Classification',
  u' SenstiveData',
  u' Priority',
  u' 2',
  u' PROTO',
  u' 254',
  u' 197.218.177.69',
  u' 172.16.113.84']]

1 Answer:

Answer 0 (score: 6)

Well, the exception you get is pretty much self-explanatory. Each bucket passed to FP-growth has to contain a set of items, so it cannot contain duplicates. For example, this is not a valid input:

from pyspark.mllib.fpm import FPGrowth

transactions = sc.parallelize([["A", "A", "B", "C"], ["B", "C", "A", "A"]])
FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)
## Py4JJavaError: An error occurred while calling o71.trainFPGrowthModel.
## ...
## Caused by: org.apache.spark.SparkException: 
##   Items in a transaction must be unique but got WrappedArray(A, A, B, C).

You have to make sure the items are unique before you pass them in:

unique = transactions.map(lambda x: list(set(x))).cache()
FPGrowth.train(unique, minSupport=0.2, numPartitions=10)
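
Judging by the sample output in the question, the duplicates here likely come from the tokenization itself: splitting on a single space leaves empty strings and leading whitespace in the items (note the empty entries and ' A' in the WrappedArray of the error). A minimal cleanup sketch, assuming whitespace-separated fields (the regex pattern is an assumption, adjust it to your actual delimiter):

import re

# Split on runs of whitespace, drop empty tokens, then deduplicate
# each transaction before training (hypothetical cleanup)
cleaned = (data
           .map(lambda line: re.split(r'\s+', line.strip()))
           .map(lambda tokens: list(set(t for t in tokens if t)))
           .cache())

model = FPGrowth.train(cleaned, minSupport=0.2, numPartitions=10)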

Notes:

  • It is a good idea to cache the data before running FPGrowth.
  • Subjectively, FP-growth is probably not the best choice for the kind of data you have.
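
Once training succeeds, the frequent itemsets can be read back from the model via freqItemsets(). A small self-contained sketch of the round trip (toy data; the support threshold is arbitrary):

from pyspark import SparkContext
from pyspark.mllib.fpm import FPGrowth

sc = SparkContext.getOrCreate()

# Toy baskets; each basket is deduplicated before training
baskets = sc.parallelize([["A", "B", "C"], ["B", "C"], ["A", "B"]])
unique = baskets.map(lambda x: list(set(x))).cache()

model = FPGrowth.train(unique, minSupport=0.5, numPartitions=10)
for fi in model.freqItemsets().collect():
    print(fi.items, fi.freq)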