Spark MLlib association rule confidence greater than 1.0

Time: 2017-02-20 01:14:24

Tags: apache-spark apache-spark-mllib

I am using Spark 2.0.2 to extract association rules from some data. When I got the results, I found some strange rules, such as the one below:

[MUJI, ROEM, 西单科技广场] => [Bauhaus], 2.0

The "2.0" is the confidence of the printed rule. Shouldn't the confidence, i.e. "the probability of the consequent given the antecedent", be less than or equal to 1.0?

1 Answer:

Answer 0 (score: 1)

KEY POINT: transactions != freqItemsets

Solution: use spark.mllib.FPGrowth instead; it accepts an RDD of transactions and computes the freqItemsets automatically.

Hi, I found the cause. This happens because my input FreqItemset data, freqItemsets, was wrong. Let me explain in detail. I use just three raw transactions, ("a"), ("a", "b", "c"), ("a", "b", "d"), and each of them has frequency 1.

At the beginning, I thought Spark would automatically compute the frequencies of the sub-itemsets, and the only thing I needed to do was create the freqItemsets like this (as the official example shows):

val freqItemsets = sc.parallelize(Seq(
  new FreqItemset(Array("a"), 1),
  new FreqItemset(Array("a", "b", "d"), 1),
  new FreqItemset(Array("a", "b", "c"), 1)
))

This is where it went wrong: the parameter of AssociationRules is the frequent itemsets, not the transactions, so I had confused the two definitions.
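To see why wrong input breaks the result, note that a rule's confidence is the ratio freq(antecedent ∪ consequent) / freq(antecedent). The sketch below (plain Scala, not the MLlib internals; the `freq` map and its numbers are made up for illustration) shows how inconsistent counts, where a subset is recorded with a smaller frequency than it really has, push that ratio above 1:

```scala
// Hypothetical inconsistent counts: the subset {a, b} is recorded with a
// smaller frequency than the superset {a, b, d}, which a real transaction
// count could never produce.
val freq = Map(
  Set("a", "b") -> 1L,
  Set("a", "b", "d") -> 2L
)

// confidence(X => Y) = freq(X ∪ Y) / freq(X)
def confidence(antecedent: Set[String], consequent: Set[String]): Double =
  freq(antecedent ++ consequent).toDouble / freq(antecedent)

// With inconsistent counts the "probability" exceeds 1:
println(confidence(Set("a", "b"), Set("d"))) // 2.0
```

With counts derived from actual transactions, every subset is at least as frequent as any superset, so the ratio can never exceed 1.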

From these three transactions, the freqItemsets should be:

new FreqItemset(Array("a"), 3), // "a" appears in all three transactions
new FreqItemset(Array("b"), 2), // "b" appears in two transactions
new FreqItemset(Array("c"), 1),
new FreqItemset(Array("d"), 1),
new FreqItemset(Array("a", "b"), 2), // "a" and "b" appear together twice
new FreqItemset(Array("a", "c"), 1),
new FreqItemset(Array("a", "d"), 1),
new FreqItemset(Array("b", "d"), 1),
new FreqItemset(Array("b", "c"), 1),
new FreqItemset(Array("a", "b", "d"), 1),
new FreqItemset(Array("a", "b", "c"), 1)
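
With these consistent counts, every rule's confidence lands in [0, 1], as a quick check in plain Scala shows (the `counts` map below just restates the frequencies listed above):

```scala
// Consistent counts taken from the corrected freqItemsets above.
val counts = Map(
  Set("a") -> 3L,
  Set("a", "b") -> 2L,
  Set("a", "b", "d") -> 1L
)

// {a, b} => {d}: freq(a, b, d) / freq(a, b) = 1 / 2
val confAbD = counts(Set("a", "b", "d")).toDouble / counts(Set("a", "b"))
// {a} => {b}: freq(a, b) / freq(a) = 2 / 3
val confAB = counts(Set("a", "b")).toDouble / counts(Set("a"))

println(confAbD) // 0.5
```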

You can do the counting yourself with the following code:

import org.apache.spark.mllib.fpm.FPGrowth.FreqItemset

val transactions = sc.parallelize(Seq(
  Array("a"),
  Array("a", "b", "c"),
  Array("a", "b", "d")
))

val freqItemsets = transactions
  // enumerate every non-empty sub-itemset of each transaction
  .flatMap(arr => (1 to arr.length).flatMap(i => arr.combinations(i)))
  // key by the sorted item sequence (Seq has structural equality,
  // so no JSON round-trip is needed)
  .map(items => (items.sorted.toSeq, 1L))
  .reduceByKey(_ + _)
  .map { case (items, count) => new FreqItemset(items.toArray, count) }


//then use freqItemsets like the example code
val ar = new AssociationRules()
  .setMinConfidence(0.8)
val results = ar.run(freqItemsets)
//....
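
The sub-itemset enumeration above can be verified without Spark; plain Scala collections reproduce the same counts for the three example transactions:

```scala
// Plain-Scala check of the counting logic (no Spark needed):
// enumerate all non-empty sub-itemsets per transaction, then count them.
val txns = Seq(Array("a"), Array("a", "b", "c"), Array("a", "b", "d"))

val counts: Map[Seq[String], Int] = txns
  .flatMap(arr => (1 to arr.length).flatMap(i => arr.combinations(i)))
  .map(_.sorted.toSeq)
  .groupBy(identity)
  .map { case (items, occurrences) => (items, occurrences.size) }

println(counts(Seq("a")))      // 3
println(counts(Seq("a", "b"))) // 2
```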

Or we can simply use FPGrowth instead of AssociationRules; it accepts an RDD of transactions directly.

import org.apache.spark.mllib.fpm.FPGrowth

val fpg = new FPGrowth()
  .setMinSupport(0.2)
  .setNumPartitions(10)
val model = fpg.run(transactions) // transactions is defined in the previous code

That's all.