I am trying to get some frequent item sets and association rules out of Spark MLlib, using Scala. But I actually get nothing, not even an error. The code (a Spark / Databricks notebook) and the data input file can be found at code and here.
The algorithm finds no frequent item sets and/or association rules, but there is evidence that this result is wrong. I did the same thing mainly in KNIME (a non-programming analytics platform), using the Borgelt algorithm for association rule learning, and there I got a mapping of antecedents to consequents with lift and all the other desired ratios. In Spark with Scala, however, I get nothing.
%scala
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.fpm.{AssociationRules, FPGrowth}
import org.apache.spark.mllib.fpm.FPGrowth.FreqItemset

// load the data: one basket per line, items separated by spaces
val data = sc.textFile("/FileStore/tables/onlinePurchasedProducts.txt")
val onlineTrx: RDD[Array[String]] = data.map(s => s.trim.split(' '))

println("Read: " + onlineTrx.count() + " online baskets")

// check what the transactions look like
// (toDF() relies on spark.implicits._, which Databricks notebooks import automatically)
val dataframe = onlineTrx.toDF()
println("Schema of transactions looks like: ")
dataframe.printSchema()
println("Content of transactions looks like: ")
dataframe.show()

// run FP-Growth with a minimum support of 0.2
val fpg = new FPGrowth()
val model = fpg
  .setMinSupport(0.2)
  .setNumPartitions(1)
  .run(onlineTrx)

// print every frequent itemset with its frequency
model.freqItemsets.collect().foreach { itemset =>
  println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq)
}

// print every association rule with confidence >= 0.4
model.generateAssociationRules(0.4).collect().foreach { rule =>
  println(s"${rule.antecedent.mkString("[", ",", "]")}=> " +
    s"${rule.consequent.mkString("[", ",", "]")},${rule.confidence}")
}
The output of this code is:
Read: 42897 online baskets
Schema of transactions looks like:
root
|-- value: array (nullable = true)
| |-- element: string (containsNull = true)
Content of transactions looks like:
+--------------------+
| value|
+--------------------+
| [34502, 70312]|
| [44247]|
| [45127]|
| [79560]|
| [74801]|
| [15500]|
| [74801]|
| [31149, 78707]|
| [74801]|
| [40774]|
| [76675]|
|[26507, 26638, 33...|
| [74801]|
| [78707]|
| [74801]|
| [21253]|
| [74801]|
|[75729, 10899, 26...|
| [24834]|
| [74801]|
+--------------------+
only showing top 20 rows
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.fpm.AssociationRules
import org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
data: org.apache.spark.rdd.RDD[String]=
/FileStore/tables/onlinePurchasedProducts.txt MapPartitionsRDD[150] at
textFile at command-4263745371438753:8
onlineTrx: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[151] at map at command-4263745371438753:9
dataframe: org.apache.spark.sql.DataFrame = [value: array<string>]
fpg: org.apache.spark.mllib.fpm.FPGrowth = org.apache.spark.mllib.fpm.FPGrowth@23fd0c4
model: org.apache.spark.mllib.fpm.FPGrowthModel[String] = org.apache.spark.mllib.fpm.FPGrowthModel@41278271
Any ideas would be appreciated.
Answer 0 (score: 0)
The posted code runs perfectly; the reason it produced no results is the minimum support parameter that was passed. With 42 897 baskets, a minimum support of 0.2 requires an itemset to appear in roughly 8 600 baskets. If you set the minimum support to a much lower level, the code works and produces results.
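A minimal sketch of that change, reusing the question's onlineTrx RDD (the value 0.001, i.e. roughly 43 baskets, is an illustrative assumption and not the exact threshold used for the results below):

// re-run FP-Growth with a lower minimum support; everything else stays as in the question
val lowSupportModel = new FPGrowth()
  .setMinSupport(0.001) // assumed, illustrative threshold
  .setNumPartitions(1)
  .run(onlineTrx)

// frequent itemsets with their frequencies
lowSupportModel.freqItemsets.collect().foreach { itemset =>
  println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq)
}

// association rules with confidence >= 0.4
lowSupportModel.generateAssociationRules(0.4).collect().foreach { rule =>
  println(s"${rule.antecedent.mkString("[", ",", "]")}=> " +
    s"${rule.consequent.mkString("[", ",", "]")},${rule.confidence}")
}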
Part of the results shown are:
[70423,70422], 123
[70423,70422,70800], 106
[70423,70800], 138
[45005], 400
[37991], 56
[33759], 73
[22024], 57
[34420], 46
[45132], 69
[78515], 53
[11407], 51
[54431], 60
[54432], 55
[35431], 58
[17488], 54
[82885], 45
[99678], 47
[70312], 791
[22087], 44
[70424,70425]=> [70800],0.825
[70425,70422]=> [70800],0.8533333333333334
[52570]=> [52577],0.6129032258064516
[70423,70800]=> [70422],0.7681159420289855
[70423,70422]=> [70800],0.8617886178861789
[26634]=> [26633],0.4909090909090909
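To read these rules: the confidence is the support of the full itemset divided by the support of the antecedent. For example, [70423,70422] appears in 123 baskets and [70423,70422,70800] in 106, so the rule [70423,70422]=> [70800] has confidence 106 / 123 ≈ 0.8618, i.e. about 86% of baskets containing both 70423 and 70422 also contain 70800.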