FP-Growth algorithm in Spark using a Hive table

Date: 2017-01-17 11:14:15

Tags: scala apache-spark hive apache-spark-mllib

Below is my code for generating frequent itemsets from a Hive table:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.Row
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import org.apache.spark.mllib.fpm.FPGrowth

val sparkConf = new SparkConf().setAppName("Recommender").setMaster("local")
val sc = new SparkContext(sparkConf)
val hiveContext = new HiveContext(sc)
import hiveContext.implicits._
import hiveContext.sql

// Single string column holding the comma-separated item list.
val schema = new StructType(Array(
  StructField("col1", StringType, false)
))

val dataRow = hiveContext.sql("select col1 from hive_table limit 100000").cache()
val dataRDD = hiveContext.createDataFrame(dataRow.rdd, schema).cache()
dataRDD.show()

// Split each row's comma-separated string into an array of items.
val transactions = dataRDD.map((row: Row) => {
  val stringarray = row.getAs[String](0).split(",")
  var arr = new Array[String](stringarray.length)
  for (a <- 0 to arr.length - 1) {
    arr(a) = stringarray(a)
  }
  arr
})

val fpg = new FPGrowth().setMinSupport(0.1).setNumPartitions(10)
val model = fpg.run(transactions)
val size: Double = transactions.count()
println("MODEL FreqItemCount " + model.freqItemsets.count())
println("Transactions count : " + size)

But FreqItemCount is always 0.

The input query results look like this:

270035_1,249134_1,929747_1
259138_1,44072_1,326046_1
385448_1,747230_1,74440_1,68096_1,610434_1,215589_3,999507_1,74439_1,36260_1,925018_1,588394_1,986622_1,64585_1,942893_1,5421_1,37041_1,52500_1,4925_1,553613           415353_1,600036_1,75955_1
693780_1,31379_1
465624_1,28993_1,1899_2,823631_1
667863_1,95623_3,345830_8,168966_1
837337_1,95586_1,350341_1,67379_1,837347_1,20556_1,17567_1,77713_1,361216_1,39535_1,525748_1,646241_1,346425_1,219266_1,77717_1,179382_3,702935_1
249882_1,28977_1
78025_1,113415_1,136718_1,640967_1,787444_1
193307_1,266303_1,220199_2,459193_1,352411_1,371579_1,45906_1,505334_1,9816_1,12627_1,135294_1,28182_1,132470_1
526260_1,305646_1,65438_1
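
For reference, a minimal standalone sketch (not part of the original question) of how the map in the code above turns one of these lines into a transaction array, assuming the line is the value of col1:

// Hypothetical check of the split logic used in the question's map,
// applied to the first sample line from the query output.
val sampleCol1 = "270035_1,249134_1,929747_1"
val items: Array[String] = sampleCol1.split(",")
// items == Array("270035_1", "249134_1", "929747_1")
items.foreach(println)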

But when I execute the code with the following hard-coded input, I get proper frequent itemsets:

val transactions = sc.parallelize(Seq(
  Array("Tuna", "Banana", "Strawberry"),
  Array("Melon", "Milk", "Bread", "Strawberry"),
  Array("Melon", "Kiwi", "Bread"),
  Array("Bread", "Banana", "Strawberry"),
  Array("Milk", "Tuna", "Tomato"),
  Array("Pepper", "Melon", "Tomato"),
  Array("Milk", "Strawberry", "Kiwi"),
  Array("Kiwi", "Banana", "Tuna"),
  Array("Pepper", "Melon")
))
Can you tell me what I am doing wrong? I am using Spark 1.6.2 with Scala 2.10.

1 Answer:

Answer 0 (score: 0)

It looks like the root of the problem is that the support threshold (0.1) is too high. A value that high is unlikely to be reached in real-life transaction data. Try decreasing it gradually until you start getting rules.
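
minSupport is a fraction of the total transaction count, so with roughly 100,000 transactions a threshold of 0.1 means an itemset must occur in about 10,000 transactions before FP-Growth reports it, which is very unlikely for sparse item IDs like the ones shown. Below is a minimal sketch of re-running the model with a much lower threshold, assuming the transactions RDD built in the question; the 1e-4 value is an arbitrary starting point, not a figure from the answer:

import org.apache.spark.mllib.fpm.FPGrowth

// `transactions` is the RDD[Array[String]] built from the Hive table above.
// The absolute occurrence threshold is minSupport * transactions.count().
val lowSupport = 1e-4   // hypothetical starting value; tune for your data
val fpgLow = new FPGrowth()
  .setMinSupport(lowSupport)
  .setNumPartitions(10)
val modelLow = fpgLow.run(transactions)
println("FreqItemCount at minSupport=" + lowSupport + ": " + modelLow.freqItemsets.count())

This also explains why the hard-coded example works with the same 0.1 threshold: with only 9 transactions, an item needs to appear just once (0.1 × 9 = 0.9) to count as frequent.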