I'm new to Scala/Spark, but I need to use it to develop my bachelor's final project.
I'm trying to build a K-means algorithm from this data. The data comes from Kaggle: https://www.kaggle.com/murderaccountability/homicide-reports
I read the file with the data and create a case class like this:
case class CrimeReport (Record_ID: String, Agency_Name: String,
City: String, State: String, Year: Int, Month: Int, Crime_Type: String,
Crime_Solved: String, Victim_Sex: String, Victim_Age: Int, Victim_Race: String,
Perpetrator_Sex: String, Perpetrator_Age: String, Perpetrator_Race: String, Relationship: String, Victim_Count: String)
I map the data into the case class. For example, Month comes in as a String and I need an Int (to later build my feature vector), so I defined a function to parse it:
// Parse Month: String ===> Int
def parseMonthToNumber(month: String): Int = month match {
  case "January"   => 1
  case "February"  => 2
  case "March"     => 3
  case "April"     => 4
  case "May"       => 5
  case "June"      => 6
  case "July"      => 7
  case "August"    => 8
  case "September" => 9
  case "October"   => 10
  case "November"  => 11
  case _           => 12
}
val data = sc.textFile("... .csv")
val data_split = data.map(line => line.split(","))
val allData = data_split.map(p => CrimeReport(p(0).toString,
p(1).toString, p(2).toString, p(3).toString, parseInt(p(4)),
parseMonthToNumber(p(5)), p(6).toString, p(7).toString, p(8).toString,
parseInt(p(9)), p(10).toString, p(11).toString, p(12).toString,
p(13).toString, p(14).toString, p(15).toString))
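Note that parseInt is not a built-in Scala function; the post presumably relies on a helper along these lines (a hypothetical sketch, not shown in the original):

import scala.util.Try

// Hypothetical helper: parse a numeric field, falling back to 0 for
// malformed values such as "Unknown" or an empty string.
def parseInt(s: String): Int = Try(s.trim.toInt).getOrElse(0)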
//DataFrame
val allDF = allData.toDF()
//convert data to RDD which will be passed to KMeans
val rowsRDD = allDF.rdd.map( x =>
(x(0).getString, x.getString(1), x.getString(2), x.getString(3), x(4).getInt, x(5).getInt, x.getString(6), x.getString(7), x.getString(8), x(9).getInt, x.getString(10), x.getString(11), x.getString(12), x.getString(13), x.getString(14), x.getString(15))
)
But then I get this error:
error: value getInt is not a member of Any
(x(0).getString, x.getString(1), x.getString(2), x.getString(3), x(4).getInt, x(5).getInt, x.getString(6), x.getString(7), x.getString(8), x(9).getInt, x.getString(10), x.getString(11), x.getString(12), x.getString(13), x.getString(14), x.getString(15))
^
Why?
Answer 0 (score: 2)
I'm assuming the latest version of Spark, which is 2.1.1.
Let me first ask you a question: why do you convert the DataFrame to an RDD[Row] to run KMeans, when there is a DataFrame-based KMeans implementation in Spark? I wouldn't do that, since Spark MLlib's RDD-based API is deprecated:
This page documents sections of the MLlib guide for the RDD-based API (the spark.mllib package). Please see the MLlib Main Guide for the DataFrame-based API (the spark.ml package), which is now the primary API for MLlib.
With that said, let's look at the problem you're facing.
If I were you (and setting aside the advice to stick with Spark MLlib's DataFrame-based API), I would do the following:
// val allDF = allData.toDF()
val allDF = allData.toDS
With the above, you have a Dataset[CrimeReport], which is much more pleasant to work with than plain Rows.
Once the conversion is done, you can do
val rowsRDD = allDF.rdd.map { x => ... }
where x is of your CrimeReport type, and I trust you'll know how to handle it from there.
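For illustration, here is a minimal sketch of the DataFrame-based route (assuming Spark 2.1, a SparkSession named spark with spark.implicits._ imported, and the Int-typed columns from the case class above):

import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler

// Assemble the numeric columns into a single "features" vector column.
val assembler = new VectorAssembler()
  .setInputCols(Array("Year", "Month", "Victim_Age"))
  .setOutputCol("features")
val features = assembler.transform(allDF)

// DataFrame-based KMeans from spark.ml: 2 clusters, fixed seed.
val model = new KMeans().setK(2).setSeed(1L).fit(features)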
To answer your question directly, the reason for the error
error: value getInt is not a member of Any
is that x(5) (and the others) are of type Any, so you have to either cast the value to your type, or simply replace x(5) with x.getInt(5).
See the scaladoc for Row.
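For illustration, both fixes on a single field (a minimal sketch, where x is a Row):

val year1 = x.getInt(4)            // typed getter on Row
val year2 = x(4).asInstanceOf[Int] // explicit cast from Any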
Answer 1 (score: 0)
How do we use KMeans when the fields in the case class are of type String rather than Double? This code won't work, because Vectors.dense expects Doubles.
// Passing in Crime_Type, Crime_Solved, Perpetrator_Race to KMeans as
// the attributes we want to use to assign the instance to a cluster.
val vectors = allDF.rdd.map(r => Vectors.dense( r.Crime_Type, r.Crime_Solved, r.Perpetrator_Race ))
//KMeans model with 2 clusters and 10 iterations
val kMeansModel = KMeans.train(vectors, 2, 10)
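One alternative not covered in these answers is to index categorical strings with spark.ml's StringIndexer instead of hand-written parsers; a sketch, assuming allDF is the DataFrame from the question:

import org.apache.spark.ml.feature.StringIndexer

// Map each distinct Crime_Type string to a numeric label column.
val indexer = new StringIndexer()
  .setInputCol("Crime_Type")
  .setOutputCol("Crime_Type_Idx")
val indexed = indexer.fit(allDF).transform(allDF)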
Answer 2 (score: 0)
You should define the attributes you want to pass to Vectors.dense as Int/Double.
Then, when you map the data from the file into the case class, call the parsing functions defined earlier, as you can see here:
val data_split = data.map(line => line.split(","))
val allData = data_split.map(p =>
  CrimeReport(p(0).toString, p(1).toString, p(2).toString, p(3).toString,
    parseInt(p(4)), parseMonthToNumber(p(5)), p(6).toString, parseSolved(p(7)),
    parseSex(p(8)), parseInt(p(9)), parseRaceToNumber(p(10)), p(11).toString,
    p(12).toString, p(13).toString, p(14).toString, p(15).toString))
The functions are:
// Filter and clean data => Crime Solved
def parseSolved(solved: String): Int = solved match {
  case "Yes" => 1
  case _     => 0
}
Or:
// Parse Victim_Race: String ===> Int
def parseRaceToNumber(crType: String): Int = {
  val race = crType.split("/")
  race(0) match {
    case "White"           => 1
    case "Black"           => 2
    case "Asian"           => 3
    case "Native American" => 4
    case _                 => 0
  }
}
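Putting it together (a sketch assuming the case class fields above have been redefined as Int, and that allData is the RDD[CrimeReport] built from the file):

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Build feature vectors from the now-numeric fields and train the
// RDD-based KMeans with 2 clusters and 10 iterations, as in the question.
val vectors = allData.map(r =>
  Vectors.dense(r.Crime_Solved.toDouble, r.Victim_Race.toDouble, r.Victim_Age.toDouble))
vectors.cache()
val kMeansModel = KMeans.train(vectors, 2, 10)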