Applying PCA to specific columns with Apache Spark

Date: 2017-06-01 12:14:20

Tags: scala apache-spark apache-spark-mllib

I am trying to apply PCA to a dataset that has a header row and several fields. Below is the code I am using. Any help with selecting only specific columns to apply PCA on would be appreciated.

import org.apache.spark.mllib.linalg.{Matrix, Vectors}
import org.apache.spark.mllib.linalg.distributed.RowMatrix

// Parse every line of the CSV as a dense vector of Doubles
val inputMatrix = sc.textFile("C:/Users/mhattabi/Desktop/Realase of 01_06_2017/TopDrive_WithoutConstant.csv").map { line =>
  val values = line.split(",").map(_.toDouble)
  Vectors.dense(values)
}

val mat: RowMatrix = new RowMatrix(inputMatrix)
val pc: Matrix = mat.computePrincipalComponents(4)
// Project the rows to the linear space spanned by the top 4 principal components.

val projected: RowMatrix = mat.multiply(pc)
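To restrict the RDD-based pipeline above to specific columns, one option is to pick the wanted field indices while parsing each line (and to drop the header line, which would otherwise fail the `toDouble` conversion). This is a minimal sketch; the column indices 0, 2 and 3 are assumptions for illustration, not taken from the question:

```scala
// Sketch: keep only selected columns (indices are illustrative) and
// skip the header row before building the RowMatrix.
val raw = sc.textFile("C:/Users/mhattabi/Desktop/Realase of 01_06_2017/TopDrive_WithoutConstant.csv")
val headerLine = raw.first()
val selected = raw
  .filter(_ != headerLine)              // drop the header line
  .map { line =>
    val fields = line.split(",")
    // pick only the columns you want PCA applied to
    Vectors.dense(Array(0, 2, 3).map(i => fields(i).toDouble))
  }
val selectedMat = new RowMatrix(selected)
```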

Updated version: I tried the following.

import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.feature.{PCA, RFormula}

val spark = SparkSession.builder.master("local").appName("my-spark-app").getOrCreate()

val columnsToUse: Seq[String] = Seq("Col0", "Col1", "Col2", "Col3", "Col4")
val k: Int = 2

val df = spark.read.format("csv").options(Map("header" -> "true", "inferSchema" -> "true")).load("C:/Users/mhattabi/Desktop/donnee/cassandraTest_1.csv")

val rf = new RFormula().setFormula(s"~ ${columnsToUse.mkString(" + ")}")
val pca = new PCA().setInputCol("features").setOutputCol("pcaFeatures").setK(k)

val featurized = rf.fit(df).transform(df)
// principal components
val principalComponent = pca.fit(featurized).transform(featurized)
principalComponent.select("pcaFeatures").show(4,false)

+-----------------------------------------+
|pcaFeatures                              |
+-----------------------------------------+
|[-0.536798281241379,0.495499034754084]   |
|[-0.32969328815797916,0.5672811417154808]|
|[-1.32283465170085,0.5982789033642704]   |
|[-0.6199718696225502,0.3173072633712586] |
+-----------------------------------------+

This gives me the principal components. I would like to save this result to a CSV file and add a header. Any help would be much appreciated.
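One way to save the result with a header is to expand the `pcaFeatures` vector column into plain `Double` columns first, since Spark's CSV writer cannot serialize a `Vector` column directly. This is a sketch assuming `k = 2`; the column names `pc1`/`pc2` and the output path are illustrative:

```scala
import org.apache.spark.ml.linalg.Vector
import spark.implicits._

// Expand the Vector column into scalar columns so it can be written as CSV.
val flattened = principalComponent
  .select("pcaFeatures")
  .map { row =>
    val v = row.getAs[Vector](0)
    (v(0), v(1))                        // k = 2 components
  }
  .toDF("pc1", "pc2")                   // illustrative header names

flattened.write
  .option("header", "true")
  .csv("C:/Users/mhattabi/Desktop/donnee/pca_output")
```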

Many thanks.

2 answers:

Answer 0 (score: 2):

In this case, you can use RFormula.
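As a minimal sketch of what the answer suggests: RFormula assembles the columns listed on the right-hand side of the formula into a single `features` vector column, which the `PCA` estimator can then consume. The column names here are taken from the question:

```scala
import org.apache.spark.ml.feature.RFormula

// Build a "features" vector column from the selected input columns.
val formula = new RFormula().setFormula("~ Col0 + Col1 + Col2 + Col3 + Col4")
val withFeatures = formula.fit(df).transform(df)
// withFeatures now has a "features" column ready for PCA.setInputCol("features")
```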

Answer 1 (score: 0):

java.lang.NumberFormatException: For input string: "DateTime"

This means there is a value DateTime in your input file which you are then trying to convert to a Double.

It is probably somewhere in the header of your input file.
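Following this answer's diagnosis, a common fix for the RDD-based snippet is to filter out the header line before parsing values as Doubles, for example:

```scala
// Skip the header line so column names like "DateTime" are never
// passed to toDouble, avoiding the NumberFormatException.
val raw = sc.textFile("C:/Users/mhattabi/Desktop/Realase of 01_06_2017/TopDrive_WithoutConstant.csv")
val headerLine = raw.first()
val data = raw.filter(_ != headerLine).map { line =>
  Vectors.dense(line.split(",").map(_.toDouble))
}
```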