Suppose I have an RDD of Doubles and I want to "standardize" it as follows:

Can this be done efficiently and easily (without converting the RDD to an array of doubles at any stage)?

Thanks and regards,
Answer 0 (score: 5)
You can use the StandardScaler from Spark itself:
/**
* Standardizes features by removing the mean and scaling to unit variance
* using column summary
*/
import org.apache.spark.mllib.feature.StandardScaler
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD
val data: RDD[Vector] = ???
// withMean = true, withStd = true
val scaler = new StandardScaler(true, true).fit(data)
// transform returns a new RDD; a foreach would just discard the scaled vectors
val scaled: RDD[Vector] = scaler.transform(data)
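Since the question starts from an RDD of plain Doubles rather than RDD[Vector], one option is to wrap each value in a one-element dense vector before fitting the scaler. A minimal sketch (the names `doubles` and `asVectors` are illustrative, not from the original post):

import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.rdd.RDD

// Hypothetical starting point: the scalar RDD from the question
val doubles: RDD[Double] = ???

// Wrap each Double in a single-element dense vector so StandardScaler can consume it
val asVectors: RDD[Vector] = doubles.map(x => Vectors.dense(x))

This stays fully distributed, so the RDD is never collected into a local array at any stage.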